
BERT-mini Fine-tuned on QNLI

Developed by M-FAC
This model is a text classification model based on the BERT-mini architecture, fine-tuned on the QNLI dataset using the M-FAC second-order optimizer.
Downloads 11.93k
Released: 3/2/2022

Model Overview

The model targets Question-Answering Natural Language Inference (QNLI): given a question and a sentence, it predicts whether the sentence contains the answer. It was fine-tuned with the M-FAC second-order optimizer, reaching accuracy close to that of Adam-based fine-tuning while offering theoretical advantages.

Model Features

M-FAC second-order optimization
Uses a matrix-free approximate second-order optimization method, offering theoretical advantages over the traditional first-order Adam optimizer
Lightweight architecture
Based on the BERT-mini architecture, with far fewer parameters than full-size BERT while maintaining good performance
Robust performance
Stable results on QNLI, with a standard deviation of only 0.13 accuracy points across five fine-tuning runs

Model Capabilities

Text classification
Natural language inference
Question-answering system support
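The capabilities above can be sketched with a short inference example. This is a minimal sketch using the Hugging Face `transformers` library; the Hub model id is assumed from this page's title and should be verified, and the label mapping follows the standard GLUE QNLI convention (0 = entailment, 1 = not_entailment).

```python
# Minimal QNLI inference sketch (model id assumed; verify on the Hugging Face Hub).
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_ID = "M-FAC/bert-mini-finetuned-qnli"  # assumed Hub id from this page

# Standard GLUE QNLI label mapping: does the sentence answer the question?
QNLI_LABELS = {0: "entailment", 1: "not_entailment"}

def classify(question: str, sentence: str, tokenizer, model) -> str:
    """Return the predicted QNLI label for a (question, sentence) pair."""
    # QNLI is a sentence-pair task: tokenize question and sentence together.
    inputs = tokenizer(question, sentence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return QNLI_LABELS[int(logits.argmax(dim=-1))]

# Example usage (downloads the checkpoint on first run):
# tok = AutoTokenizer.from_pretrained(MODEL_ID)
# mdl = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
# classify("Who wrote Hamlet?", "Hamlet was written by Shakespeare.", tok, mdl)
```

Note that the tokenizer call passes the question and sentence as a pair, so the model sees the standard `[CLS] question [SEP] sentence [SEP]` input format used during QNLI fine-tuning.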

Use Cases

Education
Automated question-answering evaluation
Used to assess the logical consistency between student answers and questions
Achieved 83.9% accuracy on the QNLI validation set
Customer service
Question relevance judgment
Determines the relevance between user queries and knowledge base answers