Muppet Roberta Base
A large-scale multi-task representation model built by pre-finetuning the RoBERTa architecture; it outperforms the original roberta-base on GLUE and question answering tasks
Downloads: 425
Release Time: 3/2/2022
Model Overview
This model is a large-scale multi-task pre-finetuned version of the RoBERTa architecture. It was pre-trained with a masked language modeling objective and is suited both to feature extraction and to fine-tuning on downstream natural language processing tasks.
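A minimal sketch of feature extraction with this checkpoint, assuming the Hugging Face Hub id facebook/muppet-roberta-base and the transformers library:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Hub id assumed for this checkpoint; substitute a local path if needed.
model_name = "facebook/muppet-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "Pre-finetuning improves transfer to downstream tasks."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextual vector per token: (1, seq_len, 768)
token_embeddings = outputs.last_hidden_state
# A simple sentence embedding: mean-pool the token vectors.
sentence_embedding = token_embeddings.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, 768])
```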
Model Features
Large-scale Multi-task Pre-finetuning
Training on multiple tasks during the pre-finetuning phase enhances the model's performance on downstream tasks
Outperforms Original RoBERTa-base
Demonstrates superior performance over the original roberta-base on GLUE and question answering tasks, with particularly significant improvements on small datasets
Bidirectional Context Understanding
Uses a masked language modeling objective to learn bidirectional sentence representations
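A quick sketch of the masked language modeling behavior via the transformers fill-mask pipeline (the model id is an assumption; RoBERTa-style tokenizers use <mask> as the mask token):

```python
from transformers import pipeline

# facebook/muppet-roberta-base is assumed to be the Hub id for this checkpoint.
fill_mask = pipeline("fill-mask", model="facebook/muppet-roberta-base")

# The model scores candidate fillers using context on both sides of the blank,
# which is what "bidirectional" means here.
for pred in fill_mask("The goal of question answering is to <mask> questions."):
    print(f"{pred['token_str']!r}: {pred['score']:.3f}")
```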
Model Capabilities
Text feature extraction
Sentence classification
Question answering systems
Natural language understanding
Use Cases
Text Classification
Sentiment Analysis
Classify text as expressing positive or negative sentiment
Achieves 96.7% accuracy on the SST-2 dataset
Question Answering Systems
Reading Comprehension
Answer questions based on a given passage
Achieves an F1 score of 86.6 on the SQuAD dataset
Natural Language Inference
Textual Entailment
Determine the logical relationship between two sentences
Achieves 88.1% accuracy on the MNLI dataset
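The scores above reflect fine-tuning the pre-finetuned backbone on each target dataset. A hedged sketch of attaching a classification head for a binary task like SST-2 (the model id and label count are assumptions; the head is randomly initialized and must be trained before use):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "facebook/muppet-roberta-base"  # assumed Hub id
tokenizer = AutoTokenizer.from_pretrained(model_name)
# num_labels=2 for binary sentiment; the classification head is newly
# initialized, so the model must be fine-tuned before its outputs are meaningful.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

inputs = tokenizer("An utterly charming, well-acted film.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities (after fine-tuning)
```

The same pattern applies to the question answering and natural language inference use cases, swapping in AutoModelForQuestionAnswering or a three-label classification head respectively.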