
Muppet RoBERTa Large

Developed by Facebook
A large-scale multi-task pre-finetuned version of the RoBERTa-large model that excels at GLUE and question-answering tasks, with significant improvements, especially on small datasets.
Downloads: 26
Release Time: 3/2/2022

Model Overview

This model is a transformer pre-trained via self-supervised learning on a large English corpus, using the Masked Language Modeling (MLM) objective to learn bidirectional representations of the English language. It is suitable for tasks such as sequence classification, token classification, or question answering.
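As a quick illustration of the MLM objective, the minimal sketch below queries the model through the Hugging Face transformers fill-mask pipeline. It assumes the checkpoint is published on the Hugging Face Hub under the ID facebook/muppet-roberta-large; RoBERTa tokenizers use <mask> as the mask token.

```python
from transformers import pipeline

# Assumes the checkpoint is available on the Hugging Face Hub as
# "facebook/muppet-roberta-large". RoBERTa's mask token is "<mask>".
unmasker = pipeline("fill-mask", model="facebook/muppet-roberta-large")
print(unmasker("The capital of France is <mask>."))
```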

Model Features

Large-scale Multi-task Pre-finetuning
Pre-finetunes the model on a large collection of supervised tasks, improving downstream performance, particularly on small datasets; see the fine-tuning sketch after this list.
Bidirectional Representation Learning
Uses the Masked Language Modeling objective to learn bidirectional sentence representations, suitable for tasks requiring full sentence context.
Broad Downstream Task Applicability
Applicable to various natural language processing tasks such as sequence classification, token classification, and question answering.
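A minimal fine-tuning setup for sequence classification might look like the sketch below, under the same assumed Hub ID as above. The classification head is freshly initialized, so the model still needs training on labeled data before its predictions are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/muppet-roberta-large")
# num_labels=2 for a binary task such as sentiment analysis; the new
# classification head is randomly initialized and must be fine-tuned.
model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/muppet-roberta-large", num_labels=2
)

inputs = tokenizer("This movie was great!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 2])
```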

Model Capabilities

Masked Language Modeling
Sequence Classification
Token Classification
Question Answering
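For extractive question answering, a sketch of span prediction follows. Note that the raw checkpoint ships only the pre-finetuned encoder, so the span-prediction head starts randomly initialized and must be fine-tuned (for example on SQuAD) before the answers are meaningful.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("facebook/muppet-roberta-large")
# The QA head is newly initialized; fine-tune (e.g., on SQuAD) before use.
model = AutoModelForQuestionAnswering.from_pretrained("facebook/muppet-roberta-large")

question = "Who developed RoBERTa?"
context = "RoBERTa was developed by Facebook AI."
inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Decode the highest-scoring start/end span as the answer.
start = torch.argmax(outputs.start_logits)
end = torch.argmax(outputs.end_logits) + 1
print(tokenizer.decode(inputs["input_ids"][0][start:end]))
```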

Use Cases

Natural Language Processing
Text Classification
Performs tasks such as sentiment analysis and topic classification.
Achieves 97.4% accuracy on the SST-2 dataset.
Question Answering System
Builds automated question-answering systems that answer questions based on a given text.
Achieves an 89.4 F1 score on the SQuAD dataset.
Natural Language Inference
Determines the logical relationship (entailment, contradiction, or neutral) between two sentences, as in the sketch below.
Achieves 90.8% accuracy on the MNLI dataset.
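For NLI, the premise and hypothesis are encoded together as a single sentence pair, as in the sketch below. As with the earlier examples, the three-way classification head is newly initialized and needs MNLI fine-tuning before its labels are trustworthy.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("facebook/muppet-roberta-large")
# Three classes (entailment / neutral / contradiction); head needs fine-tuning.
model = AutoModelForSequenceClassification.from_pretrained(
    "facebook/muppet-roberta-large", num_labels=3
)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```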