
RoBERTa-Large Fine-tuned for Abbreviation Detection

Developed by surrey-nlp
A named entity recognition model based on RoBERTa-large and fine-tuned on the PLOD-unfiltered dataset, designed to identify abbreviations and terms in scientific texts.
Downloads: 64
Released: April 20, 2022

Model Overview

This model fine-tunes RoBERTa-large for token classification, recognizing specific types of named entities, such as abbreviations and their long-form terms, in scientific literature.
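Token classification of this kind typically emits a BIO-style tag per token, which downstream code must group back into entity spans. The sketch below assumes the PLOD tag scheme (B-O for plain tokens, B-AC for abbreviations, B-LF/I-LF for long forms); the tokens and tags are a hand-constructed example, not actual model output.

```python
# Minimal sketch: grouping per-token BIO tags into entity spans.
# Tag set (B-O, B-AC, B-LF, I-LF) follows the PLOD dataset's scheme;
# the sentence and tags below are illustrative, not real model output.

def collect_spans(tokens, tags):
    """Merge consecutive B-/I- tags into (label, text) entity spans."""
    spans = []
    current_label, current_tokens = None, []
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-") and tag != "B-O":
            if current_label:  # flush any span in progress
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = tag[2:], [token]
        elif tag.startswith("I-") and current_label == tag[2:]:
            current_tokens.append(token)  # continue the open span
        else:
            if current_label:
                spans.append((current_label, " ".join(current_tokens)))
            current_label, current_tokens = None, []
    if current_label:  # flush a span that ends at the sentence boundary
        spans.append((current_label, " ".join(current_tokens)))
    return spans

tokens = ["Polymerase", "chain", "reaction", "(", "PCR", ")", "amplifies", "DNA", "."]
tags   = ["B-LF", "I-LF", "I-LF", "B-O", "B-AC", "B-O", "B-O", "B-AC", "B-O"]
print(collect_spans(tokens, tags))
# → [('LF', 'Polymerase chain reaction'), ('AC', 'PCR'), ('AC', 'DNA')]
```

Pairing each long form (LF) with the abbreviation (AC) that follows it is the usual next step for building an abbreviation dictionary.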

Model Features

High-precision abbreviation recognition
Accurately identifies various abbreviations and terms in scientific texts, achieving an F1 score of 0.9645.
Powerful representation capability based on RoBERTa-large
Leverages the strong language understanding capabilities of the RoBERTa-large pre-trained model, making it particularly suitable for handling complex terminology in scientific literature.
Domain-specific optimization
Specially fine-tuned on the PLOD-unfiltered scientific dataset, making it well-suited for processing academic and technical documents.

Model Capabilities

Named entity recognition in scientific texts
Abbreviation detection
Term extraction
Token classification
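In practice these capabilities are exposed through a Hugging Face token-classification pipeline. The snippet below shows how its output can be consumed; `results` is a hand-written mock in the pipeline's usual aggregated output schema (`entity_group`/`score`/`word`/`start`/`end`), so the example runs without downloading the model, and the scores are illustrative.

```python
# Sketch of consuming token-classification pipeline output for this model.
# Real use would load the model first, e.g.:
#   from transformers import pipeline
#   ner = pipeline("token-classification",
#                  model="surrey-nlp/roberta-large-finetuned-abbr",
#                  aggregation_strategy="simple")
#   results = ner(text)

text = "Light microscopy (LM) was used to examine the samples."

results = [  # mock pipeline output; scores are illustrative
    {"entity_group": "LF", "score": 0.99, "word": "Light microscopy", "start": 0, "end": 16},
    {"entity_group": "AC", "score": 0.98, "word": "LM", "start": 18, "end": 20},
]

# Recover each span from its character offsets and keep only abbreviations.
abbreviations = [text[r["start"]:r["end"]] for r in results if r["entity_group"] == "AC"]
print(abbreviations)  # → ['LM']
```

With `aggregation_strategy="simple"` the pipeline merges subword pieces and strips the B-/I- prefixes, which is why the mock uses bare `AC`/`LF` labels.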

Use Cases

Academic research
Scientific literature processing
Automatically identifies technical terms and abbreviations in research papers
Improves literature-processing efficiency; the model reports a token-classification accuracy of 96.08%
Information extraction
Technical document analysis
Extracts key terms from technical manuals and patent documents
Achieves an F1 score of 0.9645
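The reported F1 score is the harmonic mean of precision and recall. The values below are hypothetical, chosen only to show how an F1 near the reported 0.9645 could arise; the card does not list the underlying precision and recall.

```python
# F1 = harmonic mean of precision and recall.
# Precision/recall values here are hypothetical, for illustration only.
precision = 0.966
recall = 0.963
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # → 0.9645
```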