
RuPERTa Base

Developed by mrm8488
RuPERTa is a case-insensitive RoBERTa model trained on a large Spanish corpus using RoBERTa's improved pre-training recipe, making it suitable for a range of Spanish NLP tasks.
Downloads: 39
Released: 3/2/2022

Model Overview

RuPERTa is a Spanish pre-trained language model based on the RoBERTa architecture. It follows RoBERTa's improved training recipe (longer training, larger batches, dynamic masking) and supports downstream tasks such as part-of-speech tagging and named entity recognition.
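Loading the model for masked-language-model inference follows the standard Hugging Face pattern. A minimal sketch, assuming the Hub id `mrm8488/RuPERTa-base` (verify the exact id on the model page); the import is kept inside the function so this file parses even without `transformers` installed:

```python
def load_ruperta(model_id="mrm8488/RuPERTa-base"):
    """Load RuPERTa's tokenizer and masked-LM head from the Hub.

    Requires `pip install transformers torch`; weights are downloaded
    on first use. The import is lazy so the sketch loads without the
    library installed.
    """
    from transformers import AutoTokenizer, AutoModelForMaskedLM
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForMaskedLM.from_pretrained(model_id)
    return tok, model
```

The returned pair can be fed to any of the task pipelines described below.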

Model Features

Spanish Optimization
Trained on a large Spanish corpus and optimized specifically for Spanish NLP tasks
RoBERTa Improvements
Uses RoBERTa's improved pre-training recipe, including longer training, larger batches, and dynamic masking
Case-insensitive Design
An uncased model: upper- and lower-case input is treated identically, which simplifies handling of informally cased text

Model Capabilities

Text filling
Part-of-speech tagging
Named entity recognition
Spanish text understanding

Use Cases

Natural Language Processing
Part-of-speech tagging
Performs part-of-speech tagging on Spanish text
F1 score 97.39 (dataset not specified)
Named entity recognition
Identifies named entities (person names, locations, organizations, etc.) in Spanish text
F1 score 77.55 (dataset not specified)
Text filling
Predicts missing words in Spanish sentences
Examples available on the model page
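The text-filling use case maps directly onto the Hugging Face `fill-mask` pipeline. A hedged sketch, again assuming the Hub id `mrm8488/RuPERTa-base`; the `transformers` import is lazy so the helper around it stays usable without the library:

```python
MASK = "<mask>"  # RoBERTa-style mask token used by RuPERTa

def mask_word(text, word):
    """Replace the first occurrence of `word` with the <mask> token."""
    return text.replace(word, MASK, 1)

def top_fill_mask(text, model_id="mrm8488/RuPERTa-base", k=3):
    """Return the top-k candidate words for the <mask> in `text`.

    Requires `pip install transformers torch` and downloads the
    model on first call; imported lazily so this module loads
    without the library installed.
    """
    from transformers import pipeline
    fill = pipeline("fill-mask", model=model_id, tokenizer=model_id)
    return [pred["token_str"] for pred in fill(text, top_k=k)]

# Example (runs only when transformers and the model are available):
# top_fill_mask(mask_word("España es un país de Europa.", "país"))
```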