
BERTIN Base XNLI ES

Developed by bertin-project
A Spanish RoBERTa-base model fine-tuned on the XNLI dataset, with training-data quality improved via Gaussian-function sampling
Downloads 20
Released: 3/2/2022

Model Overview

This model is a Spanish RoBERTa-base trained from scratch, using Gaussian-function sampling to filter its training data, and fine-tuned specifically for the XNLI task.

Model Features

Gaussian Sampled Training Data
Uses a Gaussian function to subsample the mC4 dataset, filtering out low-quality and duplicate text
512 Sequence Length
Processes input sequences of up to 512 tokens
XNLI Optimization
Specifically fine-tuned for cross-lingual natural language inference tasks
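The Gaussian sampling idea above can be sketched in a few lines: each candidate document gets a weight from a Gaussian centered on a target quality score, so mid-range documents are favored over near-duplicates and noisy outliers. This is a minimal illustration, not the BERTIN project's actual pipeline; the scoring function (here, an assumed perplexity value per document) and the mean/std parameters are placeholders.

```python
import math
import random

def gaussian_weight(score, mean, std):
    """Weight a document by how close its quality score is to the target mean."""
    return math.exp(-((score - mean) ** 2) / (2 * std ** 2))

def gaussian_subsample(docs, scores, mean, std, k, seed=0):
    """Draw k documents with probability proportional to their Gaussian weight.

    Documents near the mean (neither trivially repetitive nor noisy) are
    sampled most often; extreme scores are strongly down-weighted.
    """
    rng = random.Random(seed)
    weights = [gaussian_weight(s, mean, std) for s in scores]
    return rng.choices(docs, weights=weights, k=k)

# Example: four documents with assumed perplexity-like scores; keep three.
sample = gaussian_subsample(["a", "b", "c", "d"], [50, 95, 105, 400],
                            mean=100, std=20, k=3)
```

The sampling is with replacement here (`random.choices`); a production pipeline over mC4 would stream documents and accept or reject each one independently instead of materializing the full corpus.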

Model Capabilities

Natural Language Understanding
Cross-lingual Inference
Text Classification

Use Cases

Natural Language Processing
Cross-lingual Text Inference
Determines the logical relationship (entailment, neutral, or contradiction) between pairs of Spanish sentences
Strong performance on the XNLI benchmark
Text Classification
Performs classification tasks on Spanish texts
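An NLI model also supports classification over arbitrary label sets: each candidate label is rewritten as a hypothesis about the text, and the label whose hypothesis is most strongly entailed wins. A sketch of that mechanism, with an assumed Spanish hypothesis template (the transformers zero-shot-classification pipeline automates this in practice):

```python
def hypothesis_pairs(text, candidate_labels, template="Este ejemplo es {}."):
    """Turn each candidate label into an NLI hypothesis about the text."""
    return [(text, template.format(label)) for label in candidate_labels]

def best_label(candidate_labels, entailment_scores):
    """Pick the candidate whose hypothesis the model most strongly entails."""
    return max(zip(candidate_labels, entailment_scores), key=lambda t: t[1])[0]

# Hypothetical end-to-end usage (Hub id inferred from the page title):
# from transformers import pipeline
# clf = pipeline("zero-shot-classification",
#                model="bertin-project/bertin-base-xnli-es")
# clf("El partido fue emocionante", candidate_labels=["deportes", "política"])
```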