DistilBERT Base ES Multilingual Cased

Developed by Recognai
This is a Spanish-only model extracted from distilbert-base-multilingual-cased, the distilled version of the multilingual BERT base model; it retains the original's core functionality at a much smaller parameter count.
Downloads 76
Release Time : 3/2/2022

Model Overview

The model shrinks the original multilingual model by keeping only the most common Spanish tokens, which reduces the size of the embedding layer while leaving the transformer layers untouched, making it well suited to Spanish text processing tasks. A rough sketch of the extraction idea follows.
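The following is a minimal sketch of how such a vocabulary-pruned model can be built; it is not Recognai's actual extraction script. The frequency corpus here is a two-sentence stand-in, and a real pipeline would also rewrite the tokenizer's vocabulary file so that token ids are remapped consistently.

```python
import torch
from collections import Counter
from transformers import AutoModel, AutoTokenizer

base = "distilbert-base-multilingual-cased"
model = AutoModel.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical stand-in corpus; the real extraction counts token frequencies
# over a large Spanish corpus and keeps the most common ids.
corpus = [
    "Mi nombre es Juan, y vivo en Madrid.",
    "El modelo procesa texto en español.",
]
counts = Counter(tid for line in corpus for tid in tokenizer(line)["input_ids"])
kept_ids = sorted(set(tokenizer.all_special_ids) | set(counts))

# Copy only the kept rows of the embedding matrix into a smaller layer.
old_emb = model.embeddings.word_embeddings          # 119,547 rows in mBERT's vocab
new_emb = torch.nn.Embedding(len(kept_ids), old_emb.embedding_dim)
new_emb.weight.data = old_emb.weight.data[kept_ids].clone()
model.embeddings.word_embeddings = new_emb          # transformer blocks unchanged
```

Only the embedding matrix shrinks; the transformer blocks are carried over unchanged, which is why the features and capabilities below mirror the multilingual original.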

Model Features

Lightweight Design
Compared with the original DistilmBERT's 134 million parameters, this model has only 63 million, less than half the size.
Spanish Optimization
Optimized specifically for Spanish text processing by selecting the most common Spanish tokens.
Case-Sensitive
Distinguishes uppercase from lowercase: e.g., 'english' and 'English' are treated as different tokens (see the sketch after this list).
Efficient Inference
As a variant of DistilBERT, it maintains high inference efficiency.
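Both the parameter count and the case sensitivity are easy to check. A minimal sketch, assuming the model is published under the Hugging Face id Recognai/distilbert-base-es-multilingual-cased (adjust if it is hosted under a different name):

```python
from transformers import AutoModel, AutoTokenizer

name = "Recognai/distilbert-base-es-multilingual-cased"  # assumed HF model id
model = AutoModel.from_pretrained(name)
tokenizer = AutoTokenizer.from_pretrained(name)

# Lightweight: should print roughly 63M (vs ~134M for the multilingual base).
print(sum(p.numel() for p in model.parameters()))

# Case-sensitive: different casings tokenize to different token sequences.
print(tokenizer.tokenize("español"))
print(tokenizer.tokenize("Español"))
```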

Model Capabilities

Masked Language Modeling
Spanish Text Understanding
Contextual Semantic Analysis

Use Cases

Text Completion
Sentence Completion
Predicts the token at the [MASK] position in a sentence.
For example, given the Spanish input 'Mi nombre es Juan, y vivo en [MASK].' ('My name is Juan, and I live in [MASK].'), the model predicts a plausible place name; a runnable sketch follows.
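A minimal sketch of this use case with the fill-mask pipeline, again assuming the Recognai/distilbert-base-es-multilingual-cased model id:

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="Recognai/distilbert-base-es-multilingual-cased")

# Each prediction carries the filled-in token and its probability score.
for pred in fill("Mi nombre es Juan, y vivo en [MASK]."):
    print(pred["token_str"], round(pred["score"], 3))
```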
Language Understanding
Spanish Text Analysis
Encodes the semantics of Spanish text.
Can serve as a backbone for downstream tasks such as sentiment analysis and topic classification; see the fine-tuning sketch below.
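For downstream classification, the usual pattern is to load the checkpoint with a freshly initialized classification head and fine-tune it on labeled Spanish data. A sketch with a hypothetical binary sentiment setup (the label count and example texts are illustrative):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

name = "Recognai/distilbert-base-es-multilingual-cased"  # assumed HF model id
tokenizer = AutoTokenizer.from_pretrained(name)
# num_labels=2 for a hypothetical binary sentiment task; the classification
# head is newly initialized and produces meaningless logits until fine-tuned.
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

batch = tokenizer(
    ["Me encanta esta película.", "El servicio fue terrible."],
    padding=True, return_tensors="pt",
)
with torch.no_grad():
    logits = model(**batch).logits  # shape (2, 2): one row per sentence
print(logits)
```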