🚀 RoBERTa base model for Hindi
A model pretrained on Hindi text with the masked language modeling (MLM) objective, providing a foundation for Hindi natural language processing tasks.
🚀 Quick Start
You can use this model directly with a pipeline for masked language modeling (the example sentence means "We wish you a pleasant <mask>"):
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi')
>>> unmasker("हम आपके सुखद <mask> की कामना करते हैं")
[{'score': 0.3310680091381073,
  'sequence': 'हम आपके सुखद सफर की कामना करते हैं',
  'token': 1349,
  'token_str': ' सफर'},
 {'score': 0.15317578613758087,
  'sequence': 'हम आपके सुखद पल की कामना करते हैं',
  'token': 848,
  'token_str': ' पल'},
 {'score': 0.07826550304889679,
  'sequence': 'हम आपके सुखद समय की कामना करते हैं',
  'token': 453,
  'token_str': ' समय'},
 {'score': 0.06304813921451569,
  'sequence': 'हम आपके सुखद पहल की कामना करते हैं',
  'token': 404,
  'token_str': ' पहल'},
 {'score': 0.058322224766016006,
  'sequence': 'हम आपके सुखद अवसर की कामना करते हैं',
  'token': 857,
  'token_str': ' अवसर'}]
✨ Features
- Masked Language Modeling: the model is pretrained with the MLM objective, enabling it to predict masked tokens in Hindi text.
- Interactive Demo: [an interactive comparison demo is available here](https://huggingface.co/spaces/flax-community/roberta-hindi).
📦 Installation
No model-specific installation is required beyond the Hugging Face transformers library (`pip install transformers`).
💻 Usage Examples
Basic Usage
# Use the model for masked language modeling
from transformers import pipeline

unmasker = pipeline('fill-mask', model='flax-community/roberta-hindi')
result = unmasker("हम आपके सुखद <mask> की कामना करते हैं")
print(result)
Advanced Usage
There is no advanced usage example in the original README.
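As an illustrative sketch (not from the original model card), the same prediction can be computed manually with the standard transformers AutoClass API, which exposes the raw scores behind the pipeline. This assumes PyTorch weights are published for the checkpoint; if only Flax weights exist, pass `from_flax=True` to `from_pretrained`.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "flax-community/roberta-hindi"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
model.eval()

text = "हम आपके सुखद <mask> की कामना करते हैं"
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Find the <mask> position and rank the vocabulary by probability.
mask_pos = (inputs.input_ids[0] == tokenizer.mask_token_id).nonzero().item()
probs = logits[0, mask_pos].softmax(dim=-1)
top = probs.topk(5)
for score, token_id in zip(top.values.tolist(), top.indices.tolist()):
    print(f"{tokenizer.decode([token_id])!r}: {score:.4f}")
```

Working with the logits directly is useful when you need scores for specific candidate tokens rather than only the pipeline's top-k list.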
📚 Documentation
Model description
RoBERTa Hindi is a transformers model pretrained on a large corpus of Hindi data (a combination of the mC4, OSCAR, and indic-nlp datasets).
Training data
The RoBERTa Hindi model was pretrained on a combination of the following datasets:
- OSCAR is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the goclassy architecture.
- mC4 is a colossal, cleaned multilingual version of Common Crawl's web crawl corpus.
- [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) is a natural language understanding benchmark.
- Samanantar is a parallel corpora collection for Indic languages.
- [Hindi Text Short and Large Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-and-large-summarization-corpus) is a collection of ~180k articles with their headlines and summaries, collected from Hindi news websites.
- [Hindi Text Short Summarization Corpus](https://www.kaggle.com/disisbig/hindi-text-short-summarization-corpus) is a collection of ~330k articles with their headlines, collected from Hindi news websites.
- Old Newspapers Hindi is a cleaned subset of the HC Corpora newspapers.
Training procedure
Preprocessing
The texts are tokenized using a byte-level version of Byte-Pair Encoding (BPE) with a vocabulary size of 50,265. The model takes inputs of 512 contiguous tokens that may span documents. The beginning of a new document is marked with `<s>` and its end with `</s>`.
- We cleaned the mC4 and OSCAR datasets by removing all non-Hindi (non-Devanagari) characters.
- We manually inspected the WikiNER evaluation set of the [IndicGLUE](https://indicnlp.ai4bharat.org/indic-glue/) benchmark, filtering out examples whose labels were incorrect and modifying the downstream evaluation dataset accordingly.
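The 512-token packing described above can be sketched in plain Python. This is a toy illustration; `pack_documents` is a hypothetical helper, not part of the actual training code.

```python
def pack_documents(docs, block_size=512):
    """Concatenate tokenized documents, marking boundaries with <s>/</s>,
    then slice the stream into fixed-size blocks that may span documents."""
    stream = []
    for doc in docs:
        stream.append("<s>")      # start of a new document
        stream.extend(doc)
        stream.append("</s>")     # end of the document
    # Drop the trailing remainder shorter than block_size, as is typical.
    n = len(stream) // block_size * block_size
    return [stream[i:i + block_size] for i in range(0, n, block_size)]

docs = [["w"] * 300, ["w"] * 900]   # two toy "tokenized" documents
blocks = pack_documents(docs)        # 1204 tokens -> two 512-token blocks
```

Note that a block boundary need not coincide with a document boundary, so one input may contain the end of one article and the start of the next.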
The details of the masking procedure for each sentence are the following:
- 15% of the tokens are masked.
- In 80% of the cases, the masked tokens are replaced by `<mask>`.
- In 10% of the cases, the masked tokens are replaced by a random token different from the one they replace.
- In the remaining 10% of the cases, the masked tokens are left as is.

Contrary to BERT, the masking is done dynamically during pretraining (i.e., it changes at each epoch rather than being fixed).
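The masking rules above can be sketched as a small Python function. This is only an illustration of the 15% selection and 80/10/10 replacement scheme, not the actual training code.

```python
import random

MASK = "<mask>"
VOCAB = [f"tok{i}" for i in range(100)]   # toy vocabulary / toy input

def dynamic_mask(tokens, rng):
    """Apply RoBERTa-style masking: select 15% of tokens, then replace
    80% of those with <mask>, 10% with a random token, and keep 10%.

    Returns (corrupted, labels); labels[i] is the original token where a
    prediction is required and None elsewhere. Calling this anew each
    epoch is what makes the masking *dynamic*.
    """
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < 0.15:           # 15% of tokens are selected
            labels.append(tok)
            r = rng.random()
            if r < 0.8:                   # 80%: replace with <mask>
                corrupted.append(MASK)
            elif r < 0.9:                 # 10%: replace with a random token
                corrupted.append(rng.choice(VOCAB))
            else:                         # 10%: keep the original token
                corrupted.append(tok)
        else:
            corrupted.append(tok)
            labels.append(None)
    return corrupted, labels

rng = random.Random(0)
corrupted, labels = dynamic_mask(VOCAB, rng)
```

Because the selection is re-sampled on every call, each epoch sees a different corruption pattern of the same text, unlike BERT's static preprocessing.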
Pretraining
The model was trained on a Google Cloud Engine TPU v3-8 machine (335 GB of RAM, 1000 GB of disk, 96 CPU cores). A randomized shuffle of the combined mC4, OSCAR, and other datasets listed above was used to train the model. Training logs are available on [wandb](https://wandb.ai/wandb/hf-flax-roberta-hindi).
Evaluation Results
RoBERTa Hindi is evaluated on various downstream tasks. The results are summarized below.
| Task | Task Type | IndicBERT | HindiBERTa | Indic Transformers Hindi BERT | RoBERTa Hindi Guj San | RoBERTa Hindi |
|---|---|---|---|---|---|---|
| BBC News Classification | Genre Classification | 76.44 | 66.86 | 77.6 | 64.9 | 73.67 |
| WikiNER | Token Classification | - | 90.68 | 95.09 | 89.61 | 92.76 |
| IITP Product Reviews | Sentiment Analysis | 78.01 | 73.23 | 78.39 | 66.16 | 75.53 |
| IITP Movie Reviews | Sentiment Analysis | 60.97 | 52.26 | 70.65 | 49.35 | 61.29 |
🔧 Technical Details
- Tokenization: byte-level Byte-Pair Encoding (BPE) with a vocabulary size of 50,265.
- Masking Strategy: 15% of tokens are masked, with the 80/10/10 replacement rule applied dynamically during pretraining.
- Training Environment: Google Cloud Engine TPU v3-8 machine.
📄 License
No license information is provided in the original README.
Team Members
- Aman K (amankhandelia)
- Haswanth Aekula (hassiahk)
- Kartik Godawat ([dk-crazydiv](https://huggingface.co/dk-crazydiv))
- Prateek Agrawal (prateekagrawal)
- Rahul Dev (mlkorra)
Credits
Huge thanks to Hugging Face 🤗 and the Google JAX/Flax team for such a wonderful community week, and especially for providing such massive computing resources. Big thanks to Suraj Patil and Patrick von Platen for mentoring throughout the week.


