# High-precision NLP

## Orpo Med V3
- License: Apache-2.0
- Author: Jayant9928 · Downloads: 2,852 · Likes: 3
- Tags: Large Language Model, Transformers

A transformers model hosted on the Hugging Face Hub; its specific capabilities and intended uses have not yet been documented.
## Tookabert Base
- License: Apache-2.0
- Author: PartAI · Downloads: 127 · Likes: 24
- Tags: Large Language Model, Transformers, Other

TookaBERT is a family of encoder models trained on Persian, available in base and large sizes and suitable for a range of natural language processing tasks.
## Llama2 Alpaca Sft 2epoch
- License: Apache-2.0
- Author: zhangchuheng123 · Downloads: 93 · Likes: 1
- Tags: Large Language Model, Transformers, English

A text generation model released under the Apache-2.0 license; its specific capabilities and intended uses have not yet been documented.
## Ruroberta Large Rucola
- License: Apache-2.0
- Author: RussianNLP · Downloads: 1,282 · Likes: 8
- Tags: Text Classification, Transformers, Other

A linguistic acceptability classifier fine-tuned from RuRoBERTa-large that judges the grammatical correctness of Russian text.
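Models tagged Text Classification, like the acceptability classifier above, can typically be loaded with the Hugging Face `transformers` pipeline API. A minimal sketch, assuming the Hub ID `RussianNLP/ruRoBERTa-large-rucola` (inferred from the author and model name shown in this listing; verify it on the Hub before use):

```python
# Map this listing's task tags to transformers pipeline task names.
LISTING_TAG_TO_TASK = {
    "Text Classification": "text-classification",
    "Sequence Labeling": "token-classification",
}

def load_acceptability_classifier(hub_id: str = "RussianNLP/ruRoBERTa-large-rucola"):
    """Load the acceptability classifier; downloads weights on first call.

    The default hub_id is an assumption inferred from the listing.
    """
    from transformers import pipeline  # deferred: heavy, network-using dependency
    return pipeline(LISTING_TAG_TO_TASK["Text Classification"], model=hub_id)

# Usage (requires network access to the Hugging Face Hub):
# clf = load_acceptability_classifier()
# clf("Мама мыла раму.")  # returns a label and confidence score
```

The same tag-to-task mapping applies to the other classification entries on this page.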
## Nbme Roberta Large
- License: MIT
- Author: smeoni · Downloads: 35 · Likes: 0
- Tags: Large Language Model, Transformers

A model fine-tuned from roberta-large for a downstream task, reporting an evaluation loss of 0.7825.
## Chinese Bert Wwm Finetuned Jd
- License: Apache-2.0
- Author: wangmiaobeng · Downloads: 24 · Likes: 0
- Tags: Large Language Model, Transformers

A fine-tuned version of hfl/chinese-bert-wwm trained on an unspecified dataset, suitable for Chinese text-processing tasks.
## Roberta Base Cuad
- Author: akdeniz27 · Downloads: 249 · Likes: 0
- Tags: Large Language Model, Transformers, English

A fine-tuned version of the RoBERTa base model, optimized for contract-understanding tasks using the CUAD dataset.
## Deberta V2 Xlarge Cuad
- Author: akdeniz27 · Downloads: 122 · Likes: 2
- Tags: Large Language Model, Transformers, English

A fine-tuned version of DeBERTa v2 XLarge, optimized for contract-understanding tasks and trained on the CUAD dataset.
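CUAD (Contract Understanding Atticus Dataset) models are extractive question-answering models over contract text, so both CUAD checkpoints above would be served through a question-answering pipeline. A minimal sketch, assuming the Hub ID `akdeniz27/roberta-base-cuad` (inferred from the listing's author and model name):

```python
def load_cuad_qa(hub_id: str = "akdeniz27/roberta-base-cuad"):
    """Load a contract-QA pipeline; downloads weights on first use.

    hub_id is an assumption inferred from this listing; verify on the Hub.
    """
    from transformers import pipeline  # deferred: heavy, network-using dependency
    return pipeline("question-answering", model=hub_id)

def make_cuad_query(question: str, contract_text: str) -> dict:
    """Build the question/context payload a QA pipeline expects."""
    return {"question": question, "context": contract_text}

# Usage (network required):
# qa = load_cuad_qa()
# qa(**make_cuad_query(
#     "Highlight the parts (if any) related to Governing Law.",
#     contract_text,
# ))  # returns a span with answer text, score, and character offsets
```

Swapping in the DeBERTa checkpoint's Hub ID trades speed for accuracy; the calling code is identical.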
## Distilbert Base Uncased Finetuned Cola 4
- License: Apache-2.0
- Author: fadhilarkan · Downloads: 6 · Likes: 0
- Tags: Large Language Model, Transformers

A DistilBERT model fine-tuned for grammatical-acceptability classification that performs well on its evaluation set.
## Mrc Pretrained Roberta Large 1
- Author: this-is-real · Downloads: 14 · Likes: 0
- Tags: Large Language Model, Transformers

KLUE-RoBERTa-large is a Korean pre-trained language model based on the RoBERTa architecture, developed by a Korean research team and optimized for Korean natural language processing tasks.
## Deberta V3 Small Finetuned Cola
- License: MIT
- Author: mrm8488 · Downloads: 16 · Likes: 3
- Tags: Text Classification, Transformers, English

A version of DeBERTa-v3-small fine-tuned on the GLUE CoLA dataset for linguistic-acceptability judgment.
## Indonesian Roberta Base Posp Tagger
- License: MIT
- Author: w11wo · Downloads: 2.2M · Likes: 7
- Tags: Sequence Labeling, Transformers, Other

A part-of-speech tagging model fine-tuned from the Indonesian RoBERTa base model on the IndoNLU dataset for tagging Indonesian text.
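Sequence-labeling entries like this POS tagger map to the token-classification pipeline. A minimal sketch, assuming the Hub ID `w11wo/indonesian-roberta-base-posp-tagger` (inferred from the listing); the `format_tags` helper just flattens the pipeline's output into (word, tag) pairs:

```python
def load_pos_tagger(hub_id: str = "w11wo/indonesian-roberta-base-posp-tagger"):
    """Load the POS tagger as a token-classification pipeline.

    hub_id is an assumption inferred from this listing; verify on the Hub.
    """
    from transformers import pipeline  # deferred: heavy, network-using dependency
    # aggregation_strategy="simple" merges word-piece tokens back into whole words.
    return pipeline("token-classification", model=hub_id,
                    aggregation_strategy="simple")

def format_tags(tokens: list) -> list:
    """Flatten pipeline output dicts into (word, tag) tuples."""
    return [(t["word"], t["entity_group"]) for t in tokens]

# Usage (network required):
# tagger = load_pos_tagger()
# format_tags(tagger("Budi sedang membaca buku."))
```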
## Bert Base Uncased Mnli Sparse 70 Unstructured No Classifier
- Author: Intel · Downloads: 17 · Likes: 0
- Tags: Large Language Model, Transformers, English

Fine-tuned from bert-base-uncased-sparse-70-unstructured on the MNLI task (GLUE benchmark), with the classifier layer removed so the model loads more easily into other downstream tasks for further training.