🚀 Comprehend-it-multilang-base
Comprehend-it-multilang-base is an encoder-decoder model based on [mT5-base](https://huggingface.co/google/mt5-base). It was trained on multilingual natural language inference datasets as well as multiple text classification datasets. The model can better understand the context of the text and the verbalized labels because the encoder and decoder process the text and the labels separately. The zero-shot classifier supports nearly 100 languages and can handle cases where the labels and the text are in different languages.
✨ Features
- Contextual Understanding: Demonstrates a better contextual understanding of text and verbalized labels, as both inputs are encoded by different parts of the model (encoder and decoder).
- Multilingual Support: The zero-shot classifier supports nearly 100 languages and works in both directions, so the labels and the text can belong to different languages.
📦 Installation
Install the necessary libraries before using the model. Because of its different architecture, the model can't be used with transformers' standard "zero-shot-classification" pipeline; for that purpose, we developed a dedicated library called LiqFit. If you haven't installed the sentencepiece library yet, install it as well so the T5 tokenizer can be used.
pip install liqfit sentencepiece
💻 Usage Examples
Basic Usage
The model can be loaded with the zero-shot-classification pipeline like so:
from liqfit.pipeline import ZeroShotClassificationPipeline
from liqfit.models import T5ForZeroShotClassification
from transformers import T5Tokenizer

model = T5ForZeroShotClassification.from_pretrained('knowledgator/comprehend_it-multilingual-t5-base')
tokenizer = T5Tokenizer.from_pretrained('knowledgator/comprehend_it-multilingual-t5-base')
classifier = ZeroShotClassificationPipeline(model=model, tokenizer=tokenizer,
                                            hypothesis_template='{}', encoder_decoder=True)
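Note that hypothesis_template='{}' passes each candidate label to the model verbatim. By analogy with the standard transformers zero-shot pipeline, a more descriptive template can be tried; whether it helps is task-dependent, and the template below is only an illustrative assumption, not a documented recommendation. The examples that follow use the verbatim '{}' template.

# Illustrative alternative template (assumption, not a documented recommendation):
classifier_templated = ZeroShotClassificationPipeline(model=model, tokenizer=tokenizer,
                                                      hypothesis_template='This text is about {}.',
                                                      encoder_decoder=True)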
You can then use this pipeline to classify sequences into any of the class names you specify.
sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels, multi_label=False)
{'sequence': 'one day I will see the world',
'labels': ['travel', 'cooking', 'dancing'],
'scores': [0.7350383996963501, 0.1484801471233368, 0.1164814680814743]}
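If several labels can apply at once, you can pass multi_label=True. By analogy with the standard transformers zero-shot pipeline, each label is then scored independently rather than normalized across the candidates; treat this as an assumption about LiqFit's behavior rather than a documented detail.

sequence_to_classify = "one day I will see the world"
candidate_labels = ['travel', 'cooking', 'dancing']
# Each label is scored independently, so the scores need not sum to 1.
classifier(sequence_to_classify, candidate_labels, multi_label=True)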
Advanced Usage
The model can handle text and labels in different languages.
sequence_to_classify = "Одного дня я побачу цей світ"
candidate_labels = ['travel', 'cooking', 'dancing']
classifier(sequence_to_classify, candidate_labels, multi_label = False)
{'sequence': 'Одного дня я побачу цей світ',
'labels': ['travel', 'cooking', 'dancing'],
'scores': [0.7676175236701965, 0.15484870970249176, 0.07753374427556992]}
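Since classification works in both directions, the labels themselves can also be non-English while the text is in English. A minimal sketch (the Ukrainian label verbalizations below are our own illustrative choices):

sequence_to_classify = "one day I will see the world"
# Ukrainian labels: 'travel', 'cooking', 'dancing'
candidate_labels = ['подорожі', 'кулінарія', 'танці']
classifier(sequence_to_classify, candidate_labels, multi_label=False)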
📚 Documentation
Model Information
| Property | Details |
|----------|---------|
| Supported Languages | multilingual, af, am, ar, az, be, bg, bn, ca, ceb, co, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, fy, ga, gd, gl, gu, ha, haw, hi, hmn, ht, hu, hy, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, und, ur, uz, vi, xh, yi, yo, zh, zu |
| License | apache-2.0 |
| Datasets | multi_nli, xnli, dbpedia_14, SetFit/bbc-news, squad_v2, race, knowledgator/events_classification_biotech, facebook/anli, SetFit/qnli |
| Metrics | accuracy, f1 |
| Pipeline Tag | zero-shot-classification |
| Tags | classification, information-extraction, zero-shot |
Benchmarking
Below are the F1 scores on several text classification datasets. None of the tested models were fine-tuned on those datasets; all were evaluated in a zero-shot setting.
| Model | IMDB | AG_NEWS | Emotions |
|-------|------|---------|----------|
| [Bart-large-mnli (407M)](https://huggingface.co/facebook/bart-large-mnli) | 0.89 | 0.6887 | 0.3765 |
| [Deberta-base-v3 (184M)](https://huggingface.co/cross-encoder/nli-deberta-v3-base) | 0.85 | 0.6455 | 0.5095 |
| [Comprehendo (184M)](https://huggingface.co/knowledgator/comprehend_it-base) | 0.90 | 0.7982 | 0.5660 |
| [Comprehendo-multi-lang (390M)](https://huggingface.co/knowledgator/comprehend-it-multilang-base) | 0.88 | 0.8372 | - |
| SetFit [BAAI/bge-small-en-v1.5 (33.4M)](https://huggingface.co/BAAI/bge-small-en-v1.5) | 0.86 | 0.5636 | 0.5754 |
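As a minimal sketch of how such zero-shot scores can be reproduced (the dataset slice, label verbalizations, and weighted averaging below are our own assumptions; the exact evaluation protocol is not specified here):

from datasets import load_dataset
from sklearn.metrics import f1_score

# Illustrative: score the classifier on a small slice of AG News.
dataset = load_dataset('ag_news', split='test[:200]')
candidate_labels = ['World', 'Sports', 'Business', 'Sci/Tech']  # AG News label order

predictions, references = [], []
for example in dataset:
    result = classifier(example['text'], candidate_labels, multi_label=False)
    predictions.append(candidate_labels.index(result['labels'][0]))
    references.append(example['label'])

print(f1_score(references, predictions, average='weighted'))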
📄 License
This project is licensed under the Apache-2.0 license.
Further Reading
Check out our blog post, "The new milestone in zero-shot capabilities (it's not Generative AI)", where we highlight possible use cases of the model and explain why next-token prediction is not the only way to achieve strong zero-shot capabilities. While most of the AI industry is focused on generative AI and decoder-based models, we are committed to developing encoder-based models. We aim to achieve the same level of generalization for such models as their decoder counterparts. Encoders have several useful properties, such as bidirectional attention, and they are the best choice for many information extraction tasks in terms of efficiency and controllability.
💡 Feedback
We value your input! Share your feedback and suggestions to help us improve our models.
Fill out the feedback form
👥 Join Our Discord
Connect with our community on Discord for news, support, and discussion about our models.
Join Discord