Multilingual E5 Large Instruct Q5 0 GGUF
Developed by yoeven
A multilingual E5 large instruct embedding model supporting text embedding and classification tasks across many languages
Downloads: 14
Release date: 1/6/2025
Model Overview
A multilingual text embedding model based on intfloat/multilingual-e5-large-instruct, supporting a wide range of classification, clustering, and retrieval tasks.
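For context, a minimal sketch of loading the quantized file in embedding mode with llama-cpp-python; the model filename is an assumption, so check the repository's file listing:

```python
# Minimal sketch: load the GGUF file in embedding mode with llama-cpp-python.
from llama_cpp import Llama

model = Llama(
    model_path="multilingual-e5-large-instruct-q5_0.gguf",  # assumed filename
    embedding=True,  # return embeddings instead of generating text
    n_ctx=512,       # E5 models are trained with a 512-token input limit
)

# Plain text works; see the documentation section below for the instruct
# prompt format usually applied to queries.
result = model.create_embedding("How do I reset my password?")
vector = result["data"][0]["embedding"]
print(len(vector))  # 1024 dimensions for the large E5 model
```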
Model Features
Multilingual Support
Processes text in more than 90 languages (see the supported-language list below), including both major and lower-resource languages
High-Performance Classification
Strong results on MTEB classification benchmarks, including over 96% accuracy on the English Amazon polarity classification task
Powerful Retrieval Capability
Strong results across retrieval tasks, with bitext-mining accuracy up to 99.66% on BUCC (de-en)
Broad Applicability
Supports various NLP tasks such as classification, clustering, retrieval, and reranking
Model Capabilities
Text Classification
Text Clustering
Information Retrieval
Bilingual Text Mining
Semantic Similarity Calculation (see the sketch after this list)
Text Reranking
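The semantic-similarity capability reduces to comparing embedding vectors, typically by cosine similarity. A minimal sketch, reusing the `model` handle from the loading example above:

```python
# Minimal sketch: cosine similarity between two embeddings, reusing the
# `model` handle from the loading example above.
import numpy as np

def embed(text: str) -> np.ndarray:
    return np.asarray(model.create_embedding(text)["data"][0]["embedding"])

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between the vectors, in [-1, 1]; higher = more similar.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embed("The sky is blue."), embed("Der Himmel ist blau.")))
print(cosine_similarity(embed("The sky is blue."), embed("I like coffee.")))
```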
Use Cases
E-Commerce
Product Review Classification
Sentiment analysis and classification of multilingual product reviews
Achieved 56.7% accuracy on the English Amazon reviews classification task
Counterfactual Review Detection
Identifying potentially misleading product reviews
Achieved 76.2% accuracy on the English Amazon counterfactual classification task
Information Retrieval
Document Retrieval
Retrieving relevant information from large document collections
Achieved a MAP@10 of 49.2 on the ArguAna dataset
Bilingual Document Alignment
Automatically matching parallel documents across languages
Achieved 99.66% accuracy on the BUCC bitext-mining task (de-en)
Academic Research
Paper Clustering
Topic clustering of academic papers (see the clustering sketch below)
Achieved a v_measure of 46.4 on ArXiv paper clustering
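A minimal sketch of the paper-clustering use case: k-means over abstract embeddings. scikit-learn and the toy abstracts are illustrative assumptions; `embed` is the helper defined earlier:

```python
# Minimal sketch: topic clustering of paper abstracts with k-means over their
# embeddings. scikit-learn and the toy abstracts are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

abstracts = [
    "We propose a transformer architecture for machine translation.",
    "A new attention mechanism improves neural language models.",
    "We study the thermal conductivity of graphene at low temperatures.",
    "Phonon transport in two-dimensional materials is measured.",
]
X = np.stack([embed(a) for a in abstracts])
labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: NLP papers vs. materials-science papers
```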
🚀 multilingual-e5-large-instruct
This repository provides a Q5_0 GGUF quantization of intfloat/multilingual-e5-large-instruct, a multilingual embedding model that performs well across a wide range of tasks and languages.
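The upstream intfloat/multilingual-e5-large-instruct card formats each query with a one-line task instruction, while passages are embedded without a prefix. Assuming that convention carries over to this quantization, a small helper:

```python
# Query formatting used by the upstream instruct model: a one-line task
# description followed by the query. Passages are embedded without a prefix.
def format_query(task_description: str, query: str) -> str:
    return f"Instruct: {task_description}\nQuery: {query}"

text = format_query(
    "Given a web search query, retrieve relevant passages that answer the query",
    "how much protein should a female eat",
)
query_vec = embed(text)  # `embed` as defined earlier
```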
📚 Documentation
Model Information
Property | Details |
---|---|
Model Type | multilingual-e5-large-instruct |
Base Model | intfloat/multilingual-e5-large-instruct |
License | MIT |
Supported Languages | multilingual, af, am, ar, as, az, be, bg, bn, br, bs, ca, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fr, fy, ga, gd, gl, gu, ha, he, hi, hr, hu, hy, id, is, it, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lo, lt, lv, mg, mk, ml, mn, mr, ms, my, ne, nl, no, om, or, pa, pl, ps, pt, ro, ru, sa, sd, si, sk, sl, so, sq, sr, su, sv, sw, ta, te, th, tl, tr, ug, uk, ur, uz, vi, xh, yi, zh |
Model Performance
Classification Tasks
Task | Dataset | Metrics | Value |
---|---|---|---|
Classification | MTEB AmazonCounterfactualClassification (en) | accuracy | 76.23880597014924 |
Classification | MTEB AmazonCounterfactualClassification (en) | ap | 39.07351965022687 |
Classification | MTEB AmazonCounterfactualClassification (en) | f1 | 70.04836733862683 |
Classification | MTEB AmazonCounterfactualClassification (de) | accuracy | 66.71306209850107 |
Classification | MTEB AmazonCounterfactualClassification (de) | ap | 79.01499914759529 |
Classification | MTEB AmazonCounterfactualClassification (de) | f1 | 64.81951817560703 |
Classification | MTEB AmazonCounterfactualClassification (en-ext) | accuracy | 73.85307346326837 |
Classification | MTEB AmazonCounterfactualClassification (en-ext) | ap | 22.447519885878737 |
Classification | MTEB AmazonCounterfactualClassification (en-ext) | f1 | 61.0162730745633 |
Classification | MTEB AmazonCounterfactualClassification (ja) | accuracy | 76.04925053533191 |
Classification | MTEB AmazonCounterfactualClassification (ja) | ap | 23.44983217128922 |
Classification | MTEB AmazonCounterfactualClassification (ja) | f1 | 62.5723230907759 |
Classification | MTEB AmazonPolarityClassification | accuracy | 96.28742500000001 |
Classification | MTEB AmazonPolarityClassification | ap | 94.8449918887462 |
Classification | MTEB AmazonPolarityClassification | f1 | 96.28680923610432 |
Classification | MTEB AmazonReviewsClassification (en) | accuracy | 56.716 |
Classification | MTEB AmazonReviewsClassification (en) | f1 | 55.76510398266401 |
Classification | MTEB AmazonReviewsClassification (de) | accuracy | 52.99999999999999 |
Classification | MTEB AmazonReviewsClassification (de) | f1 | 52.00829994765178 |
Classification | MTEB AmazonReviewsClassification (es) | accuracy | 48.806000000000004 |
Classification | MTEB AmazonReviewsClassification (es) | f1 | 48.082345914983634 |
Classification | MTEB AmazonReviewsClassification (fr) | accuracy | 48.507999999999996 |
Classification | MTEB AmazonReviewsClassification (fr) | f1 | 47.68752844642045 |
Classification | MTEB AmazonReviewsClassification (ja) | accuracy | 47.709999999999994 |
Classification | MTEB AmazonReviewsClassification (ja) | f1 | 47.05870376637181 |
Classification | MTEB AmazonReviewsClassification (zh) | accuracy | 44.662000000000006 |
Classification | MTEB AmazonReviewsClassification (zh) | f1 | 43.42371965372771 |
Classification | MTEB Banking77Classification | accuracy | 85.73376623376623 |
Classification | MTEB Banking77Classification | f1 | 85.68480707214599 |
Classification | MTEB EmotionClassification | accuracy | 51.51 |
Classification | MTEB EmotionClassification | f1 | 47.632159862049896 |
Retrieval Tasks
Task | Dataset | Metrics | Value |
---|---|---|---|
Retrieval | MTEB ArguAna | map_at_1 | 31.721 |
Retrieval | MTEB ArguAna | map_at_10 | 49.221 |
Retrieval | MTEB ArguAna | map_at_100 | 49.884 |
Retrieval | MTEB ArguAna | map_at_1000 | 49.888 |
Retrieval | MTEB ArguAna | map_at_3 | 44.31 |
Retrieval | MTEB ArguAna | map_at_5 | 47.276 |
Retrieval | MTEB ArguAna | mrr_at_1 | 32.432 |
Retrieval | MTEB ArguAna | mrr_at_10 | 49.5 |
Retrieval | MTEB ArguAna | mrr_at_100 | 50.163000000000004 |
Retrieval | MTEB ArguAna | mrr_at_1000 | 50.166 |
Retrieval | MTEB ArguAna | mrr_at_3 | 44.618 |
Retrieval | MTEB ArguAna | mrr_at_5 | 47.541 |
Retrieval | MTEB ArguAna | ndcg_at_1 | 31.721 |
Retrieval | MTEB ArguAna | ndcg_at_10 | 58.384 |
Retrieval | MTEB ArguAna | ndcg_at_100 | 61.111000000000004 |
Retrieval | MTEB ArguAna | ndcg_at_1000 | 61.187999999999995 |
Retrieval | MTEB ArguAna | ndcg_at_3 | 48.386 |
Retrieval | MTEB ArguAna | ndcg_at_5 | 53.708999999999996 |
Retrieval | MTEB ArguAna | precision_at_1 | 31.721 |
Retrieval | MTEB ArguAna | precision_at_10 | 8.741 |
Retrieval | MTEB ArguAna | precision_at_100 | 0.991 |
Retrieval | MTEB ArguAna | precision_at_1000 | 0.1 |
Retrieval | MTEB ArguAna | precision_at_3 | 20.057 |
Retrieval | MTEB ArguAna | precision_at_5 | 14.609 |
Retrieval | MTEB ArguAna | recall_at_1 | 31.721 |
Retrieval | MTEB ArguAna | recall_at_10 | 87.411 |
Retrieval | MTEB ArguAna | recall_at_100 | 99.075 |
Retrieval | MTEB ArguAna | recall_at_1000 | 99.644 |
Retrieval | MTEB ArguAna | recall_at_3 | 60.171 |
Retrieval | MTEB ArguAna | recall_at_5 | 73.044 |
Retrieval | MTEB CQADupstackRetrieval | map_at_1 | 27.764166666666668 |
Retrieval | MTEB CQADupstackRetrieval | map_at_10 | 37.298166666666674 |
Retrieval | MTEB CQADupstackRetrieval | map_at_100 | 38.530166666666666 |
Retrieval | MTEB CQADupstackRetrieval | map_at_1000 | 38.64416666666667 |
Retrieval | MTEB CQADupstackRetrieval | map_at_3 | 34.484833333333334 |
Retrieval | MTEB CQADupstackRetrieval | map_at_5 | 36.0385 |
Retrieval | MTEB CQADupstackRetrieval | mrr_at_1 | 32.93558333333333 |
Retrieval | MTEB CQADupstackRetrieval | mrr_at_10 | 41.589749999999995 |
Retrieval | MTEB CQADupstackRetrieval | mrr_at_100 | 42.425333333333334 |
Retrieval | MTEB CQADupstackRetrieval | mrr_at_1000 | 42.476333333333336 |
Retrieval | MTEB CQADupstackRetrieval | mrr_at_3 | 39.26825 |
Retrieval | MTEB CQADupstackRetrieval | mrr_at_5 | 40.567083333333336 |
Retrieval | MTEB CQADupstackRetrieval | ndcg_at_1 | 32.93558333333333 |
Retrieval | MTEB CQADupstackRetrieval | ndcg_at_10 | 42.706583333333334 |
Retrieval | MTEB CQADupstackRetrieval | ndcg_at_100 | 47.82483333333333 |
Retrieval | MTEB CQADupstackRetrieval | ndcg_at_1000 | 49.95733333333334 |
Retrieval | MTEB CQADupstackRetrieval | ndcg_at_3 | 38.064750000000004 |
Retrieval | MTEB CQADupstackRetrieval | ndcg_at_5 | 40.18158333333333 |
Retrieval | MTEB CQADupstackRetrieval | precision_at_1 | 32.93558333333333 |
Retrieval | MTEB CQADupstackRetrieval | precision_at_10 | 7.459833333333334 |
Retrieval | MTEB CQADupstackRetrieval | precision_at_100 | 1.1830833333333335 |
Retrieval | MTEB CQADupstackRetrieval | precision_at_1000 | 0.15608333333333332 |
Retrieval | MTEB CQADupstackRetrieval | precision_at_3 | 17.5235 |
Retrieval | MTEB CQADupstackRetrieval | precision_at_5 | 12.349833333333333 |
Retrieval | MTEB CQADupstackRetrieval | recall_at_1 | 27.764166666666668 |
Retrieval | MTEB CQADupstackRetrieval | recall_at_10 | 54.31775 |
Retrieval | MTEB CQADupstackRetrieval | recall_at_100 | 76.74350000000001 |
Retrieval | MTEB CQADupstackRetrieval | recall_at_1000 | 91.45208333333332 |
Retrieval | MTEB CQADupstackRetrieval | recall_at_3 | 41.23425 |
Retrieval | MTEB CQADupstackRetrieval | recall_at_5 | 46.73983333333334 |
Retrieval | MTEB ClimateFEVER | map_at_1 | 12.969 |
Retrieval | MTEB ClimateFEVER | map_at_10 | 21.584999999999997 |
Retrieval | MTEB ClimateFEVER | map_at_100 | 23.3 |
Retrieval | MTEB ClimateFEVER | map_at_1000 | 23.5 |
Retrieval | MTEB ClimateFEVER | map_at_3 | 18.218999999999998 |
Retrieval | MTEB ClimateFEVER | map_at_5 | 19.983 |
Retrieval | MTEB ClimateFEVER | mrr_at_1 | 29.316 |
Retrieval | MTEB ClimateFEVER | mrr_at_10 | 40.033 |
Retrieval | MTEB ClimateFEVER | mrr_at_100 | 40.96 |
Retrieval | MTEB ClimateFEVER | mrr_at_1000 | 41.001 |
Retrieval | MTEB ClimateFEVER | mrr_at_3 | 37.123 |
Retrieval | MTEB ClimateFEVER | mrr_at_5 | 38.757999999999996 |
Retrieval | MTEB ClimateFEVER | ndcg_at_1 | 29.316 |
Retrieval | MTEB ClimateFEVER | ndcg_at_10 | 29.858 |
Retrieval | MTEB ClimateFEVER | ndcg_at_100 | 36.756 |
Retrieval | MTEB ClimateFEVER | ndcg_at_1000 | 40.245999999999995 |
Retrieval | MTEB ClimateFEVER | ndcg_at_3 | 24.822 |
Retrieval | MTEB ClimateFEVER | ndcg_at_5 | 26.565 |
Retrieval | MTEB ClimateFEVER | precision_at_1 | 29.316 |
Retrieval | MTEB ClimateFEVER | precision_at_10 | 9.186 |
Retrieval | MTEB ClimateFEVER | precision_at_100 | 1.6549999999999998 |
Retrieval | MTEB ClimateFEVER | precision_at_1000 | 0.22999999999999998 |
Retrieval | MTEB ClimateFEVER | precision_at_3 | 18.436 |
Retrieval | MTEB ClimateFEVER | precision_at_5 | 13.876 |
Retrieval | MTEB ClimateFEVER | recall_at_1 | 12.969 |
Retrieval | MTEB ClimateFEVER | recall_at_10 | 35.142 |
Retrieval | MTEB ClimateFEVER | recall_at_100 | 59.143 |
Retrieval | MTEB ClimateFEVER | recall_at_1000 | 78.594 |
Retrieval | MTEB ClimateFEVER | recall_at_3 | 22.604 |
Retrieval | MTEB ClimateFEVER | recall_at_5 | 27.883000000000003 |
Retrieval | MTEB DBPedia | map_at_1 | 8.527999999999999 |
Retrieval | MTEB DBPedia | map_at_10 | 17.974999999999998 |
Retrieval | MTEB DBPedia | map_at_100 | 25.665 |
Retrieval | MTEB DBPedia | map_at_1000 | 27.406000000000002 |
Retrieval | MTEB DBPedia | map_at_3 | 13.017999999999999 |
Retrieval | MTEB DBPedia | map_at_5 | 15.137 |
Retrieval | MTEB DBPedia | mrr_at_1 | 62.5 |
Retrieval | MTEB DBPedia | mrr_at_10 | 71.891 |
Retrieval | MTEB DBPedia | mrr_at_100 | 72.294 |
Retrieval | MTEB DBPedia | mrr_at_1000 | 72.296 |
Retrieval | MTEB DBPedia | mrr_at_3 | 69.958 |
Retrieval | MTEB DBPedia | mrr_at_5 | 71.121 |
Retrieval | MTEB DBPedia | ndcg_at_1 | 50.875 |
Retrieval | MTEB DBPedia | ndcg_at_10 | 38.36 |
Retrieval | MTEB DBPedia | ndcg_at_100 | 44.235 |
Retrieval | MTEB DBPedia | ndcg_at_1000 | 52.154 |
Retrieval | MTEB DBPedia | ndcg_at_3 | 43.008 |
Retrieval | MTEB DBPedia | ndcg_at_5 | 40.083999999999996 |
Retrieval | MTEB DBPedia | precision_at_1 | 62.5 |
Retrieval | MTEB DBPedia | precision_at_10 | 30.0 |
Retrieval | MTEB DBPedia | precision_at_100 | 10.038 |
Retrieval | MTEB DBPedia | precision_at_1000 | 2.0869999999999997 |
Retrieval | MTEB DBPedia | precision_at_3 | 46.833000000000006 |
Retrieval | MTEB DBPedia | precision_at_5 | 38.800000000000004 |
Retrieval | MTEB DBPedia | recall_at_1 | 8.527999999999999 |
Retrieval | MTEB DBPedia | recall_at_10 | 23.828 |
Retrieval | MTEB DBPedia | recall_at_100 | 52.322 |
Retrieval | MTEB DBPedia | recall_at_1000 | 77.143 |
Retrieval | MTEB DBPedia | recall_at_3 | 14.136000000000001 |
Retrieval | MTEB DBPedia | recall_at_5 | 17.761 |
Retrieval | MTEB FEVER | map_at_1 | 60.734 |
Retrieval | MTEB FEVER | map_at_10 | 72.442 |
Retrieval | MTEB FEVER | map_at_100 | 72.735 |
Retrieval | MTEB FEVER | map_at_1000 | 72.75 |
Retrieval | MTEB FEVER | map_at_3 | 70.41199999999999 |
Retrieval | MTEB FEVER | map_at_5 | 71.80499999999999 |
Retrieval | MTEB FEVER | mrr_at_1 | 65.212 |
Retrieval | MTEB FEVER | mrr_at_10 | 76.613 |
Retrieval | MTEB FEVER | mrr_at_100 | 76.79899999999999 |
Retrieval | MTEB FEVER | mrr_at_1000 | 76.801 |
Retrieval | MTEB FEVER | mrr_at_3 | 74.8 |
Retrieval | MTEB FEVER | mrr_at_5 | 76.12400000000001 |
Retrieval | MTEB FEVER | ndcg_at_1 | 65.212 |
Retrieval | MTEB FEVER | ndcg_at_10 | 77.988 |
Retrieval | MTEB FEVER | ndcg_at_100 | 79.167 |
Retrieval | MTEB FEVER | ndcg_at_1000 | 79.452 |
Retrieval | MTEB FEVER | ndcg_at_3 | 74.362 |
Retrieval | MTEB FEVER | ndcg_at_5 | 76.666 |
Retrieval | MTEB FEVER | precision_at_1 | 65.212 |
Retrieval | MTEB FEVER | precision_at_10 | 10.003 |
Retrieval | MTEB FEVER | precision_at_100 | 1.077 |
Retrieval | MTEB FEVER | precision_at_1000 | 0.11199999999999999 |
Retrieval | MTEB FEVER | precision_at_3 | 29.518 |
Retrieval | MTEB FEVER | precision_at_5 | 19.016 |
Retrieval | MTEB FEVER | recall_at_1 | 60.734 |
Retrieval | MTEB FEVER | recall_at_10 | 90.824 |
Retrieval | MTEB FEVER | recall_at_100 | 95.71600000000001 |
Retrieval | MTEB FEVER | recall_at_1000 | 97.577 |
Retrieval | MTEB FEVER | recall_at_3 | 81.243 |
Retrieval | MTEB FEVER | recall_at_5 | 86.90299999999999 |
Retrieval | MTEB FiQA2018 | map_at_1 | 23.845 |
Retrieval | MTEB FiQA2018 | map_at_10 | 39.281 |
Retrieval | MTEB FiQA2018 | map_at_100 | 41.422 |
Retrieval | MTEB FiQA2018 | map_at_1000 | 41.593 |
Retrieval | MTEB FiQA2018 | map_at_3 | 34.467 |
Retrieval | MTEB FiQA2018 | map_at_5 | 37.017 |
Retrieval | MTEB FiQA2018 | mrr_at_1 | 47.531 |
Retrieval | MTEB FiQA2018 | mrr_at_10 | 56.204 |
Retrieval | MTEB FiQA2018 | mrr_at_100 | 56.928999999999995 |
Retrieval | MTEB FiQA2018 | mrr_at_1000 | 56.962999999999994 |
Retrieval | MTEB FiQA2018 | mrr_at_3 | 54.115 |
Retrieval | MTEB FiQA2018 | mrr_at_5 | 55.373000000000005 |
Retrieval | MTEB FiQA2018 | ndcg_at_1 | 47.531 |
Retrieval | MTEB FiQA2018 | ndcg_at_10 | 47.711999999999996 |
Retrieval | MTEB FiQA2018 | ndcg_at_100 | 54.510999999999996 |
Retrieval | MTEB FiQA2018 | ndcg_at_1000 | 57.103 |
Retrieval | MTEB FiQA2018 | ndcg_at_3 | 44.145 |
Retrieval | MTEB FiQA2018 | ndcg_at_5 | 45.032 |
Retrieval | MTEB FiQA2018 | precision_at_1 | 47.531 |
Retrieval | MTEB FiQA2018 | precision_at_10 | 13.194 |
Retrieval | MTEB FiQA2018 | precision_at_100 | 2.045 |
Retrieval | MTEB FiQA2018 | precision_at_1000 | 0.249 |
Retrieval | MTEB FiQA2018 | precision_at_3 | 29.424 |
Retrieval | MTEB FiQA2018 | precision_at_5 | 21.451 |
Retrieval | MTEB FiQA2018 | recall_at_1 | 23.845 |
Retrieval | MTEB FiQA2018 | recall_at_10 | 54.967 |
Retrieval | MTEB FiQA2018 | recall_at_100 | 79.11399999999999 |
Retrieval | MTEB FiQA2018 | recall_at_1000 | 94.56700000000001 |
Retrieval | MTEB FiQA2018 | recall_at_3 | 40.256 |
Retrieval | MTEB FiQA2018 | recall_at_5 | 46.215 |
Retrieval | MTEB HotpotQA | map_at_1 | 37.819 |
Retrieval | MTEB HotpotQA | map_at_10 | 60.889 |
Retrieval | MTEB HotpotQA | map_at_100 | 61.717999999999996 |
Retrieval | MTEB HotpotQA | map_at_1000 | 61.778 |
Retrieval | MTEB HotpotQA | map_at_3 | 57.254000000000005 |
Retrieval | MTEB HotpotQA | map_at_5 | 59.541 |
Retrieval | MTEB HotpotQA | mrr_at_1 | 75.638 |
Retrieval | MTEB HotpotQA | mrr_at_10 | 82.173 |
Retrieval | MTEB HotpotQA | mrr_at_100 | 82.362 |
Retrieval | MTEB HotpotQA | mrr_at_1000 | 82.37 |
Retrieval | MTEB HotpotQA | mrr_at_3 | 81.089 |
Retrieval | MTEB HotpotQA | mrr_at_5 | 81.827 |
Retrieval | MTEB HotpotQA | ndcg_at_1 | 75.638 |
Retrieval | MTEB HotpotQA | ndcg_at_10 | 69.317 |
Retrieval | MTEB HotpotQA | ndcg_at_100 | 72.221 |
Retrieval | MTEB HotpotQA | ndcg_at_1000 | 73.382 |
Retrieval | MTEB HotpotQA | ndcg_at_3 | 64.14 |
Retrieval | MTEB HotpotQA | ndcg_at_5 | 67.07600000000001 |
Retrieval | MTEB HotpotQA | precision_at_1 | 75.638 |
Retrieval | MTEB HotpotQA | precision_at_10 | 14.704999999999998 |
Retrieval | MTEB HotpotQA | precision_at_100 | 1.698 |
Retrieval | MTEB HotpotQA | precision_at_1000 | 0.185 |
Retrieval | MTEB HotpotQA | precision_at_3 | 41.394999999999996 |
Retrieval | MTEB HotpotQA | precision_at_5 | 27.162999999999997 |
Clustering Tasks
Task | Dataset | Metrics | Value |
---|---|---|---|
Clustering | MTEB ArxivClusteringP2P | v_measure | 46.40419580759799 |
Clustering | MTEB ArxivClusteringS2S | v_measure | 40.48593255007969 |
Clustering | MTEB BiorxivClusteringP2P | v_measure | 40.935218072113855 |
Clustering | MTEB BiorxivClusteringS2S | v_measure | 36.276389017675264 |
Reranking Tasks
Task | Dataset | Metrics | Value |
---|---|---|---|
Reranking | MTEB AskUbuntuDupQuestions | map | 63.889179122289995 |
Reranking | MTEB AskUbuntuDupQuestions | mrr | 77.61146286769556 |
STS Tasks
Task | Dataset | Metrics | Value |
---|---|---|---|
STS | MTEB BIOSSES | cos_sim_pearson | 88.15075203727929 |
STS | MTEB BIOSSES | cos_sim_spearman | 86.9622224570873 |
STS | MTEB BIOSSES | euclidean_pearson | 86.70473853624121 |
STS | MTEB BIOSSES | euclidean_spearman | 86.9622224570873 |
STS | MTEB BIOSSES | manhattan_pearson | 86.21089380980065 |
STS | MTEB BIOSSES | manhattan_spearman | 86.75318154937008 |
BitextMining Tasks
Task | Dataset | Metrics | Value |
---|---|---|---|
BitextMining | MTEB BUCC (de-en) | accuracy | 99.65553235908142 |
BitextMining | MTEB BUCC (de-en) | f1 | 99.60681976339595 |
BitextMining | MTEB BUCC (de-en) | precision | 99.58246346555325 |
BitextMining | MTEB BUCC (de-en) | recall | 99.65553235908142 |
BitextMining | MTEB BUCC (fr-en) | accuracy | 99.26260180497468 |
BitextMining | MTEB BUCC (fr-en) | f1 | 99.14520507740848 |
BitextMining | MTEB BUCC (fr-en) | precision | 99.08650671362535 |
BitextMining | MTEB BUCC (fr-en) | recall | 99.26260180497468 |
BitextMining | MTEB BUCC (ru-en) | accuracy | 98.07412538967787 |
BitextMining | MTEB BUCC (ru-en) | f1 | 97.86629719431936 |
BitextMining | MTEB BUCC (ru-en) | precision | 97.76238309664012 |
BitextMining | MTEB BUCC (ru-en) | recall | 98.07412538967787 |
BitextMining | MTEB BUCC (zh-en) | accuracy | 99.42074776197998 |
BitextMining | MTEB BUCC (zh-en) | f1 | 99.38564156573635 |
BitextMining | MTEB BUCC (zh-en) | precision | 99.36808846761454 |
BitextMining | MTEB BUCC (zh-en) | recall | 99.42074776197998 |
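The BUCC scores above come from matching each sentence to its nearest neighbor in the other language's embedding space. A minimal sketch with toy corpora (the sentences are illustrative assumptions; `embed` is the helper defined earlier):

```python
# Minimal bitext-mining sketch: pair each German sentence with its nearest
# English neighbor by cosine similarity.
import numpy as np

def embed_all(texts: list[str]) -> np.ndarray:
    mat = np.stack([embed(t) for t in texts])
    return mat / np.linalg.norm(mat, axis=1, keepdims=True)  # L2-normalize rows

de = ["Der Himmel ist blau.", "Ich trinke gerne Kaffee."]
en = ["I like drinking coffee.", "The sky is blue."]

sims = embed_all(de) @ embed_all(en).T  # cosine-similarity matrix
for i, j in enumerate(sims.argmax(axis=1)):
    print(f"{de[i]!r} -> {en[j]!r}")
```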
📄 License
This project is licensed under the MIT license.