# 🚀 gte_tiny

gte_tiny is a model that delivers strong performance across a wide range of natural language processing tasks. The evaluation results for each task and dataset are listed below.

## 📚 Detailed Documentation
### Classification

| Dataset | Accuracy | Average precision (AP) | F1 |
| --- | --- | --- | --- |
| MTEB AmazonCounterfactualClassification (en) | 71.7612 | 34.6366 | 65.8894 |
| MTEB AmazonPolarityClassification | 86.6132 | 81.7476 | 86.5863 |
| MTEB AmazonReviewsClassification (en) | 42.6100 | - | 42.2217 |
| MTEB Banking77Classification | 81.7305 | - | 81.6637 |
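For reference, the three classification metrics reported above (accuracy, average precision, F1) can be computed as in this minimal sketch. The labels, predictions, and scores here are hypothetical toy data, not taken from the MTEB datasets:

```python
# Toy illustration of accuracy, average precision (AP), and F1
# for binary classification. All data below is hypothetical.

def accuracy(y_true, y_pred):
    # Fraction of examples where the prediction matches the label.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1(y_true, y_pred):
    # Harmonic mean of precision and recall for the positive class.
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    s = precision + recall
    return 2 * precision * recall / s if s else 0.0

def average_precision(y_true, scores):
    # Rank examples by score (descending); AP is the mean of
    # precision@k taken at each rank k holding a true positive.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            hits += 1
            total += hits / rank
    return total / sum(y_true)

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.6]
print(accuracy(y_true, y_pred))           # 4 of 6 correct
print(f1(y_true, y_pred))
print(average_precision(y_true, scores))
```

MTEB reports these as percentages, so the table values correspond to these ratios multiplied by 100.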
### Retrieval

| Dataset | map_at_1 | map_at_10 | map_at_100 | map_at_1000 | mrr_at_1 | mrr_at_10 | mrr_at_100 | mrr_at_1000 | ndcg_at_1 | ndcg_at_10 | ndcg_at_100 | ndcg_at_1000 | precision_at_1 | precision_at_10 | precision_at_100 | precision_at_1000 | recall_at_1 | recall_at_10 | recall_at_100 | recall_at_1000 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MTEB ArguAna | 28.378 | 44.565 | 45.48 | 45.487 | 29.445 | 44.956 | 45.877 | 45.884 | 28.378 | 53.638 | 57.354 | 57.513 | 28.378 | 8.272 | 0.984 | 0.1 | 28.378 | 82.717 | 98.435 | 99.644 |
| MTEB CQADupstackAndroidRetrieval | 29.16 | 40.474 | 41.905 | 42.041 | 36.91 | 46.496 | 47.288 | 47.34 | 36.91 | 46.722 | 51.969 | 54.232 | 36.91 | 9.013 | 1.455 | 0.193 | 29.16 | 58.521 | 80.323 | 95.13 |
| MTEB CQADupstackEnglishRetrieval | 27.75 | 36.39 | 37.5 | 37.625 | 34.14 | 41.841 | 42.469 | 42.521 | 34.14 | 41.409 | 45.668 | 47.916 | 34.14 | 7.739 | 1.263 | 0.173 | 27.75 | 49.882 | 68.556 | 83.186 |
| MTEB CQADupstackGamingRetrieval | 36.879 | 48.878 | 49.918 | 49.978 | 42.696 | 52.342 | 53.044 | 53.077 | 42.696 | 54.469 | 58.664 | 59.951 | 42.696 | 8.734 | 1.177 | 0.133 | 36.879 | 67.669 | 85.822 | 95.092 |
| MTEB CQADupstackGisRetrieval | 22.942 | 31.742 | 32.721 | 32.809 | 24.746 | 33.517 | 34.451 | 34.522 | 24.746 | 36.553 | 41.53 | 43.811 | 24.746 | 5.684 | 0.859 | 0.109 | 22.942 | 49.58 | 72.614 | 89.892 |
| MTEB CQADupstackMathematicaRetrieval | 15.345 | 22.428 | 23.756 | 23.872 | 19.279 | 27.1 | 28.211 | 28.279 | 19.279 | 27.36 | 33.499 | 36.452 | 19.279 | 5.149 | 0.938 | 0.133 | 15.345 | 37.975 | 64.472 | 85.972 |
| MTEB CQADupstackPhysicsRetrieval | 26.362 | 36.406 | 37.726 | 37.84 | 32.146 | 41.674 | 42.478 | 42.524 | 32.146 | 42.374 | 47.919 | 50.013 | 32.146 | 7.767 | 1.236 | 0.16 | 26.362 | 54.988 | 78.502 | 92.146 |
| MTEB CQADupstackProgrammersRetrieval | 24.417 | 33.161 | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - |

("-" marks values that were not reported.)
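The ranking metrics in the table (precision@k, recall@k, MRR, nDCG@k) can be illustrated for a single query with binary relevance; MTEB averages such per-query scores over the whole query set. The document IDs below are hypothetical:

```python
# Minimal sketch of per-query retrieval metrics with binary
# relevance. `ranked` is the retrieval order for one query and
# `relevant` the ground-truth relevant docs (both hypothetical).
import math

def precision_at_k(ranked, relevant, k):
    # Fraction of the top-k results that are relevant.
    return sum(d in relevant for d in ranked[:k]) / k

def recall_at_k(ranked, relevant, k):
    # Fraction of all relevant docs found in the top k.
    return sum(d in relevant for d in ranked[:k]) / len(relevant)

def mrr(ranked, relevant):
    # Reciprocal of the rank of the first relevant result.
    for rank, d in enumerate(ranked, start=1):
        if d in relevant:
            return 1 / rank
    return 0.0

def ndcg_at_k(ranked, relevant, k):
    # DCG with binary gains, normalized by the ideal ordering
    # (assumes `relevant` is non-empty).
    dcg = sum(1 / math.log2(r + 1)
              for r, d in enumerate(ranked[:k], start=1) if d in relevant)
    ideal = sum(1 / math.log2(r + 1)
                for r in range(1, min(k, len(relevant)) + 1))
    return dcg / ideal

ranked = ["d3", "d1", "d7", "d2", "d5"]
relevant = {"d1", "d2"}
print(precision_at_k(ranked, relevant, 5))   # 2 relevant in top 5
print(recall_at_k(ranked, relevant, 5))      # both relevant docs found
print(mrr(ranked, relevant))                 # first hit at rank 2
print(ndcg_at_k(ranked, relevant, 5))
```

As with the classification metrics, the table reports these ratios as percentages.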
### Clustering

| Dataset | V-measure |
| --- | --- |
| MTEB ArxivClusteringP2P | 46.6373 |
| MTEB ArxivClusteringS2S | 36.0113 |
| MTEB BiorxivClusteringP2P | 39.1862 |
| MTEB BiorxivClusteringS2S | 32.1270 |
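V-measure, the metric above, is the harmonic mean of homogeneity (each cluster contains members of a single class) and completeness (each class ends up in a single cluster), both defined via entropies. A self-contained sketch on toy labels:

```python
# Minimal sketch of V-measure from entropy and conditional entropy.
# The label/cluster assignments below are hypothetical toy data.
import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log(c / n) for c in Counter(labels).values())

def conditional_entropy(labels, given):
    # H(labels | given): entropy of `labels` within each group
    # induced by `given`, weighted by group size.
    n = len(labels)
    h = 0.0
    for g in set(given):
        sub = [l for l, gv in zip(labels, given) if gv == g]
        h += len(sub) / n * entropy(sub)
    return h

def v_measure(true_labels, clusters):
    h_c, h_k = entropy(true_labels), entropy(clusters)
    homogeneity = 1 - conditional_entropy(true_labels, clusters) / h_c if h_c else 1.0
    completeness = 1 - conditional_entropy(clusters, true_labels) / h_k if h_k else 1.0
    s = homogeneity + completeness
    return 2 * homogeneity * completeness / s if s else 0.0

print(v_measure([0, 0, 1, 1], [0, 0, 1, 1]))  # perfect clustering
print(v_measure([0, 0, 1, 1], [0, 1, 0, 1]))  # clusters ignore classes
```

The table values again correspond to this score multiplied by 100.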
### Reranking

| Dataset | map | mrr |
| --- | --- | --- |
| MTEB AskUbuntuDupQuestions | 59.8292 | 74.7888 |
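The reranking scores are per-query values averaged over the query set; the "mean" in mean reciprocal rank (mrr) and mean average precision (map) refers to that averaging step. A toy sketch with hypothetical queries:

```python
# Averaging per-query reciprocal ranks into MRR (hypothetical data).

def reciprocal_rank(ranked, relevant):
    # 1/rank of the first relevant candidate, 0 if none found.
    for rank, d in enumerate(ranked, start=1):
        if d in relevant:
            return 1 / rank
    return 0.0

queries = [
    (["a", "b", "c"], {"a"}),  # first hit at rank 1 -> RR = 1
    (["x", "y", "z"], {"z"}),  # first hit at rank 3 -> RR = 1/3
]
mrr = sum(reciprocal_rank(r, rel) for r, rel in queries) / len(queries)
print(mrr)  # mean of 1 and 1/3
```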
### Semantic Textual Similarity (STS)

| Dataset | cos_sim_pearson | cos_sim_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman |
| --- | --- | --- | --- | --- | --- | --- |
| MTEB BIOSSES | 87.1658 | 86.6279 | 85.4813 | 86.4815 | 85.0750 | 86.3947 |
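Each STS score correlates a similarity the model assigns to sentence pairs (cosine, Euclidean, or Manhattan over the embeddings) with human similarity ratings, via Pearson or Spearman correlation. A self-contained sketch of the two correlation measures on hypothetical similarities and ratings:

```python
# Pearson and Spearman correlation, as used for the STS scores.
# The similarity and rating lists below are hypothetical toy data.

def pearson(x, y):
    # Covariance of x and y divided by the product of their
    # standard deviations.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    # Pearson correlation of the ranks (simplified: assumes no ties).
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

cos_sims = [0.91, 0.35, 0.78, 0.12]   # hypothetical model similarities
gold     = [4.8, 1.5, 4.0, 0.7]       # hypothetical human ratings
print(pearson(cos_sims, gold))
print(spearman(cos_sims, gold))
```

Since the two lists above happen to rank the pairs identically, the Spearman value is 1; the table reports both correlations multiplied by 100.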