🚀 Model Information
This document provides detailed information about the bloomz-7b1 model, including its datasets, license, supported languages, programming languages, and performance metrics.
📦 Datasets
The model is trained on the following dataset:
📄 License
The model is released under the bigscience-bloom-rail-1.0 license.
🌐 Supported Languages
The model supports a wide range of languages, including:
- ak, ar, as, bm, bn, ca, code, en, es, eu, fon, fr, gu, hi, id, ig, ki, kn, lg, ln, ml, mr, ne, nso, ny, or, pa, pt, rn, rw, sn, st, sw, ta, te, tn, ts, tum, tw, ur, vi, wo, xh, yo, zh, zu
💻 Programming Languages
The model is compatible with the following programming languages:
- C, C++, C#, Go, Java, JavaScript, Lua, PHP, Python, Ruby, Rust, Scala, TypeScript
📊 Pipeline Tag
The model's pipeline tag is text-generation.
🎨 Widget Examples
The following example prompts demonstrate the model's capabilities:

| Example Title | Text |
|---|---|
| zh-en sentiment | "A legendary beginning, an immortal myth. This is not just a movie, but a label for entering a new era, forever inscribed in the annals of history. Would you rate the previous review as positive, neutral or negative?" |
| zh-zh sentiment | "一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评?" |
| vi-en query | "Suggest at least five related search terms to "Mạng neural nhân tạo"." |
| fr-fr query | "Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels»." |
| te-en qa | "Explain in a sentence in Telugu what is backpropagation in neural networks." |
| en-en qa | "Why is the sky blue?" |
| es-en fable | "Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish):" |
| hi-en fable | "Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is "Violence is the last refuge of the incompetent". Fable (in Hindi):" |
📈 Model Performance
The bloomz-7b1 model has been evaluated on a variety of tasks and datasets, with the following metrics:
Coreference Resolution
| Dataset | Name | Config | Split | Accuracy |
|---|---|---|---|---|
| winogrande | Winogrande XL (xl) | xl | validation | 55.8 |
| Muennighoff/xwinograd | XWinograd (en) | en | test | 66.02 |
| Muennighoff/xwinograd | XWinograd (fr) | fr | test | 57.83 |
| Muennighoff/xwinograd | XWinograd (jp) | jp | test | 52.87 |
| Muennighoff/xwinograd | XWinograd (pt) | pt | test | 57.79 |
| Muennighoff/xwinograd | XWinograd (ru) | ru | test | 54.92 |
| Muennighoff/xwinograd | XWinograd (zh) | zh | test | 63.69 |
Natural Language Inference
| Dataset | Name | Config | Split | Accuracy |
|---|---|---|---|---|
| anli | ANLI (r1) | r1 | validation | 42.1 |
| anli | ANLI (r2) | r2 | validation | 39.5 |
| anli | ANLI (r3) | r3 | validation | 41.0 |
| super_glue | SuperGLUE (cb) | cb | validation | 80.36 |
| super_glue | SuperGLUE (rte) | rte | validation | 84.12 |
| xnli | XNLI (ar) | ar | validation | 53.25 |
| xnli | XNLI (bg) | bg | validation | 43.61 |
| xnli | XNLI (de) | de | validation | 46.83 |
| xnli | XNLI (el) | el | validation | 41.53 |
| xnli | XNLI (en) | en | validation | 59.68 |
| xnli | XNLI (es) | es | validation | 55.1 |
| xnli | XNLI (fr) | fr | validation | 55.26 |
| xnli | XNLI (hi) | hi | validation | 50.88 |
| xnli | XNLI (ru) | ru | validation | 47.75 |
| xnli | XNLI (sw) | sw | validation | 46.63 |
| xnli | XNLI (th) | th | validation | 40.12 |
| xnli | XNLI (tr) | tr | validation | 37.55 |
| xnli | XNLI (ur) | ur | validation | 46.51 |
| xnli | XNLI (vi) | vi | validation | 52.93 |
| xnli | XNLI (zh) | zh | validation | 53.61 |
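XNLI scores are reported per language only; as a quick sanity check, a macro-average over the fifteen XNLI rows above can be computed directly from the table values (a minimal sketch; the variable names are illustrative):

```python
# Per-language XNLI validation accuracies, copied from the table above.
xnli_accuracy = {
    "ar": 53.25, "bg": 43.61, "de": 46.83, "el": 41.53, "en": 59.68,
    "es": 55.1,  "fr": 55.26, "hi": 50.88, "ru": 47.75, "sw": 46.63,
    "th": 40.12, "tr": 37.55, "ur": 46.51, "vi": 52.93, "zh": 53.61,
}

# Macro-average: unweighted mean over languages.
macro_avg = sum(xnli_accuracy.values()) / len(xnli_accuracy)
print(f"{macro_avg:.2f}")  # → 48.75
```

The unweighted mean treats every language equally regardless of its share of the XNLI validation set.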
Program Synthesis
| Dataset | Name | Config | Split | Pass@1 | Pass@10 | Pass@100 |
|---|---|---|---|---|---|---|
| openai_humaneval | HumanEval | None | test | 8.06 | 15.03 | 27.49 |
Sentence Completion
| Dataset | Name | Config | Split | Accuracy |
|---|---|---|---|---|
| story_cloze | StoryCloze (2016) | 2016 | validation | 90.43 |
| super_glue | SuperGLUE (copa) | copa | validation | 86.0 |
| xcopa | XCOPA (et) | et | validation | 50.0 |
| xcopa | XCOPA (ht) | ht | validation | 54.0 |
| xcopa | XCOPA (id) | id | validation | 76.0 |
| xcopa | XCOPA (it) | it | validation | 61.0 |
| xcopa | XCOPA (qu) | qu | validation | 60.0 |
| xcopa | XCOPA (sw) | sw | validation | 63.0 |
| xcopa | XCOPA (ta) | ta | validation | 64.0 |
| xcopa | XCOPA (th) | th | validation | 57.0 |
| xcopa | XCOPA (tr) | tr | validation | 53.0 |
| xcopa | XCOPA (vi) | vi | validation | 79.0 |
| xcopa | XCOPA (zh) | zh | validation | 81.0 |
| Muennighoff/xstory_cloze | XStoryCloze (ar) | ar | validation | 83.26 |
| Muennighoff/xstory_cloze | XStoryCloze (es) | es | validation | 88.95 |
| Muennighoff/xstory_cloze | XStoryCloze (eu) | eu | validation | [Value missing in original] |