🚀 Markobes/mt0-xxl-mt-Q4_K_M-GGUF
This model was converted to the GGUF format from the original model bigscience/mt0-xxl-mt using llama.cpp via ggml.ai's GGUF-my-repo space. For more in-depth details about the model, please refer to the original model card.
🚀 Quick Start
Use with llama.cpp
You can install llama.cpp using brew, which works on both Mac and Linux.
brew install llama.cpp
After installation, you can either use the llama.cpp server or the CLI.
CLI
llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "The meaning to life and the universe is"
Server
llama-server --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -c 2048
Note: You can also follow the usage steps in the Llama.cpp repository to use this checkpoint directly.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Navigate to the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference using the main binary.
./llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "The meaning to life and the universe is"
Or
./llama-server --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -c 2048
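Once the server is running, you can send it requests over HTTP. A minimal sketch, assuming the server is listening on its default address 127.0.0.1:8080 and using the /completion endpoint:
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 64}'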
✨ Features
- Multi-language support: The model supports a wide range of languages, including but not limited to Afrikaans (af), Amharic (am), Arabic (ar), and many others.
- Text-to-text generation: It is designed for text-to-text generation tasks such as translation, question answering, and more (see the example after this list).
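As a quick illustration of such a text-to-text task (a sketch only; the prompt wording below is an assumption, not taken from the original card), you can pass a translation instruction directly as the prompt:
llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "Translate to German: Life is beautiful!"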
📦 Installation
Install llama.cpp
brew install llama.cpp
💻 Usage Examples
Basic Usage
CLI
llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "The meaning to life and the universe is"
Server
llama-server --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -c 2048
Advanced Usage
If you want to use hardware-specific optimizations, for example for Nvidia GPUs on Linux, you can build llama.cpp with the LLAMA_CUDA=1 flag.
cd llama.cpp && LLAMA_CUDA=1 LLAMA_CURL=1 make
Then run inference:
./llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "The meaning to life and the universe is"
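The CLI also accepts common generation flags; the sketch below adds an output-length limit (-n) and a sampling temperature (--temp), with values chosen here purely as an example:
./llama-cli --hf-repo Markobes/mt0-xxl-mt-Q4_K_M-GGUF --hf-file mt0-xxl-mt-q4_k_m.gguf -p "The meaning to life and the universe is" -n 128 --temp 0.7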
📚 Documentation
Datasets
| Property | Details |
|----------|---------|
| Datasets | bigscience/xP3mt, mc4 |
Languages
The model supports the following languages: af, am, ar, az, be, bg, bn, ca, ceb, co, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, fy, ga, gd, gl, gu, ha, haw, hi, hmn, ht, hu, hy, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, la, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, und, ur, uz, vi, xh, yi, yo, zh, zu.
Tags
text2text-generation
llama-cpp
gguf-my-repo
Widget Examples
| Text | Example Title |
|------|---------------|
| Life is beautiful! Translate to Mongolian. | mn-en translation |
| Le mot japonais «憂鬱» veut dire quoi en Odia? | jp-or-fr translation |
| Stell mir eine schwierige Quiz Frage bei der es um Astronomie geht. Bitte stell die Frage auf Norwegisch. | de-nb quiz |
| 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。Would you rate the previous review as positive, neutral or negative? | zh-en sentiment |
| 一个传奇的开端,一个不灭的神话,这不仅仅是一部电影,而是作为一个走进新时代的标签,永远彪炳史册。你认为这句话的立场是赞扬、中立还是批评? | zh-zh sentiment |
| Suggest at least five related search terms to "Mạng neural nhân tạo". | vi-en query |
| Proposez au moins cinq mots clés concernant «Réseau de neurones artificiels». | fr-fr query |
| Explain in a sentence in Telugu what is backpropagation in neural networks. | te-en qa |
| Why is the sky blue? | en-en qa |
| Write a fairy tale about a troll saving a princess from a dangerous dragon. The fairy tale is a masterpiece that has achieved praise worldwide and its moral is "Heroes Come in All Shapes and Sizes". Story (in Spanish): | es-en fable |
| Write a fable about wood elves living in a forest that is suddenly invaded by ogres. The fable is a masterpiece that has achieved praise worldwide and its moral is "Violence is the last refuge of the incompetent". Fable (in Hindi): | hi-en fable |
Model Index
The model mt0-xxl-mt has the following results on different tasks and datasets:
| Task | Dataset | Accuracy |
|------|---------|----------|
| Coreference resolution | Winogrande XL (xl) | 62.67 |
| Coreference resolution | XWinograd (en) | 83.31 |
| Coreference resolution | XWinograd (fr) | 78.31 |
| Coreference resolution | XWinograd (jp) | 80.19 |
| Coreference resolution | XWinograd (pt) | 80.99 |
| Coreference resolution | XWinograd (ru) | 79.05 |
| Coreference resolution | XWinograd (zh) | 82.34 |
| Natural language inference | ANLI (r1) | 49.5 |
| Natural language inference | ANLI (r2) | 42 |
| Natural language inference | ANLI (r3) | 48.17 |
| Natural language inference | SuperGLUE (cb) | 87.5 |
| Natural language inference | SuperGLUE (rte) | 84.84 |
| Natural language inference | XNLI (ar) | 58.03 |
| Natural language inference | XNLI (bg) | 59.92 |
| Natural language inference | XNLI (de) | 60.16 |
| Natural language inference | XNLI (el) | 59.2 |
| Natural language inference | XNLI (en) | 62.25 |
| Natural language inference | XNLI (es) | 60.92 |
| Natural language inference | XNLI (fr) | 59.88 |
| Natural language inference | XNLI (hi) | 57.47 |
| Natural language inference | XNLI (ru) | 58.67 |
| Natural language inference | XNLI (sw) | 56.79 |
| Natural language inference | XNLI (th) | 58.03 |
| Natural language inference | XNLI (tr) | 57.67 |
| Natural language inference | XNLI (ur) | 55.98 |
| Natural language inference | XNLI (vi) | 58.92 |
| Natural language inference | XNLI (zh) | 58.71 |
| Sentence completion | StoryCloze (2016) | 94.66 |
| Sentence completion | SuperGLUE (copa) | 88 |
| Sentence completion | XCOPA (et) | 81 |
| Sentence completion | XCOPA (ht) | 79 |
| Sentence completion | XCOPA (id) | 90 |
| Sentence completion | XCOPA (it) | 88 |
| Sentence completion | XCOPA (qu) | 56 |
| Sentence completion | XCOPA (sw) | 81 |
| Sentence completion | XCOPA (ta) | 81 |
| Sentence completion | XCOPA (th) | 76 |
| Sentence completion | XCOPA (tr) | 76 |
| Sentence completion | XCOPA (vi) | 85 |
| Sentence completion | XCOPA (zh) | 87 |
| Sentence completion | XStoryCloze (ar) | 91 |
| Sentence completion | XStoryCloze (es) | 93.38 |
| Sentence completion | XStoryCloze (eu) | 91.13 |
| Sentence completion | XStoryCloze (hi) | 90.73 |
| Sentence completion | XStoryCloze (id) | 93.05 |
| Sentence completion | XStoryCloze (my) | 86.7 |
| Sentence completion | XStoryCloze (ru) | 91.66 |
| Sentence completion | XStoryCloze (sw) | 89.61 |
| Sentence completion | XStoryCloze (te) | 90.4 |
| Sentence completion | XStoryCloze (zh) | 93.05 |
📄 License
The model is released under the apache-2.0 license.