# Carrot Llama-3.2 Rabbit Ko
Carrot Llama-3.2 Rabbit Ko is an instruction-tuned large language model that supports Korean and English and offers high-quality text generation.

## Quick Start

### Usage Examples

#### Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct")
tokenizer = AutoTokenizer.from_pretrained("CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct")
```
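Prompts for Llama-3.2-based instruct models should follow the Llama 3 chat format, which `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` produces for you. As a rough sketch of what that template expands to (the special tokens below are the standard Llama 3 ones, shown for illustration only; in practice let the tokenizer build the prompt):

```python
# Sketch of a Llama 3-style chat prompt. In practice, call
# tokenizer.apply_chat_template(messages, add_generation_prompt=True)
# instead of formatting by hand; this helper is illustrative only.

def build_llama3_prompt(messages):
    """Render a list of {'role', 'content'} dicts into a Llama 3 chat prompt."""
    parts = ["<|begin_of_text|>"]
    for msg in messages:
        parts.append(
            f"<|start_header_id|>{msg['role']}<|end_header_id|>\n\n{msg['content']}<|eot_id|>"
        )
    # Trailing assistant header cues the model to generate its reply.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

messages = [{"role": "user", "content": "한국의 수도는 어디인가요?"}]
prompt = build_llama3_prompt(messages)
```

The resulting string can be tokenized and passed to `model.generate`.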
## Features

### Model Details

- Name: Carrot Llama-3.2 Rabbit Ko
- Version: 3B Instruct
- Base Model: CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct
- Languages: Korean, English
- Model Type: Large Language Model (Instruction-tuned)
### Training Process

The model was trained in one main stage:

- SFT (Supervised Fine-Tuning): the base model was fine-tuned on high-quality Korean and English instruction datasets.
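SFT pipelines typically serialize each instruction/response pair into the chat-message structure before applying the chat template and tokenizing. A minimal sketch (the field names `instruction` and `output` are assumptions for illustration, not the documented schema of the CarrotAI datasets):

```python
# Hypothetical example: convert one instruction-tuning record into the
# chat-message list that tokenizer.apply_chat_template expects.
# The keys "instruction" and "output" are assumed, not taken from the
# actual CarrotAI dataset schemas.

def to_chat_messages(record):
    return [
        {"role": "user", "content": record["instruction"]},
        {"role": "assistant", "content": record["output"]},
    ]

record = {"instruction": "한국의 수도는 어디인가요?", "output": "한국의 수도는 서울입니다."}
messages = to_chat_messages(record)
```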
### Limitations

- Limited performance on complex tasks due to the 3B parameter scale.
- Lacks in-depth expertise in specialized domains.
- May produce biased or hallucinated output.
### Ethics Statement

Ethical considerations were taken into account throughout development, but users should always critically review the model's outputs.
## Documentation

### Model Information

| Property | Details |
|----------|---------|
| License | llama3.2 |
| Training Datasets | CarrotAI/Magpie-Ko-Pro-AIR, CarrotAI/Carrot, CarrotAI/ko-instruction-dataset |
| Languages | Korean, English |
| Base Model | meta-llama/Llama-3.2-3B-Instruct |
| Pipeline Tag | text-generation |
| New Version | CarrotAI/Llama-3.2-Rabbit-Ko-3B-Instruct-2412 |
### Scores

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|---------|--------|--------|--------|-------|--------|
| gsm8k | 3 | flexible-extract | 5 | exact_match | 0.6490 | 0.0131 |
| | | strict-match | 5 | exact_match | 0.0023 | 0.0013 |
| gsm8k-ko | 3 | flexible-extract | 5 | exact_match | 0.3275 | 0.0134 |
| | | strict-match | 5 | exact_match | 0.2737 | 0.0134 |
| ifeval | 4 | none | 5 | inst_level_loose_acc | 0.8058 | N/A |
| | | none | 5 | inst_level_strict_acc | 0.7686 | N/A |
| | | none | 5 | prompt_level_loose_acc | 0.7320 | 0.0191 |
| | | none | 5 | prompt_level_strict_acc | 0.6858 | 0.0200 |
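The Stderr column can be turned into a rough 95% confidence interval via the normal approximation, value ± 1.96 × stderr. A quick sketch using the gsm8k flexible-extract row above:

```python
# Rough 95% confidence interval from a reported value and its standard error
# (normal approximation: value +/- 1.96 * stderr).

def ci95(value, stderr):
    half = 1.96 * stderr  # half-width of the interval
    return (round(value - half, 4), round(value + half, 4))

# gsm8k, flexible-extract, exact_match from the table above.
interval = ci95(0.6490, 0.0131)  # → (0.6233, 0.6747)
```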
| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|---------|--------|--------|--------|-------|--------|
| haerae | 1 | none | | acc | 0.4180 | 0.0148 |
| | | none | | acc_norm | 0.4180 | 0.0148 |
| - haerae_general_knowledge | 1 | none | 5 | acc | 0.3125 | 0.0350 |
| | | none | 5 | acc_norm | 0.3125 | 0.0350 |
| - haerae_history | 1 | none | 5 | acc | 0.3404 | 0.0347 |
| | | none | 5 | acc_norm | 0.3404 | 0.0347 |
| - haerae_loan_word | 1 | none | 5 | acc | 0.4083 | 0.0379 |
| | | none | 5 | acc_norm | 0.4083 | 0.0379 |
| - haerae_rare_word | 1 | none | 5 | acc | 0.4815 | 0.0249 |
| | | none | 5 | acc_norm | 0.4815 | 0.0249 |
| - haerae_standard_nomenclature | 1 | none | 5 | acc | 0.4771 | 0.0405 |
| | | none | 5 | acc_norm | 0.4771 | 0.0405 |
| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|---------|--------|--------|--------|-------|--------|
| kobest_boolq | 1 | none | 5 | acc | 0.7664 | 0.0113 |
| | | none | 5 | f1 | 0.7662 | N/A |
| kobest_copa | 1 | none | 5 | acc | 0.5620 | 0.0157 |
| | | none | 5 | f1 | 0.5612 | N/A |
| kobest_hellaswag | 1 | none | 5 | acc | 0.3840 | 0.0218 |
| | | none | 5 | acc_norm | 0.4900 | 0.0224 |
| | | none | 5 | f1 | 0.3807 | N/A |
| kobest_sentineg | 1 | none | 5 | acc | 0.5869 | 0.0247 |
| | | none | 5 | f1 | 0.5545 | N/A |
| kobest_wic | 1 | none | 5 | acc | 0.4952 | 0.0141 |
| | | none | 5 | f1 | 0.4000 | N/A |
## License

This model is released under the llama3.2 license.