🚀 internlm-chatbode-7b
InternLM-ChatBode is a language model fine-tuned for Portuguese, built on the InternLM2 model. It was refined by fine-tuning on the UltraAlpaca dataset.
✨ Features
- Base Model: internlm/internlm2-chat-7b
- Fine-tuning Dataset: UltraAlpaca
- Training: fine-tuned from internlm2-chat-7b using QLoRA (a configuration sketch follows this list).
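The exact QLoRA hyperparameters are not published in this card. As a hedged illustration of what such a setup looks like with peft and bitsandbytes, the sketch below quantizes the base model to 4 bits and attaches LoRA adapters; the rank, alpha, dropout, and target module names are assumptions, not the authors' actual configuration.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# 4-bit NF4 quantization of the frozen base model (the "Q" in QLoRA)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "internlm/internlm2-chat-7b",
    quantization_config=bnb_config,
    trust_remote_code=True,
)
model = prepare_model_for_kbit_training(model)

# Illustrative LoRA configuration; these values are assumed,
# not the ones used to train ChatBode
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["wqkv", "wo", "w1", "w2", "w3"],  # InternLM2 projection names (assumed)
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

Training would then proceed with a standard supervised fine-tuning loop (e.g., transformers' Trainer) over the UltraAlpaca data.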
💻 Usage Examples
Basic Usage
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code=True is required to load InternLM2's custom modeling code,
# which provides the chat() helper used below
tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("recogna-nlp/internlm-chatbode-7b", torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()

# chat() returns the model's reply together with the updated conversation history
response, history = model.chat(tokenizer, "Olá", history=[])
print(response)

# Pass the history back in to continue the same conversation
response, history = model.chat(tokenizer, "O que é o Teorema de Pitágoras? Me dê um exemplo", history=history)
print(response)
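If the fp16 weights do not fit in GPU memory, the same checkpoint can in principle be loaded with 4-bit quantization via bitsandbytes. This is a hedged variant of the loading step above, not an officially tested configuration for this checkpoint:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Illustrative 4-bit loading; quality and latency trade-offs are untested here
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("recogna-nlp/internlm-chatbode-7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "recogna-nlp/internlm-chatbode-7b",
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,
).eval()

response, history = model.chat(tokenizer, "Olá", history=[])
print(response)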
Advanced Usage
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "recogna-nlp/internlm-chatbode-7b"
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = model.eval()

# stream_chat() yields the partial response as tokens are generated;
# slicing by the previous length prints only the newly produced text
length = 0
for response, history in model.stream_chat(tokenizer, "Olá", history=[]):
    print(response[length:], flush=True, end="")
    length = len(response)
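If you prefer the standard transformers generation API over the model's custom chat/stream_chat helpers, a minimal sketch using the tokenizer's chat template follows; it assumes the checkpoint ships a chat template, which is not verified here, and the sampling parameters are illustrative.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "recogna-nlp/internlm-chatbode-7b"
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16, trust_remote_code=True).cuda().eval()

# Build the prompt with the checkpoint's chat template (assumed to be defined)
messages = [{"role": "user", "content": "Olá"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").cuda()

# Standard generate() call; decode only the newly generated tokens
output = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.9)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))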
📚 Documentation
Open Portuguese LLM Leaderboard Evaluation Results
Detailed results can be found here and on the 🚀 Open Portuguese LLM Leaderboard.
| Benchmark | Score |
|-----------|-------|
| Average | 69.54 |
| ENEM Challenge (No Images) | 63.05 |
| BLUEX (No Images) | 51.46 |
| OAB Exams | 42.32 |
| Assin2 RTE | 91.33 |
| Assin2 STS | 80.69 |
| FaQuAD NLI | 79.80 |
| HateBR Binary | 87.99 |
| PT Hate Speech Binary | 68.09 |
| tweetSentBR | 61.11 |
📖 Citation
If you use ChatBode in your research, please cite it as follows:
@misc{chatbode_2024,
  author = {Gabriel Lino Garcia and Pedro Henrique Paiola and João Paulo Papa},
  title = {Chatbode},
  year = {2024},
  url = {https://huggingface.co/recogna-nlp/internlm-chatbode-7b/},
  doi = {10.57967/hf/3317},
  publisher = {Hugging Face}
}