🚀 Gugugo-koen-7B-V1.1
Gugugo-koen-7B-V1.1 is a model specialized in Korean-English translation. It is built on the Llama-2-ko-7b base model, trained on a dedicated translation dataset, and available in multiple quantized formats.
🚀 Quick Start
Full repository: https://github.com/jwj7140/Gugugo

✨ Key Features
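- Specialized in Korean ↔ English translation
- Built on the Llama-2-ko-7b base model and trained on a dedicated translation dataset
- Available in multiple quantized formats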
📚 Documentation
Prompt Template
Korean → English

```
### 한국어: {sentence}</끝>
### 영어:
```
English → Korean

```
### 영어: {sentence}</끝>
### 한국어:
```
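For illustration, this is how the English → Korean template is filled in code; the sample sentence is just a placeholder:

```python
# Hypothetical example: filling the English -> Korean template.
sentence = "Hello, world!"  # placeholder input
prompt = f"### 영어: {sentence}</끝>\n### 한국어:"
# The model writes the Korean translation after "### 한국어:" and emits
# "</끝>" when done, which the usage example below treats as a stop sequence.
```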
💻 Usage Example
Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList
import torch

repo = "squarelike/Gugugo-koen-7B-V1.1"

# Load the model in 4-bit to reduce GPU memory usage.
model = AutoModelForCausalLM.from_pretrained(
    repo,
    load_in_4bit=True,
    device_map='auto'
)
tokenizer = AutoTokenizer.from_pretrained(repo)

class StoppingCriteriaSub(StoppingCriteria):
    """Stop generation once any of the given token-id sequences
    appears at the end of the generated output."""
    def __init__(self, stops=(), encounters=1):
        super().__init__()
        self.stops = [stop for stop in stops]

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs):
        for stop in self.stops:
            if torch.all((stop == input_ids[0][-len(stop):])).item():
                return True
        return False

# Token-id sequences covering tokenizer variants of the "</끝>" end marker.
stop_words_ids = torch.tensor([[829, 45107, 29958], [1533, 45107, 29958], [829, 45107, 29958], [21106, 45107, 29958]]).to("cuda")
stopping_criteria = StoppingCriteriaList([StoppingCriteriaSub(stops=stop_words_ids)])

def gen(lan="en", x=""):
    # lan="ko" translates Korean -> English; anything else, English -> Korean.
    if lan == "ko":
        prompt = f"### 한국어: {x}</끝>\n### 영어:"
    else:
        prompt = f"### 영어: {x}</끝>\n### 한국어:"
    gened = model.generate(
        **tokenizer(
            prompt,
            return_tensors='pt',
            return_token_type_ids=False
        ).to("cuda"),
        max_new_tokens=2000,
        temperature=0.3,
        num_beams=5,
        stopping_criteria=stopping_criteria
    )
    # Strip the prompt and the "</끝>" end marker from the decoded output.
    return tokenizer.decode(gened[0][1:]).replace(prompt+" ", "").replace("</끝>", "")

print(gen(lan="en", x="Hello, world!"))
```
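Note that passing `load_in_4bit=True` directly to `from_pretrained` is deprecated in recent transformers releases in favor of an explicit `BitsAndBytesConfig`. A minimal sketch, assuming transformers and bitsandbytes are installed; the NF4/fp16 settings here are illustrative defaults, not something the model card prescribes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

repo = "squarelike/Gugugo-koen-7B-V1.1"

# Illustrative 4-bit settings; NF4 quantization with fp16 compute is a
# common choice, not one specified by the model card.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    repo,
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```

Any other supported quantization setup can be substituted here; the prompt format and stopping criteria stay the same.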
📄 License
This model is released under the Apache-2.0 license.