Honyaku-7b-v2
Honyaku-7b-v2 is an improved version of its predecessor, with enhanced accuracy in multilingual tag adherence.
Downloads: 17
Release date: 4/7/2024
Model Overview
Honyaku-7b-v2 is a multilingual translation model, primarily used for translating texts ranging from 500 to several thousand tokens, with a particular strength in translating into Japanese.
Model Features
Enhanced Multilingual Generation Accuracy
More accurate adherence to multilingual generation tags than the previous version.
Japanese Translation Stability
Translation into Japanese is the most stable, reflecting the characteristics of the base model.
Long Text Support
Fine-tuned on sequences of up to 8k tokens, but due to the base model's characteristics it supports a maximum of 4k tokens, including the prompt.
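Because the usable window is 4k tokens including the prompt, it can help to check an input's token count before translating. Below is a minimal sketch using the model's tokenizer; the half-window headroom heuristic is an assumption for illustration, not part of the model card.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("aixsatoshi/Honyaku-7b-v2")

source = "..."  # text to translate
prompt = f"<english>: {source} </english>\n\n<japanese>:"
prompt_len = len(tokenizer(prompt).input_ids)

# The translation must also fit in the 4k window; assuming the output is
# roughly as long as the source, flag inputs that use more than half of it.
if prompt_len > 4096 // 2:
    print(f"Prompt is {prompt_len} tokens; consider splitting the text.")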
Model Capabilities
Text Translation
Multilingual Support
Long Text Processing
Use Cases
Language Translation
Document Translation
Translate technical documents or business files into target languages.
Provides relatively accurate translations and is especially well suited to translation into Japanese.
Content Localization
Assist content creators in localizing their work into multiple languages.
Supports translation for over 100 languages.
Honyaku-7b-v2
Honyaku-7b-v2 is an improved model that offers enhanced accuracy in multilingual generation, with translation quality reflecting the pre-training of the base model.
Quick Start
Honyaku-7b-v2 is an improved version of its predecessor. This model exhibits enhanced accuracy in adhering to multilingual generation tags compared to the previous version.
Features
- Improved Multilingual Generation Accuracy: The model follows multilingual generation tags more precisely than the previous version (the tag format is sketched after this list).
- Quality-Reflective Translation: The translation quality of Honyaku-7b is strongly influenced by the pre-training of the base model. Consequently, translation quality varies in proportion to the training volume of the original language model.
- Translation Scope: The primary purpose is to translate texts of about 500 to several thousand tokens. Due to the characteristics of the base model, translation into Japanese is the most stable.
- Token Support: It has been fine-tuned on up to 8k tokens, but given the base model's characteristics, it supports up to 4k tokens including the prompt.
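The generation tags referenced above follow a simple convention: the source text is wrapped in <english>...</english>, and the reply is primed with the lowercased target-language name in angle brackets. A minimal illustration, consistent with the usage examples below:
# Source text goes inside <english>...</english>; the target tag primes generation.
source = "Machine translation is improving rapidly."
prompt = f"<english>: {source} </english>\n\n<japanese>:"
# For another target language, swap the tag, e.g. "<german>:" or "<french>:".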
Important Note
- Translation does not work well for minor (low-resource) languages.
- Translations produced by 7B-class large language models (LLMs) often contain errors.
- Do not use unchecked translations for social communication.
Usage Examples
Basic Usage
# Honyaku-7b-webui
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
from threading import Thread
# Language list
languages = [
"English", "Chinese (Simplified)", "Chinese (Traditional)", "Spanish", "Arabic", "Hindi",
"Bengali", "Portuguese", "Russian", "Japanese", "German", "French", "Urdu", "Indonesian",
"Italian", "Turkish", "Korean", "Vietnamese", "Tamil", "Marathi", "Telugu", "Persian",
"Polish", "Dutch", "Thai", "Gujarati", "Romanian", "Ukrainian", "Malay", "Kannada", "Oriya (Odia)",
"Burmese (Myanmar)", "Azerbaijani", "Uzbek", "Kurdish (Kurmanji)", "Swedish", "Filipino (Tagalog)",
"Serbian", "Czech", "Hungarian", "Greek", "Belarusian", "Bulgarian", "Hebrew", "Finnish",
"Slovak", "Norwegian", "Danish", "Sinhala", "Croatian", "Lithuanian", "Slovenian", "Latvian",
"Estonian", "Armenian", "Malayalam", "Georgian", "Mongolian", "Afrikaans", "Nepali", "Pashto",
"Punjabi", "Kurdish", "Kyrgyz", "Somali", "Albanian", "Icelandic", "Basque", "Luxembourgish",
"Macedonian", "Maltese", "Hawaiian", "Yoruba", "Maori", "Zulu", "Welsh", "Swahili", "Haitian Creole",
"Lao", "Amharic", "Khmer", "Javanese", "Kazakh", "Malagasy", "Sindhi", "Sundanese", "Tajik", "Xhosa",
"Yiddish", "Bosnian", "Cebuano", "Chichewa", "Corsican", "Esperanto", "Frisian", "Galician", "Hausa",
"Hmong", "Igbo", "Irish", "Kinyarwanda", "Latin", "Samoan", "Scots Gaelic", "Sesotho", "Shona",
"Sotho", "Swedish", "Uyghur"
]
tokenizer = AutoTokenizer.from_pretrained("aixsatoshi/Honyaku-7b-v2")
model = AutoModelForCausalLM.from_pretrained("aixsatoshi/Honyaku-7b-v2", torch_dtype=torch.float16)
model = model.to('cuda:0')
# Stop generation once the model emits the end-of-sequence token (id 2 for this tokenizer)
class StopOnTokens(StoppingCriteria):
    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        stop_ids = [2]
        for stop_id in stop_ids:
            if input_ids[0][-1] == stop_id:
                return True
        return False
def predict(message, history, tokens, temperature, language):
    # Target-language tag, e.g. "<japanese>"
    tag = "<" + language.lower() + ">"
    history_transformer_format = history + [[message, ""]]
    stop = StopOnTokens()
    # Wrap each source text in <english>...</english> and prime the reply with the target tag
    messages = "".join(["".join(["\n<english>:" + item[0] + "</english>\n", tag + item[1]])
                        for item in history_transformer_format])
    model_inputs = tokenizer([messages], return_tensors="pt").to("cuda")
    streamer = TextIteratorStreamer(tokenizer, timeout=10., skip_prompt=True, skip_special_tokens=True)
    # Merge the tokenized inputs with the generation settings
    generate_kwargs = dict(
        model_inputs,
        streamer=streamer,
        max_new_tokens=int(tokens),
        temperature=float(temperature),
        do_sample=True,
        top_p=0.95,
        top_k=20,
        repetition_penalty=1.15,
        num_beams=1,
        stopping_criteria=StoppingCriteriaList([stop])
    )
    # Run generation in a background thread so tokens can be streamed to the UI
    t = Thread(target=model.generate, kwargs=generate_kwargs)
    t.start()
    partial_message = ""
    for new_token in streamer:
        if new_token != '<':  # skip bare '<' chunks (the start of a closing language tag)
            partial_message += new_token
            yield partial_message
# Gradio interface settings
demo = gr.ChatInterface(
    fn=predict,
    title="Honyaku-7b webui",
    description="Translate using Honyaku-7b model",
    additional_inputs=[
        gr.Slider(100, 4096, value=1000, label="Tokens"),
        gr.Slider(0.0, 1.0, value=0.3, label="Temperature"),
        gr.Dropdown(choices=languages, value="Japanese", label="Language")
    ]
)
demo.queue().launch()
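StopOnTokens above hardcodes stop id 2, which matches this tokenizer's EOS token. A variant that reads the id from the tokenizer avoids the magic number; this is a sketch under that assumption, not part of the original card:
class StopOnEos(StoppingCriteria):
    # Stop as soon as the last generated token is the tokenizer's EOS token.
    def __init__(self, eos_token_id: int):
        self.eos_token_id = eos_token_id

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        return bool(input_ids[0, -1] == self.eos_token_id)

# Drop-in replacement: StoppingCriteriaList([StopOnEos(tokenizer.eos_token_id)])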
Advanced Usage
# TextStreamer example
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
model_name = "aixsatoshi/Honyaku-7b-v2"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Define the streamer
streamer = TextStreamer(tokenizer)
# Define the English prompt
english_prompt = """
Machine translation accuracy varies greatly across languages.
Key challenges include context understanding, idiomatic expressions, and syntactic differences.
Advanced models leverage AI to enhance translation quality, focusing on nuances and cultural relevance.
To address these challenges, developers employ neural networks and deep learning techniques, which adapt to linguistic variations and learn from vast amounts of text.
This approach helps in capturing the essence of languages and accurately translating complex sentences.
Furthermore, user feedback plays a crucial role in refining translation algorithms.
By analyzing corrections and suggestions, machine translation systems can evolve and handle nuanced expressions more effectively.
This iterative process ensures continuous improvement, making translations more reliable and understandable for a global audience.
"""
# Prepare the prompt for English to Japanese translation
prompt = f"<english>: {english_prompt} </english>\n\n<japanese>:"
# Tokenize the input text and move to CUDA device
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
# Generate the output using the model and streamer
output = model.generate(**inputs, max_new_tokens=4096, do_sample=True, top_k=20, top_p=0.95, streamer=streamer)
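The same call translates into other target languages by swapping the closing tag, mirroring how the webui example builds the tag from the dropdown selection; French here is only an illustration:
# The target tag is just the lowercased language name in angle brackets.
prompt = f"<english>: {english_prompt} </english>\n\n<french>:"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
output = model.generate(**inputs, max_new_tokens=4096, do_sample=True, top_k=20, top_p=0.95, streamer=streamer)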
# Gradio non-streaming generation
import gradio as gr
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
languages = [
"English", "Chinese (Simplified)", "Chinese (Traditional)", "Spanish", "Arabic", "Hindi",
"Bengali", "Portuguese", "Russian", "Japanese", "German", "French", "Urdu", "Indonesian",
"Italian", "Turkish", "Korean", "Vietnamese", "Tamil", "Marathi", "Telugu", "Persian",
"Polish", "Dutch", "Thai", "Gujarati", "Romanian", "Ukrainian", "Malay", "Kannada", "Oriya (Odia)",
"Burmese (Myanmar)", "Azerbaijani", "Uzbek", "Kurdish (Kurmanji)", "Swedish", "Filipino (Tagalog)",
"Serbian", "Czech", "Hungarian", "Greek", "Belarusian", "Bulgarian", "Hebrew", "Finnish",
"Slovak", "Norwegian", "Danish", "Sinhala", "Croatian", "Lithuanian", "Slovenian", "Latvian",
"Estonian", "Armenian", "Malayalam", "Georgian", "Mongolian", "Afrikaans", "Nepali", "Pashto",
"Punjabi", "Kurdish", "Kyrgyz", "Somali", "Albanian", "Icelandic", "Basque", "Luxembourgish",
"Macedonian", "Maltese", "Hawaiian", "Yoruba", "Maori", "Zulu", "Welsh", "Swahili", "Haitian Creole",
"Lao", "Amharic", "Khmer", "Javanese", "Kazakh", "Malagasy", "Sindhi", "Sundanese", "Tajik", "Xhosa",
"Yiddish", "Bosnian", "Cebuano", "Chichewa", "Corsican", "Esperanto", "Frisian", "Galician", "Hausa",
"Hmong", "Igbo", "Irish", "Kinyarwanda", "Latin", "Samoan", "Scots Gaelic", "Sesotho", "Shona",
"Sotho", "Swedish", "Uyghur"
]
tokenizer = AutoTokenizer.from_pretrained("aixsatoshi/Honyaku-7b-v2")
model = AutoModelForCausalLM.from_pretrained("aixsatoshi/Honyaku-7b-v2", torch_dtype=torch.float16)
model = model.to('cuda:0')
def predict(message, tokens, temperature, language):
    # Target-language tag, e.g. "<japanese>"
    tag = "<" + language.lower() + ">"
    messages = "\n<english>:" + message + "</english>\n" + tag
    model_inputs = tokenizer([messages], return_tensors="pt").to("cuda")
    output = model.generate(
        **model_inputs,
        max_new_tokens=int(tokens),
        temperature=float(temperature),
        do_sample=True,
        top_p=0.95,
        top_k=20,
        repetition_penalty=1.15,
        num_beams=1,
        eos_token_id=tokenizer.eos_token_id
    )
    # Decode only the newly generated tokens so the prompt is not echoed back
    new_tokens = output[0][model_inputs.input_ids.shape[1]:]
    translation = tokenizer.decode(new_tokens, skip_special_tokens=True)
    return translation
# Gradio interface settings
inputs = [
    gr.Textbox(label="Message", lines=20),
    gr.Slider(100, 4096, value=1000, label="Tokens"),
    gr.Slider(0.0, 1.0, value=0.3, label="Temperature"),
    gr.Dropdown(choices=languages, value="Japanese", label="Language")
]
output = gr.Textbox(label="Translation", lines=35)
demo = gr.Interface(
    fn=predict,
    inputs=inputs,
    outputs=output,
    title="Honyaku-7b webui",
    description="Translate using Honyaku-7b model",
    live=False,  # translate only when the button is clicked
    allow_flagging="never"  # string form; the boolean form is deprecated
)
demo.launch()
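For documents longer than the 4k-token window, a simple approach is to translate paragraph by paragraph and rejoin the results. A minimal sketch reusing the predict function above; blank-line chunking and the per-chunk token budget are assumptions, not part of the model card:
def translate_long_text(text, language="Japanese", tokens_per_chunk=1500):
    # Split on blank lines so each chunk stays well inside the 4k window.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    return "\n\n".join(
        predict(p, tokens_per_chunk, 0.3, language) for p in paragraphs
    )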
Documentation
Base Model
tokyotech-llm/Swallow-MS-7b-v0.1
License
The project is licensed under the Apache 2.0 license.