🚀 WiroAI/wiroai-turkish-llm-9b
A robust language model with enhanced support for Turkish language and culture.
🚀 Quick Start
This README provides an in-depth look at the WiroAI/wiroai-turkish-llm-9b model, including its features, technical specifications, use cases, and more.
✨ Features
- Fine-tuned on over 500,000 high-quality Turkish instructions.
- Fine-tuned with the LoRA method, without quantization.
- Adapted to Turkish culture and local context.
- Built on Google's cutting-edge Gemma architecture.
Model Details
The model is the Turkish-speaking member of Google's innovative Gemma model family. It has been trained using Supervised Fine-Tuning (SFT) on carefully curated, high-quality Turkish instructions, demonstrating superior performance in Turkish language processing tasks.
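The exact training hyperparameters are not published in this card; as a rough illustration, a LoRA-based SFT setup of this kind is typically declared with a PEFT configuration like the minimal sketch below (the rank, alpha, and target modules are placeholders, not WiroAI's actual settings):

from peft import LoraConfig

# Illustrative LoRA configuration for SFT without quantization.
# All values are placeholders; the model's real hyperparameters are not published here.
lora_config = LoraConfig(
    r=16,                                                     # LoRA rank (illustrative)
    lora_alpha=32,                                            # scaling factor (illustrative)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # typical attention projections
    task_type="CAUSAL_LM",
)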
📦 Installation
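The usage example below relies on the transformers and torch packages (plus accelerate for device_map="auto"); a typical installation looks like:

pip install transformers torch accelerate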
💻 Usage Examples
Basic Usage
import transformers
import torch

model_id = "WiroAI/wiroai-turkish-llm-9b"

# Load the model in bfloat16 and let accelerate place it on the available device(s).
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)
pipeline.model.eval()

# "Can you prepare a social media post about Istanbul for me?"
instruction = "Bana İstanbul ile alakalı bir sosyal medya postu hazırlar mısın?"

messages = [
    {"role": "user", "content": f"{instruction}"}
]

# Format the conversation with Gemma's chat template.
prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

# Stop generation at either the EOS token or Gemma's end-of-turn token.
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<end_of_turn>")
]

outputs = pipeline(
    prompt,
    max_new_tokens=512,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.9,
)

# Print only the newly generated text (strip the prompt prefix).
print(outputs[0]["generated_text"][len(prompt):])
Example Output
The model's response (in Turkish) to the social media prompt above:
İstanbul'un büyüsüne kapılın! 🌇
Halk arasında "dünyanın masalı şehri" olarak bilinen İstanbul, her köşesinde tarih, kültür ve modern yaşamın bir araya geldiği eşsiz bir şehir.
Yüzyıllardır farklı medeniyetlerin izlerini taşıyan İstanbul, tarihi mekanlarından, müzelerinden, çarşılarından ve restoranlarından oluşan zengin kültürel mirasa sahiptir.
Boğaz'ın eşsiz manzarasında tekne turu yapmak, Topkapı Sarayı'nı ziyaret etmek, Grand Bazaar'da alışveriş yapmak, Mısır Çarşısı'nın canlı atmosferinde kaybolmak, Galata Kulesi'nden muhteşem bir manzara deneyimlemek veya Beyoğlu'nun hareketli sokaklarında yürüyüş yapmak İstanbul'da unutulmaz anılar yaratmak için fırsatlar sunar.
İstanbul'un büyülü atmosferini kendiniz yaşamak için hemen planınızı yapın! 🇹🇷 #İstanbul #Türkiye #Seyahat #Tarih #Kültür #Gezi
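As an alternative to the pipeline helper, the model can also be loaded directly with AutoModelForCausalLM. The sketch below assumes the same chat-template workflow; the prompt and generation settings are illustrative, not recommendations from the model authors:

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "WiroAI/wiroai-turkish-llm-9b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# "What is the capital of Turkey?" (illustrative prompt)
messages = [{"role": "user", "content": "Türkiye'nin başkenti neresidir?"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))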
📚 Documentation
Technical Specifications
Property | Details |
---|---|
Architecture | Decoder-only transformer |
Base Model | Google Gemma 2 9B |
Training Data | Over 500,000 specially selected Turkish instructions |
Language Support | Turkish (with comprehensive local context understanding) and other common languages |
Use Cases
- Text Generation and Editing
- Question Answering
- Summarization
- Analysis and Reasoning
- Content Transformation
- Turkish Natural Language Processing Tasks
- Turkish culture-specific content generation
Advantages
- Local Understanding: Ability to comprehend Turkish culture, idioms, and current events.
- Resource Efficiency: Effective operation even with limited hardware resources.
- Flexible Deployment: Usable on desktop, laptop, or custom cloud infrastructure.
- Open Model: Transparent and customizable architecture.
About Google Gemma 2
Gemma is Google's family of lightweight, state-of-the-art open models, developed using the same research and technology used to create the Gemini models. These models are designed to be deployable in environments with limited resources, making AI technology accessible to everyone.
Performance and Limitations
While the model demonstrates high performance in Turkish language tasks, users should consider the following:
- Use clear and structured instructions for best results (see the illustrative prompt after this list).
- Verify model outputs for critical applications.
- Evaluate resource requirements before deployment.
- Note that the benchmark scores below were obtained under specific evaluation conditions; the command and settings used are given below the table so the results can be reproduced.
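As an example of the "clear and structured instructions" advice above, a well-structured Turkish prompt might look like the following sketch (the wording is an illustrative placeholder, not an official prompt template; roughly: "Task: Summarize the text below. Rules: Use at most two sentences and a formal tone. Text: Istanbul is a city that connects two continents and stands out with its historical and cultural richness."):

# Illustrative structured prompt; the content is a placeholder, not an official template.
messages = [
    {
        "role": "user",
        "content": (
            "Görev: Aşağıdaki metni özetle.\n"
            "Kurallar: En fazla iki cümle kullan ve resmi bir dil tercih et.\n"
            "Metin: İstanbul, iki kıtayı birbirine bağlayan, tarihi ve kültürel "
            "zenginlikleriyle öne çıkan bir şehirdir."
        ),
    }
]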
Benchmark Scores
Models | MMLU TR | TruthfulQA TR | ARC TR | HellaSwag TR | GSM8K TR | WinoGrande TR | Average |
---|---|---|---|---|---|---|---|
WiroAI/wiroai-turkish-llm-9b | 59.8 | 49.9 | 53.7 | 57.0 | 66.8 | 60.6 | 58.0 |
selimc/OrpoGemma-2-9B-TR | 53.0 | 54.3 | 52.4 | 52.0 | 64.8 | 58.9 | 55.9 |
Metin/Gemma-2-9b-it-TR-DPO-V1 | 51.3 | 54.7 | 52.6 | 51.2 | 67.1 | 55.2 | 55.4 |
CohereForAI/aya-expanse-8b | 52.3 | 52.8 | 49.3 | 56.7 | 61.3 | 59.2 | 55.3 |
ytu-ce-cosmos/Turkish-Llama-8b-DPO-v0.1 | 52.0 | 57.6 | 51.0 | 53.0 | 59.8 | 58.0 | 55.2 |
google/gemma-2-9b-it | 51.8 | 53.0 | 52.2 | 51.5 | 63.0 | 56.2 | 54.6 |
Eurdem/Defne-llama3.1-8B | 52.9 | 51.2 | 47.1 | 51.6 | 59.9 | 57.5 | 53.4 |
WiroAI/wiroai-turkish-llm-8b | 52.4 | 49.5 | 50.1 | 54.0 | 57.5 | 57.0 | 53.4 |
meta-llama/Meta-Llama-3-8B-Instruct | 52.2 | 49.2 | 44.2 | 49.2 | 56.0 | 56.7 | 51.3 |
Benchmarks were run with:
lm_eval --model_args pretrained=<model_path> --tasks mmlu_tr_v0.2,arc_tr-v0.2,gsm8k_tr-v0.2,hellaswag_tr-v0.2,truthfulqa_v0.2,winogrande_tr-v0.2
See https://github.com/malhajar17/lm-evaluation-harness_turkish for details. Evaluation uses default language inference, the same approach as OpenLLMLeaderboard v2.0.
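To reproduce these scores, the Turkish fork of the evaluation harness is assumed to install the same way as the upstream lm-evaluation-harness (an assumption about the fork's packaging, not a documented step from this card):

git clone https://github.com/malhajar17/lm-evaluation-harness_turkish
cd lm-evaluation-harness_turkish
pip install -e .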
🔧 Technical Details
The model is based on a decoder-only transformer architecture with Google Gemma 2 9B as the base model. It has been fine-tuned on over 500,000 high-quality Turkish instructions, giving it a deep understanding of the Turkish language and culture.
📄 License
This model is provided under Google's Gemma license. Please review and accept the license terms before use.
📫 Contact and Support
For questions, suggestions, and feedback, please open an issue on Hugging Face or contact us directly through our website.
Citation
@article{WiroAI,
title={WiroAI/wiroai-turkish-llm-9b},
author={Abdullah Bezir and Furkan Burhan Türkay and Cengiz Asmazoğlu},
year={2024},
url={https://huggingface.co/WiroAI/wiroai-turkish-llm-9b}
}
@article{gemma_2024,
title={Gemma},
url={https://www.kaggle.com/m/3301},
DOI={10.34740/KAGGLE/M/3301},
publisher={Kaggle},
author={Gemma Team},
year={2024}
}

