🚀 Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_placid_whale
This model is a fine-tuned, Transformer-based language model that performs well on natural language processing tasks such as question answering and text generation. Building on a pretrained base model, it has been further optimized to understand and generate text more accurately.
🚀 Quick Start
from transformers import pipeline

# Build a text-generation pipeline for this model (requires a CUDA-capable GPU;
# drop the device argument to run on CPU).
generator = pipeline("text-generation", model="gangchen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_placid_whale", device="cuda")

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Pass the prompt in chat format; return_full_text=False yields only the new tokens.
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
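If you prefer to work below the pipeline level, the same generation can be done with AutoTokenizer and AutoModelForCausalLM directly. This is a minimal sketch assuming the standard transformers chat-template API; the prompt and dtype choice are illustrative, not part of the original card.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gangchen/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_placid_whale"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto").to(device)

# Format the conversation with the model's built-in chat template.
messages = [{"role": "user", "content": "Which would you choose: the past or the future?"}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```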
🔧 Technical Details
Training Method
This model was trained with GRPO, a method introduced in the paper DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models.
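The exact Gensyn swarm training recipe is not documented on this card. As a rough illustration of the GRPO API in the TRL version listed below (0.15.x), here is a minimal sketch; the toy dataset and the length-based reward function are hypothetical placeholders, not the actual training setup.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt dataset; the real training data is not part of this card.
train_dataset = Dataset.from_dict({"prompt": ["What is 2 + 2?", "Name a prime number."]})

def reward_len(completions, **kwargs):
    # Illustrative reward only: prefer shorter completions.
    return [-float(len(c)) for c in completions]

# GRPO samples num_generations completions per prompt and optimizes the
# policy against their group-relative advantages.
config = GRPOConfig(output_dir="grpo-out", num_generations=4, max_completion_length=64)
trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=config,
    train_dataset=train_dataset,
)
trainer.train()
```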
Framework Versions
- TRL: 0.15.2
- Transformers: 4.51.3
- PyTorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
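To reproduce this environment, the pinned releases can be installed from PyPI (note that PyTorch ships as the `torch` package): `pip install trl==0.15.2 transformers==4.51.3 torch==2.5.1 datasets==3.5.0 tokenizers==0.21.1`.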
📄 License
This model is released under the license specified in its repository metadata.
📚 Documentation
Model Information

| Property | Details |
| --- | --- |
| Base model | Gensyn/Qwen2.5-0.5B-Instruct |
| Library name | transformers |
| Model name | Qwen2.5-0.5B-Instruct-Gensyn-Swarm-fierce_placid_whale |
| Tags | generated_from_trainer, rl-swarm, grpo, gensyn, I am fierce placid whale, trl |
Citation Information
Citing GRPO
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
Citing TRL
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}