QwQ Bakeneko 32B GGUF
A Japanese conversational model quantized with llama.cpp from rinna/qwq-bakeneko-32b, compatible with most llama.cpp-based applications.
Model overview
A 32B-parameter Japanese large language model, quantized for the llama.cpp ecosystem and suited to Japanese text generation and dialogue tasks.
Model features
- Japanese optimization: continually pre-trained and tuned specifically for Japanese, with strong results on Japanese tasks.
- Quantized build: quantized with llama.cpp, so it runs efficiently in resource-constrained environments.
- Multi-turn dialogue: scores 8.52 on the Japanese MT-Bench multi-turn evaluation.
Model capabilities
- Japanese text generation
- Multi-turn dialogue
- Instruction following
- Knowledge Q&A
Use cases
- Dialogue systems: Japanese chatbots; build fluent, natural Japanese dialogue systems (strong results on the Japanese MT-Bench).
- Content creation: Japanese article generation; help users draft Japanese articles, reports, and similar content.
🚀 QwQ Bakeneko 32B GGUF (rinna/qwq-bakeneko-32b-gguf)
This project is a quantized version of the rinna/qwq-bakeneko-32b model. Quantization was performed with llama.cpp, and the resulting files work with a wide range of llama.cpp-based applications, offering an efficient and practical option for Japanese language processing.
🚀 Quick Start
This model was created by quantizing rinna/qwq-bakeneko-32b with llama.cpp and is compatible with many llama.cpp-based applications.
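Below is a minimal sketch of one way to run the GGUF files with the llama-cpp-python binding. The quantization filename is an assumption made for illustration (check the repository's file listing for the actual names), and `n_ctx` / `n_gpu_layers` should be adjusted to your hardware.

```python
# Minimal sketch: fetch one GGUF file and run a single chat turn.
# The filename below is hypothetical; use an actual file from the repository.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="rinna/qwq-bakeneko-32b-gguf",
    filename="qwq-bakeneko-32b-Q4_K_M.gguf",  # hypothetical quant name
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # context window; raise it if memory allows
    n_gpu_layers=-1,  # offload all layers to GPU; set 0 for CPU-only
)

# Ask a question in Japanese. QwQ-style reasoning models may emit their
# chain of thought before the final answer, so allow enough max_tokens.
messages = [{"role": "user", "content": "日本で一番高い山は何ですか?"}]
out = llm.create_chat_completion(messages=messages, max_tokens=1024, temperature=0.7)
print(out["choices"][0]["message"]["content"])
```

The same GGUF files can also be used directly with llama.cpp's command-line and server tools; the Python binding is used here only to keep the example self-contained.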
📚 Documentation
Model type details

| Model type | Model |
|---|---|
| Japanese continual pre-training model | Qwen2.5 Bakeneko 32B [HF] |
| Instruction-tuned model | Qwen2.5 Bakeneko 32B Instruct [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
| DeepSeek R1 distilled Qwen2.5 merged reasoning model | DeepSeek R1 Distill Qwen2.5 Bakeneko 32B [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
| QwQ merged reasoning model | QwQ Bakeneko 32B [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
| QwQ Bakeneko merged instruction-tuned model | Qwen2.5 Bakeneko 32B Instruct V2 [HF][AWQ][GGUF][GPTQ int8][GPTQ int4] |
For details on the model architecture and training data, see rinna/qwq-bakeneko-32b.
Contributors
Toshiaki Wakatsuki, Xinqi Chen, Kei Sawada
Release date
March 13, 2025
Benchmarks

| Model | Japanese LM Evaluation Harness | Japanese MT-Bench (first turn) | Japanese MT-Bench (multi-turn) |
|---|---|---|---|
| Qwen/Qwen2.5-32B | 79.46 | - | - |
| rinna/qwen2.5-bakeneko-32b | 79.18 | - | - |
| Qwen/Qwen2.5-32B-Instruct | 78.29 | 8.13 | 7.54 |
| rinna/qwen2.5-bakeneko-32b-instruct | 79.62 | 8.17 | 7.66 |
| rinna/qwen2.5-bakeneko-32b-instruct-v2 | 77.92 | 8.86 | 8.53 |
| deepseek-ai/DeepSeek-R1-Distill-Qwen-32B | 73.51 | 7.39 | 6.88 |
| rinna/deepseek-r1-distill-qwen2.5-bakeneko-32b | 77.43 | 8.58 | 8.19 |
| Qwen/QwQ-32B | 76.12 | 8.58 | 8.25 |
| rinna/qwq-bakeneko-32b | 78.31 | 8.81 | 8.52 |

For detailed benchmark results, see rinna's LM benchmark page (Table 20250313).
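As a rough illustration of the multi-turn setting these MT-Bench scores refer to, the sketch below (reusing the `llm` object from the Quick Start sketch) keeps the whole conversation in a message list and appends the assistant's reply before sending a follow-up turn.

```python
# Sketch of a multi-turn exchange: keep the full history in `messages`
# and append each assistant reply before the next user turn.
messages = [{"role": "user", "content": "京都のおすすめの観光地を3つ教えてください。"}]
first = llm.create_chat_completion(messages=messages, max_tokens=1024)
messages.append(first["choices"][0]["message"])  # add the assistant's answer to the history

# Follow-up question that depends on the previous answer.
messages.append({"role": "user", "content": "その中で子供連れに一番向いているのはどこですか?"})
second = llm.create_chat_completion(messages=messages, max_tokens=1024)
print(second["choices"][0]["message"]["content"])
```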
How to cite
@misc{rinna-qwq-bakeneko-32b-gguf,
title = {rinna/qwq-bakeneko-32b-gguf},
author = {Wakatsuki, Toshiaki and Chen, Xinqi and Sawada, Kei},
url = {https://huggingface.co/rinna/qwq-bakeneko-32b-gguf}
}
@inproceedings{sawada2024release,
title = {Release of Pre-Trained Models for the {J}apanese Language},
author = {Sawada, Kei and Zhao, Tianyu and Shing, Makoto and Mitsui, Kentaro and Kaga, Akio and Hono, Yukiya and Wakatsuki, Toshiaki and Mitsuda, Koh},
booktitle = {Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)},
month = {5},
year = {2024},
pages = {13898--13905},
url = {https://aclanthology.org/2024.lrec-main.1213},
note = {\url{https://arxiv.org/abs/2404.01657}}
}
References
@article{qwen2.5,
title = {Qwen2.5 Technical Report},
author = {An Yang and Baosong Yang and Beichen Zhang and Binyuan Hui and Bo Zheng and Bowen Yu and Chengyuan Li and Dayiheng Liu and Fei Huang and Haoran Wei and Huan Lin and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Yang and Jiaxi Yang and Jingren Zhou and Junyang Lin and Kai Dang and Keming Lu and Keqin Bao and Kexin Yang and Le Yu and Mei Li and Mingfeng Xue and Pei Zhang and Qin Zhu and Rui Men and Runji Lin and Tianhao Li and Tianyi Tang and Tingyu Xia and Xingzhang Ren and Xuancheng Ren and Yang Fan and Yang Su and Yichang Zhang and Yu Wan and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zihan Qiu},
journal = {arXiv preprint arXiv:2412.15115},
year = {2024}
}
@misc{qwq32b,
title = {QwQ-32B: Embracing the Power of Reinforcement Learning},
url = {https://qwenlm.github.io/blog/qwq-32b/},
author = {Qwen Team},
month = {March},
year = {2025}
}
@misc{deepseekai2025deepseekr1incentivizingreasoningcapability,
title = {DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning},
author = {DeepSeek-AI and Daya Guo and Dejian Yang and Haowei Zhang and Junxiao Song and Ruoyu Zhang and Runxin Xu and Qihao Zhu and Shirong Ma and Peiyi Wang and Xiao Bi and Xiaokang Zhang and Xingkai Yu and Yu Wu and Z. F. Wu and Zhibin Gou and Zhihong Shao and Zhuoshu Li and Ziyi Gao and Aixin Liu and Bing Xue and Bingxuan Wang and Bochao Wu and Bei Feng and Chengda Lu and Chenggang Zhao and Chengqi Deng and Chenyu Zhang and Chong Ruan and Damai Dai and Deli Chen and Dongjie Ji and Erhang Li and Fangyun Lin and Fucong Dai and Fuli Luo and Guangbo Hao and Guanting Chen and Guowei Li and H. Zhang and Han Bao and Hanwei Xu and Haocheng Wang and Honghui Ding and Huajian Xin and Huazuo Gao and Hui Qu and Hui Li and Jianzhong Guo and Jiashi Li and Jiawei Wang and Jingchang Chen and Jingyang Yuan and Junjie Qiu and Junlong Li and J. L. Cai and Jiaqi Ni and Jian Liang and Jin Chen and Kai Dong and Kai Hu and Kaige Gao and Kang Guan and Kexin Huang and Kuai Yu and Lean Wang and Lecong Zhang and Liang Zhao and Litong Wang and Liyue Zhang and Lei Xu and Leyi Xia and Mingchuan Zhang and Minghua Zhang and Minghui Tang and Meng Li and Miaojun Wang and Mingming Li and Ning Tian and Panpan Huang and Peng Zhang and Qiancheng Wang and Qinyu Chen and Qiushi Du and Ruiqi Ge and Ruisong Zhang and Ruizhe Pan and Runji Wang and R. J. Chen and R. L. Jin and Ruyi Chen and Shanghao Lu and Shangyan Zhou and Shanhuang Chen and Shengfeng Ye and Shiyu Wang and Shuiping Yu and Shunfeng Zhou and Shuting Pan and S. S. Li and Shuang Zhou and Shaoqing Wu and Shengfeng Ye and Tao Yun and Tian Pei and Tianyu Sun and T. Wang and Wangding Zeng and Wanjia Zhao and Wen Liu and Wenfeng Liang and Wenjun Gao and Wenqin Yu and Wentao Zhang and W. L. Xiao and Wei An and Xiaodong Liu and Xiaohan Wang and Xiaokang Chen and Xiaotao Nie and Xin Cheng and Xin Liu and Xin Xie and Xingchao Liu and Xinyu Yang and Xinyuan Li and Xuecheng Su and Xuheng Lin and X. Q. Li and Xiangyue Jin and Xiaojin Shen and Xiaosha Chen and Xiaowen Sun and Xiaoxiang Wang and Xinnan Song and Xinyi Zhou and Xianzu Wang and Xinxia Shan and Y. K. Li and Y. Q. Wang and Y. X. Wei and Yang Zhang and Yanhong Xu and Yao Li and Yao Zhao and Yaofeng Sun and Yaohui Wang and Yi Yu and Yichao Zhang and Yifan Shi and Yiliang Xiong and Ying He and Yishi Piao and Yisong Wang and Yixuan Tan and Yiyang Ma and Yiyuan Liu and Yongqiang Guo and Yuan Ou and Yuduan Wang and Yue Gong and Yuheng Zou and Yujia He and Yunfan Xiong and Yuxiang Luo and Yuxiang You and Yuxuan Liu and Yuyang Zhou and Y. X. Zhu and Yanhong Xu and Yanping Huang and Yaohui Li and Yi Zheng and Yuchen Zhu and Yunxian Ma and Ying Tang and Yukun Zha and Yuting Yan and Z. Z. Ren and Zehui Ren and Zhangli Sha and Zhe Fu and Zhean Xu and Zhenda Xie and Zhengyan Zhang and Zhewen Hao and Zhicheng Ma and Zhigang Yan and Zhiyu Wu and Zihui Gu and Zijia Zhu and Zijun Liu and Zilin Li and Ziwei Xie and Ziyang Song and Zizheng Pan and Zhen Huang and Zhipeng Xu and Zhongyu Zhang and Zhen Zhang},
year = {2025},
eprint = {2501.12948},
archivePrefix = {arXiv},
primaryClass = {cs.CL},
url = {https://arxiv.org/abs/2501.12948},
}
@inproceedings{hong2024orpo,
title = {ORPO: Monolithic Preference Optimization without Reference Model},
author = {Hong, Jiwoo and Lee, Noah and Thorne, James},
booktitle = {Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing},
pages = {11170--11189},
year = {2024}
}
@misc{llamacpp,
title = {llama.cpp},
author = {Gerganov, Georgi},
howpublished = {\url{https://github.com/ggerganov/llama.cpp}},
year = {2023}
}
📄 License
This project is released under the Apache License, Version 2.0.