🚀 Cohere Labs Command R7B Model
Cohere Labs Command R7B is a 7-billion-parameter model with advanced capabilities optimized for a variety of use cases, including reasoning, summarization, question answering, and code. The model is trained to perform sophisticated tasks such as retrieval-augmented generation (RAG) and tool use, and it has strong agentic capabilities, able to use and combine multiple tools over multiple steps to accomplish more difficult tasks. It performs well on enterprise-relevant code use cases, and it is a multilingual model supporting 23 languages.
🚀 Quick Start
You can try out Cohere Labs Command R7B before downloading the weights in our hosted Hugging Face Space.

To run the model locally, install transformers from the source repository that includes the necessary changes for this model (see the Installation section below), then follow the usage examples.
✨ Key Features
- Multilingual support: supports 23 languages, including English, French, German, Spanish, Italian, Portuguese, Japanese, Korean, Arabic, and Chinese.
- Advanced capabilities: strong at reasoning, summarization, question answering, and code, with support for retrieval-augmented generation (RAG) and tool use.
- Strong agentic capabilities: can use and combine multiple tools over multiple steps to accomplish more difficult tasks.
- Excellent code performance: performs well on enterprise-relevant code use cases.
📦 Installation

Please install transformers from the source repository that includes the necessary changes for this model:

```bash
pip install 'git+https://github.com/huggingface/transformers.git'
```
💻 Usage Examples

Basic Usage
```python
# pip install 'git+https://github.com/huggingface/transformers.git'
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereLabs/c4ai-command-r7b-12-2024"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format message with the c4ai-command-r7b-12-2024 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")

gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)

gen_text = tokenizer.decode(gen_tokens[0], skip_special_tokens=True)
print(gen_text)
```
Advanced Usage - RAG Capabilities
```python
# Define conversation input
conversation = [{"role": "user", "content": "What has Man always dreamed of?"}]

# Define documents for retrieval-based generation
documents = [
    {"heading": "The Moon: Our Age-Old Foe", "body": "Man has always dreamed of destroying the moon. In this essay, I shall..."},
    {"heading": "Love is all you need", "body": "Man's dream has always been to find love. This profound lesson..."}
]

# Get the RAG prompt as a string (tokenize=False renders the template to text)
input_prompt = tokenizer.apply_chat_template(conversation=conversation, documents=documents, tokenize=False, add_generation_prompt=True)

# Tokenize the prompt
input_ids = tokenizer.encode_plus(input_prompt, return_tensors="pt")
```
You can then generate text from this input as usual.
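For instance, a minimal sketch, assuming the `model` and `tokenizer` from the basic usage example above are in scope:

```python
# Generate from the tokenized RAG prompt; encode_plus returns a dict-like
# BatchEncoding, so pull out the input_ids tensor explicitly
gen_tokens = model.generate(
    input_ids["input_ids"],
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
print(tokenizer.decode(gen_tokens[0], skip_special_tokens=True))
```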
Advanced Usage - Tool Use Capabilities
```python
# Define tools
tools = [
    {
        "type": "function",
        "function": {
            "name": "query_daily_sales_report",
            "description": "Connects to a database to retrieve overall sales volumes and sales information for a given day.",
            "parameters": {
                "type": "object",
                "properties": {
                    "day": {
                        "description": "Retrieves sales data for this day, formatted as YYYY-MM-DD.",
                        "type": "string",
                    }
                },
                "required": ["day"]
            },
        }
    }
]

# Define conversation input
conversation = [{"role": "user", "content": "Can you provide a sales summary for 29th September 2023?"}]

# Get the Tool Use prompt as a string (tokenize=False renders the template to text)
input_prompt = tokenizer.apply_chat_template(conversation=conversation, tools=tools, tokenize=False, add_generation_prompt=True)

# Tokenize the prompt
input_ids = tokenizer.encode_plus(input_prompt, return_tensors="pt")
```
If the model generates a plan and tool calls, you should add them to the chat history like this:
```python
tool_call = {"name": "query_daily_sales_report", "arguments": {"day": "2023-09-29"}}
tool_plan = "I will use the query_daily_sales_report tool to find the sales summary for 29th September 2023."
conversation.append({"role": "assistant", "tool_calls": [{"id": "0", "type": "function", "function": tool_call}], "tool_plan": tool_plan})
```
Then call the tool and append the result, with the tool role, like so:
```python
# every tool result needs to be a dictionary!!
api_response_for_query_daily_sales_report = {"date": "2023-09-29", "summary": "Total Sales Amount: 10000, Total Units Sold: 250"}
# append tool results; make sure "tool_call_id" matches the "id" of the tool_call
conversation.append({"role": "tool", "tool_call_id": "0", "content": api_response_for_query_daily_sales_report})
```
After that, you can call generate() again to let the model use the tool result in the chat, as sketched below.
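A minimal sketch, assuming the `model`, `tokenizer`, and `tools` defined above are in scope:

```python
# Re-render the chat template over the updated conversation (which now contains
# the tool call and the tool result) and generate the final grounded response
input_ids = tokenizer.apply_chat_template(
    conversation, tools=tools, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
gen_tokens = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen_tokens[0], skip_special_tokens=True))
```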
📚 Documentation
Model Details
| Property | Details |
|---|---|
| Input | The model takes text as input only. |
| Output | The model generates text only. |
| Model architecture | This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, the model is aligned using supervised fine-tuning (SFT) and preference training to match human preferences for helpfulness and safety. The model features three layers of sliding window attention (window size 4096) with rotary position embeddings (RoPE) for efficient local context modeling and relative positional encoding. A fourth layer uses global attention without positional embeddings, enabling unrestricted token interactions across the entire sequence. |
| Supported languages | The model was trained on 23 languages: English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian. |
| Context length | Command R7B supports a context length of 128K. |
A Well-Rounded Model

Command R7B excels on standardized and externally verifiable benchmarks such as the HuggingFace Open LLM Leaderboard, where it ranks first among open-weights models of similar size, with strong performance across all tasks.
| | Command R7B | Gemma 2 IT 9B | Ministral 8B | Llama 3.1 8B | Qwen 2.5 7B | Tulu 3 8B |
|---|---|---|---|---|---|---|
| Average | 31.4 | 28.9 | 22 | 28.2 | 26.87 | 26.03 |
| IFEval | 77.9 | 74.4 | 58.96 | 78.6 | 75.85 | 82.67 |
| BBH | 36.1 | 42.1 | 25.82 | 29.9 | 34.89 | 16.67 |
| MATH hard | 26.4 | 0.2 | 6.5 | 19.3 | 0.0 | 19.64 |
| GPQA | 7.7 | 14.8 | 4.5 | 2.4 | 5.48 | 6.49 |
| MUSR | 11.6 | 9.74 | 10.7 | 8.41 | 8.45 | 10.45 |
| MMLU-Pro | 28.5 | 32 | 25.5 | 30.7 | 36.52 | 20.3 |
HuggingFace Leaderboard evaluation results. Competitor numbers are taken from the official leaderboard; Command R7B results were computed by us using the official HuggingFace prompts and evaluation code.
Chat Capabilities

Command R7B can be configured as both a conversational model and an instruct model.
- Conversational mode: conditions the model on interactive behaviour, meaning it is expected to reply in a conversational fashion, provide introductory statements and follow-up questions, and use Markdown as well as LaTeX where appropriate. It is optimized for interactive experiences, such as chatbots, where the model engages in dialogue.
- Instruct mode: conditions the model to provide concise yet comprehensive responses, and to not use Markdown or LaTeX by default. It is designed for non-interactive, task-focused use cases such as extracting information, summarizing text, translation, and categorization.

Note: by default, Command R7B is delivered without a system preamble. We recommend adding the conversational or instruct preamble as described in the documentation; a sketch follows below.
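A minimal sketch of supplying a preamble as a system turn, assuming the `model` and `tokenizer` from the basic usage example; the placeholder preamble text below is illustrative, not the documented wording:

```python
# Hypothetical placeholder; substitute the conversational or instruct preamble
# from Cohere's documentation
preamble = "You are in conversational mode. Reply conversationally and ask follow-up questions."

messages = [
    {"role": "system", "content": preamble},
    {"role": "user", "content": "Hello, how are you?"},
]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
```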
RAG Capabilities

Command R7B has been trained specifically for tasks such as the final step of retrieval-augmented generation (RAG). RAG with Command R7B is supported through chat templates in Transformers. The model takes a conversation as input (with an optional user-supplied system preamble), along with a list of document snippets.

Document snippets should be short chunks, rather than long documents, typically around 100-400 words per chunk, formatted as key-value pairs. The keys should be short descriptive strings; the values can be text or semi-structured.

You may find that simply including relevant documents directly in a user message works as well as, or better than, using the documents parameter to render the special RAG template; a sketch of this alternative follows below. The RAG template is generally a strong default, and we encourage users to try both and evaluate which mode works best for their specific use case.
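A minimal sketch of the direct-inclusion alternative, assuming the `tokenizer` from the basic usage example; the document text reuses the illustrative content from the RAG example above:

```python
# Paste the document snippets straight into the user message instead of
# passing them via the documents parameter
conversation = [{
    "role": "user",
    "content": (
        "Using the documents below, answer: What has Man always dreamed of?\n\n"
        "The Moon: Our Age-Old Foe\nMan has always dreamed of destroying the moon. In this essay, I shall...\n\n"
        "Love is all you need\nMan's dream has always been to find love. This profound lesson..."
    ),
}]
input_ids = tokenizer.apply_chat_template(conversation, tokenize=True, add_generation_prompt=True, return_tensors="pt")
```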
Tool Use Capabilities

Command R7B has been specifically trained with conversational tool use capabilities, allowing the model to interact with external tools such as APIs, databases, or search engines. Tool use with Command R7B is supported through chat templates in Transformers; see the tool use example above. We recommend providing tool descriptions using JSON schema.
Code Capabilities

Command R7B has meaningfully improved code capabilities. In addition to academic code benchmarks, we have evaluated it in enterprise-relevant scenarios, including SQL and code translation, where it outperforms other models of similar size. Try these capabilities by asking for code snippets, code explanations, or code rewrites. For better performance, we also recommend using a low temperature (or even greedy decoding) for code-generation instructions, as sketched below.
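A minimal sketch of greedy decoding for a code request, assuming the `model` and `tokenizer` from the basic usage example; the prompt is illustrative:

```python
# Greedy decoding: do_sample=False makes generation deterministic, which the
# recommendation above suggests for code-generation instructions
messages = [{"role": "user", "content": "Write a SQL query that returns total sales per day from a sales table."}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
gen_tokens = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(gen_tokens[0], skip_special_tokens=True))
```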
📄 License

This model is governed by a CC-BY-NC license and additionally requires adhering to Cohere Labs' Acceptable Use Policy.
Additional Information

Model Card Contact

For errors or additional questions about details in this model card, contact labs@cohere.com.

Try Chat

You can try out Command R7B chat in the playground here. You can also use it in our dedicated Hugging Face Space here.
Citation
```bibtex
@misc{cohere2025commandaenterprisereadylarge,
title={Command A: An Enterprise-Ready Large Language Model},
author={Team Cohere and Aakanksha and Arash Ahmadian and Marwan Ahmed and Jay Alammar and Yazeed Alnumay and Sophia Althammer and Arkady Arkhangorodsky and Viraat Aryabumi and Dennis Aumiller and Raphaël Avalos and Zahara Aviv and Sammie Bae and Saurabh Baji and Alexandre Barbet and Max Bartolo and Björn Bebensee and Neeral Beladia and Walter Beller-Morales and Alexandre Bérard and Andrew Berneshawi and Anna Bialas and Phil Blunsom and Matt Bobkin and Adi Bongale and Sam Braun and Maxime Brunet and Samuel Cahyawijaya and David Cairuz and Jon Ander Campos and Cassie Cao and Kris Cao and Roman Castagné and Julián Cendrero and Leila Chan Currie and Yash Chandak and Diane Chang and Giannis Chatziveroglou and Hongyu Chen and Claire Cheng and Alexis Chevalier and Justin T. Chiu and Eugene Cho and Eugene Choi and Eujeong Choi and Tim Chung and Volkan Cirik and Ana Cismaru and Pierre Clavier and Henry Conklin and Lucas Crawhall-Stein and Devon Crouse and Andres Felipe Cruz-Salinas and Ben Cyrus and Daniel D'souza and Hugo Dalla-Torre and John Dang and William Darling and Omar Darwiche Domingues and Saurabh Dash and Antoine Debugne and Théo Dehaze and Shaan Desai and Joan Devassy and Rishit Dholakia and Kyle Duffy and Ali Edalati and Ace Eldeib and Abdullah Elkady and Sarah Elsharkawy and Irem Ergün and Beyza Ermis and Marzieh Fadaee and Boyu Fan and Lucas Fayoux and Yannis Flet-Berliac and Nick Frosst and Matthias Gallé and Wojciech Galuba and Utsav Garg and Matthieu Geist and Mohammad Gheshlaghi Azar and Seraphina Goldfarb-Tarrant and Tomas Goldsack and Aidan Gomez and Victor Machado Gonzaga and Nithya Govindarajan and Manoj Govindassamy and Nathan Grinsztajn and Nikolas Gritsch and Patrick Gu and Shangmin Guo and Kilian Haefeli and Rod Hajjar and Tim Hawes and Jingyi He and Sebastian Hofstätter and Sungjin Hong and Sara Hooker and Tom Hosking and Stephanie Howe and Eric Hu and Renjie Huang and Hemant Jain and Ritika Jain and Nick Jakobi and Madeline Jenkins and JJ Jordan and Dhruti Joshi and Jason Jung and Trushant Kalyanpur and Siddhartha Rao Kamalakara and Julia Kedrzycki and Gokce Keskin and Edward Kim and Joon Kim and Wei-Yin Ko and Tom Kocmi and Michael Kozakov and Wojciech Kryściński and Arnav Kumar Jain and Komal Kumar Teru and Sander Land and Michael Lasby and Olivia Lasche and Justin Lee and Patrick Lewis and Jeffrey Li and Jonathan Li and Hangyu Lin and Acyr Locatelli and Kevin Luong and Raymond Ma and Lukas Mach and Marina Machado and Joanne Magbitang and Brenda Malacara Lopez and Aryan Mann and Kelly Marchisio and Olivia Markham and Alexandre Matton and Alex McKinney and Dominic McLoughlin and Jozef Mokry and Adrien Morisot and Autumn Moulder and Harry Moynehan and Maximilian Mozes and Vivek Muppalla and Lidiya Murakhovska and Hemangani Nagarajan and Alekhya Nandula and Hisham Nasir and Shauna Nehra and Josh Netto-Rosen and Daniel Ohashi and James Owers-Bardsley and Jason Ozuzu and Dennis Padilla and Gloria Park and Sam Passaglia and Jeremy Pekmez and Laura Penstone and Aleksandra Piktus and Case Ploeg and Andrew Poulton and Youran Qi and Shubha Raghvendra and Miguel Ramos and Ekagra Ranjan and Pierre Richemond and Cécile Robert-Michon and Aurélien Rodriguez and Sudip Roy and Laura Ruis and Louise Rust and Anubhav Sachan and Alejandro Salamanca and Kailash Karthik Saravanakumar and Isha Satyakam and Alice Schoenauer Sebag and Priyanka Sen and Sholeh Sepehri and Preethi Seshadri and Ye Shen and Tom Sherborne and Sylvie Chang Shi and Sanal Shivaprasad and Vladyslav Shmyhlo and Anirudh 
Shrinivason and Inna Shteinbuk and Amir Shukayev and Mathieu Simard and Ella Snyder and Ava Spataru and Victoria Spooner and Trisha Starostina and Florian Strub and Yixuan Su and Jimin Sun and Dwarak Talupuru and Eugene Tarassov and Elena Tommasone and Jennifer Tracey and Billy Trend and Evren Tumer and Ahmet Üstün and Bharat Venkitesh and David Venuto and Pat Verga and Maxime Voisin and Alex Wang and Donglu Wang and Shijian Wang and Edmond Wen and Naomi White and Jesse Willman and Marysia Winkels and Chen Xia and Jessica Xie and Minjie Xu and Bowen Yang and Tan Yi-Chern and Ivan Zhang and Zhenyu Zhao and Zhoujie Zhao},
year={2025},
eprint={2504.00698},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.00698},
}
```



