🚀 Codestral-22B-v0.1 Model Card
Codestral-22B-v0.1 is trained on a diverse dataset of 80+ programming languages, including the most popular ones such as Python, Java, C, C++, JavaScript, and Bash. It can be used for encoding, decoding, and reasoning tasks, and handles both instruct queries and Fill-in-the-Middle (FIM) scenarios.
🚀 Quick Start
Installation
It is recommended to use `mistral-inference` to run `mistralai/Codestral-22B-v0.1`.
```sh
pip install mistral_inference
```
Download the model
```python
from huggingface_hub import snapshot_download
from pathlib import Path

mistral_models_path = Path.home().joinpath('mistral_models', 'Codestral-22B-v0.1')
mistral_models_path.mkdir(parents=True, exist_ok=True)

snapshot_download(repo_id="mistralai/Codestral-22B-v0.1", allow_patterns=["params.json", "consolidated.safetensors", "tokenizer.model.v3"], local_dir=mistral_models_path)
```
✨ Key Features
- Supports 80+ programming languages and can be used for encoding, decoding, and reasoning tasks.
- Supports instruct queries, such as answering questions about a code snippet or generating code.
- Supports Fill-in-the-Middle (FIM), predicting the tokens between a prefix and a suffix.
💻 Usage Examples
Basic Usage
Encoding and decoding with `mistral_common`
```python
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest

mistral_models_path = "MISTRAL_MODELS_PATH"
tokenizer = MistralTokenizer.v3()

completion_request = ChatCompletionRequest(messages=[UserMessage(content="Explain Machine Learning to me in a nutshell.")])

tokens = tokenizer.encode_chat_completion(completion_request).tokens
```
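For the decoding direction, the same tokenizer maps token ids back to text. A minimal sketch, reusing the `tokens` list produced above:

```python
# Round-trip check: decode the encoded request back to text
print(tokenizer.decode(tokens))
```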
Advanced Usage
Inference with `mistral_inference`
```python
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate

model = Transformer.from_folder(mistral_models_path)
out_tokens, _ = generate([tokens], model, max_tokens=64, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

print(result)
```
Inference with Hugging Face `transformers`
```python
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Codestral-22B-v0.1")
model.to("cuda")

# `tokens` is the list of token ids produced by mistral_common above;
# add a batch dimension and move it to the same device as the model
generated_ids = model.generate(torch.tensor([tokens], device="cuda"), max_new_tokens=1000, do_sample=True)

# decode with mistral tokenizer
result = tokenizer.decode(generated_ids[0].tolist())
print(result)
```
Chat
After installing `mistral_inference`, a `mistral-chat` CLI command is available in your environment.
```sh
mistral-chat $HOME/mistral_models/Codestral-22B-v0.1 --instruct --max_tokens 256
```
This will generate an answer to "Write me a function that computes fibonacci in Rust" along the following lines:
Sure, here's a simple implementation of a function that computes the Fibonacci sequence in Rust. This function takes an integer `n` as an argument and returns the `n`th Fibonacci number.
```rust
fn fibonacci(n: u32) -> u32 {
    match n {
        0 => 0,
        1 => 1,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

fn main() {
    let n = 10;
    println!("The {}th Fibonacci number is: {}", n, fibonacci(n));
}
```
This function uses recursion to calculate the Fibonacci number. However, it's not the most efficient solution because it performs a lot of redundant calculations. A more efficient solution would use a loop to iteratively calculate the Fibonacci numbers.
Fill-in-the-Middle (FIM)
After installing `mistral_inference`, run `pip install --upgrade mistral_common` to make sure you have `mistral_common>=1.2` installed:
```python
from mistral_inference.transformer import Transformer
from mistral_inference.generate import generate
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer
from mistral_common.tokens.instruct.request import FIMRequest

tokenizer = MistralTokenizer.v3()
model = Transformer.from_folder("~/codestral-22B-240529")

prefix = """def add("""
suffix = """ return sum"""

request = FIMRequest(prompt=prefix, suffix=suffix)

tokens = tokenizer.encode_fim(request).tokens

out_tokens, _ = generate([tokens], model, max_tokens=256, temperature=0.0, eos_id=tokenizer.instruct_tokenizer.tokenizer.eos_id)
result = tokenizer.decode(out_tokens[0])

middle = result.split(suffix)[0].strip()
print(middle)
```
This should produce something along the following lines:
```
num1, num2):
    # Add two numbers
    sum = num1 + num2

    # return the sum
```
Using the `transformers` library
The model is also compatible with the `transformers` library. First run `pip install -U transformers`, then use the code below for a quick start:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Codestral-22B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(model_id)

text = "Hello my name is"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
⚠️ Important note: PRs to correct the `transformers` tokenizer so that it produces results identical to the `mistral_common` reference implementation are very welcome!
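A quick way to see where the two tokenizers currently diverge is to encode the same raw string with both and compare the resulting ids. A minimal sketch, assuming the `encode(text, bos=True, eos=False)` accessor on the underlying `mistral_common` tokenizer (variable names here are illustrative):

```python
from transformers import AutoTokenizer
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

hf_tokenizer = AutoTokenizer.from_pretrained("mistralai/Codestral-22B-v0.1")
ref_tokenizer = MistralTokenizer.v3()

text = "def fibonacci(n):"

# transformers encoding (prepends BOS by default)
hf_ids = hf_tokenizer.encode(text)

# mistral_common reference encoding of the same raw string
ref_ids = ref_tokenizer.instruct_tokenizer.tokenizer.encode(text, bos=True, eos=False)

print("match" if hf_ids == ref_ids else f"mismatch:\n{hf_ids}\n{ref_ids}")
```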
💡 Usage tip: by default, `transformers` loads the model in full precision. You may therefore want to further reduce the memory requirements for running the model through the optimizations offered in the HF ecosystem.
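For example, loading the weights in bfloat16 roughly halves the memory footprint compared to full precision. A minimal sketch using standard `transformers` loading options (4-bit quantization via `bitsandbytes` is another common choice):

```python
import torch
from transformers import AutoModelForCausalLM

# Load in half precision and let accelerate place the weights
# across the available devices automatically
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Codestral-22B-v0.1",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
```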
📚 Documentation
Codestral-22B-v0.1 is trained on datasets spanning a wide range of programming languages and can be used for encoding, decoding, and reasoning tasks. It supports instruct queries and Fill-in-the-Middle (FIM). More details can be found in the blog post.
🔧 Technical Details
Codestral-22B-v0.1 does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model better respect guardrails, allowing for deployment in environments requiring moderated outputs.
📄 License
Codestral-22B-v0.1 is released under the MNPL-0.1 license. You can view the full license text here.
Development Team
Albert Jiang, Alexandre Sablayrolles, Alexis Tacnet, Antoine Roux, Arthur Mensch, Audrey Herblin-Stoop, Baptiste Bout, Baudouin de Monicault, Blanche Savary, Bam4d, Caroline Feldman, Devendra Singh Chaplot, Diego de las Casas, Eleonore Arcelin, Emma Bou Hanna, Etienne Metzger, Gianna Lengyel, Guillaume Bour, Guillaume Lample, Harizo Rajaona, Henri Roussez, Jean-Malo Delignon, Jia Li, Justus Murke, Kartik Khandelwal, Lawrence Stewart, Louis Martin, Louis Ternon, Lucile Saulnier, Lélio Renard Lavaud, Margaret Jennings, Marie Pellat, Marie Torelli, Marie-Anne Lachaux, Marjorie Janiewicz, Mickael Seznec, Nicolas Schuhl, Patrick von Platen, Romain Sauvestre, Pierre Stock, Sandeep Subramanian, Saurabh Garg, Sophia Yang, Szymon Antoniak, Teven Le Scao, Thibaut Lavril, Thibault Schueller, Timothée Lacroix, Théophile Gervet, Thomas Wang, Valera Nemychnikova, Wendy Shang, William El Sayed, William Marshall
If you want to learn more about how we process your personal data, please read our Privacy Policy.



