C4ai Command A 03 2025
Cohere Labs Command A is a research release of a 111-billion-parameter model with open weights, optimized for demanding enterprise needs that require fast, secure, and high-quality AI.
Release Time: 3/11/2025
Model Overview
Command A is an autoregressive language model that uses an optimized Transformer architecture and supports multilingual tasks and business-critical agent tasks.
Model Features
Efficient context processing
Supports a context length of 256K, suitable for processing long documents and complex dialogues.
Multilingual support
Supports 23 languages, covering the world's major languages.
Enterprise-level optimization
Optimized for enterprise needs, providing fast, secure, and high-quality AI services.
Retrieval Augmented Generation
Supports RAG tasks and can generate more accurate answers by combining external documents.
Tool use
Can interact with external tools, such as APIs, databases, or search engines.
Model Capabilities
Text generation
Multilingual text generation
Dialogue systems
Retrieval Augmented Generation
Tool usage
Code generation
Code explanation
Code rewriting
Use Cases
Enterprise applications
Business-critical agent tasks
Handles critical business tasks in enterprises, such as customer support and data analysis.
Provides fast, secure, and high-quality AI services.
Multilingual customer support
Supports multilingual customer service to enhance the global customer experience.
Covers 23 languages to meet the needs of global customers.
Development tools
Code generation and explanation
Helps developers generate code snippets, explain code, or rewrite code.
Improves development efficiency and reduces coding errors.
Information retrieval
Retrieval Augmented Generation
Generates more accurate answers by combining external documents.
Provides more reliable information sources.
🚀 Model Card for Cohere Labs Command A
Cohere Labs Command A is an open weights research release of a 111 billion parameter model. It is optimized for demanding enterprises, offering fast, secure, and high-quality AI. Compared to other leading models, it delivers maximum performance with minimum hardware costs, excelling in business-critical agentic and multilingual tasks, and can be deployed on just two GPUs.
🚀 Quick Start
To start using Cohere Labs Command A, you need to install transformers from the source repository that includes the necessary changes for this model:
# pip install transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
model_id = "CohereLabs/c4ai-command-a-03-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
# Format message with the c4ai-command-a-03-2025 chat template
messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(messages, tokenize=True, add_generation_prompt=True, return_tensors="pt")
gen_tokens = model.generate(
    input_ids,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.3,
)
gen_text = tokenizer.decode(gen_tokens[0])
print(gen_text)
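The overview above notes that Command A can be deployed on just two GPUs. The following is a minimal sketch of one way to load the model across the available GPUs using device_map="auto" and bfloat16 weights; it assumes the accelerate package is installed and that the GPUs have enough combined memory (exact requirements are not specified in this card).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereLabs/c4ai-command-a-03-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Shard the weights across the visible GPUs; bfloat16 roughly halves the
# memory footprint compared to float32. device_map="auto" requires `accelerate`.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
)

messages = [{"role": "user", "content": "Hello, how are you?"}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen_tokens = model.generate(input_ids, max_new_tokens=100, do_sample=True, temperature=0.3)
print(tokenizer.decode(gen_tokens[0]))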
✨ Features
- Multilingual Support: Trained on 23 languages, including English, French, Spanish, Italian, German, Portuguese, Japanese, Korean, Arabic, Chinese, Russian, Polish, Turkish, Vietnamese, Dutch, Czech, Indonesian, Ukrainian, Romanian, Greek, Hindi, Hebrew, and Persian.
- Chat Capabilities: Configured as a conversational model by default, with options for non-interactive behavior and two safety modes (contextual and strict).
- RAG Capabilities: Supports Retrieval Augmented Generation (RAG), with an option to include citations in the response.
- Tool Use Capabilities: Can interact with external tools like APIs, databases, or search engines, and can also include citations in tool-related responses.
- Code Capabilities: Shows meaningful improvement in code capabilities, outperforming other models of similar size in enterprise-relevant scenarios such as SQL generation and code translation (an illustrative example follows this list).
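As an illustration of the enterprise code scenarios mentioned above, here is a hypothetical SQL-generation prompt (not taken from the original card); it reuses the same chat-template pattern as the Quick Start, and the table schema in the prompt is invented for the example.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "CohereLabs/c4ai-command-a-03-2025"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Hypothetical SQL-generation request; the schema below is made up for illustration.
messages = [{
    "role": "user",
    "content": (
        "Given a table sales(region TEXT, amount NUMERIC, sold_at DATE), "
        "write a SQL query returning total sales per region for 2024."
    ),
}]
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
)
gen_tokens = model.generate(input_ids, max_new_tokens=300, do_sample=True, temperature=0.3)
# Decode only the newly generated tokens (the model's answer).
print(tokenizer.decode(gen_tokens[0][input_ids.shape[1]:]))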
📦 Installation
Please install transformers from the source repository that includes the necessary changes for this model:
# pip install transformers
💻 Usage Examples
Basic Usage
The basic chat example is identical to the Quick Start snippet above.
RAG Example
# Define conversation input
conversation = [{"role": "user", "content": "What has Man always dreamed of?"}]
# Define documents for retrieval-based generation
documents = [
    {"heading": "The Moon: Our Age-Old Foe", "body": "Man has always dreamed of destroying the moon. In this essay, I shall..."},
    {"heading": "Love is all you need", "body": "Man's dream has always been to find love. This profound lesson..."},
]
# Get the RAG prompt
input_prompt = tokenizer.apply_chat_template(
    conversation=conversation,
    documents=documents,
    tokenize=False,
    add_generation_prompt=True,
    return_tensors="pt",
)
# Tokenize the prompt
input_ids = tokenizer.encode_plus(input_prompt, return_tensors="pt")
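The RAG snippet above stops at building and tokenizing the prompt. Below is a minimal sketch of how generation could continue from there; the generation settings are borrowed from the basic example and are not prescribed by the original card.
# Continue from the tokenized RAG prompt above; `input_ids` here is the
# BatchEncoding returned by tokenizer.encode_plus.
gen_tokens = model.generate(
    input_ids["input_ids"],
    max_new_tokens=200,
    do_sample=True,
    temperature=0.3,
)
# Decode only the newly generated tokens, i.e. the grounded answer (plus any
# citations the model emits).
print(tokenizer.decode(gen_tokens[0][input_ids["input_ids"].shape[1]:]))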
Tool Use Example
# Define tools
tools = [{
    "type": "function",
    "function": {
        "name": "query_daily_sales_report",
        "description": "Connects to a database to retrieve overall sales volumes and sales information for a given day.",
        "parameters": {
            "type": "object",
            "properties": {
                "day": {
                    "description": "Retrieves sales data for this day, formatted as YYYY-MM-DD.",
                    "type": "string",
                }
            },
            "required": ["day"],
        },
    },
}]
# Define conversation input
conversation = [{"role": "user", "content": "Can you provide a sales summary for 29th September 2023?"}]
# Get the Tool Use prompt
input_prompt = tokenizer.apply_chat_template(conversation=conversation, tools=tools, tokenize=False, add_generation_prompt=True, return_tensors="pt")
# Tokenize the prompt
input_ids = tokenizer.encode_plus(input_prompt, return_tensors="pt")
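As with the RAG example, the tool-use snippet ends at tokenization. Here is a minimal sketch of how one might generate the model's tool call from that prompt; the exact structure of the tool-call output is defined by the model's chat template, not by this sketch.
# Continue from the tokenized tool-use prompt above.
gen_tokens = model.generate(
    input_ids["input_ids"],
    max_new_tokens=200,
    do_sample=True,
    temperature=0.3,
)
# Inspect the raw tool call produced by the model; in a real agent loop the
# tool would be executed and its result appended to the conversation before
# re-applying the chat template.
print(tokenizer.decode(gen_tokens[0][input_ids["input_ids"].shape[1]:]))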
📚 Documentation
- Model Summary:
- Developed by: Cohere and Cohere Labs
- Point of Contact: Cohere Labs
- License: CC-BY-NC; also requires adhering to Cohere Labs' Acceptable Use Policy
- Model: c4ai-command-a-03-2025
- Model Size: 111 billion parameters
- Context length: 256K
- For more details on how this model was developed, check out our Tech Report.
- Note: The model supports a context length of 256K, but it is configured in Hugging Face for 128K. This value can be updated in the configuration if needed (see the sketch after this list).
- Model Details:
- Input: Models input text only.
- Output: Models generate text only.
- Model Architecture: An auto-regressive language model that uses an optimized transformer architecture. After pretraining, it uses supervised fine-tuning (SFT) and preference training. It features three layers with sliding window attention (window size 4096) and RoPE for efficient local context modeling and relative positional encoding. A fourth layer uses global attention without positional embeddings.
- Chat Capabilities: By default, configured as a conversational model. Can be adjusted for non-interactive behavior. Has two safety modes (contextual and strict).
- RAG Capabilities: Supports RAG, with an option to include citations.
- Tool Use Capabilities: Can interact with external tools and include citations in tool-related responses.
- Code Capabilities: Improved code capabilities, suitable for enterprise-relevant code scenarios.
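Regarding the context-length note above, the following is a minimal sketch of how the configured limit might be raised before loading. It assumes the limit is exposed through the standard max_position_embeddings config field, and 256_000 is an assumption for the exact token count behind "256K".
from transformers import AutoConfig, AutoModelForCausalLM

model_id = "CohereLabs/c4ai-command-a-03-2025"

# Load the default configuration (set for 128K per the note above) and raise
# the limit; 256_000 is an assumption for what "256K" means here.
config = AutoConfig.from_pretrained(model_id)
config.max_position_embeddings = 256_000

model = AutoModelForCausalLM.from_pretrained(model_id, config=config)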
🔧 Technical Details
- Model Architecture: This is an auto-regressive language model that uses an optimized transformer architecture. After pretraining, the model uses supervised fine-tuning (SFT) and preference training to align its behavior with human preferences for helpfulness and safety. The model features three layers with sliding window attention (window size 4096) and RoPE for efficient local context modeling and relative positional encoding. A fourth layer uses global attention without positional embeddings, enabling unrestricted token interactions across the entire sequence (see the schematic sketch after this list).
- Context Length: Command A supports a context length of 256K.
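To make the layer pattern above concrete, here is a schematic sketch (illustrative only, not the model's implementation): three sliding-window layers with RoPE followed by one global-attention layer without positional embeddings. Treating this 3:1 interleave as repeating across the full depth of the network is an assumption based on the description above.
SLIDING_WINDOW = 4096  # window size stated above

def attention_schedule(num_layers: int) -> list[str]:
    """Illustrative 3:1 interleave of sliding-window and global attention."""
    schedule = []
    for layer_idx in range(num_layers):
        if (layer_idx + 1) % 4 == 0:
            schedule.append("global attention, no positional embeddings")
        else:
            schedule.append(f"sliding-window attention (window={SLIDING_WINDOW}) + RoPE")
    return schedule

# First eight layers of the assumed pattern:
for i, kind in enumerate(attention_schedule(8)):
    print(i, kind)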
📄 License
This model is governed by a CC-BY-NC license and also requires adherence to Cohere Labs' Acceptable Use Policy.
Other Information
- Model Card Contact: For errors or additional questions about details in this model card, contact labs@cohere.com
- Try Chat: You can try Command A chat in the playground here. You can also use it in our dedicated Hugging Face Space here.
- Citation:
@misc{cohere2025commandaenterprisereadylarge,
title={Command A: An Enterprise-Ready Large Language Model},
author={Team Cohere and Aakanksha and Arash Ahmadian and Marwan Ahmed and Jay Alammar and Yazeed Alnumay and Sophia Althammer and Arkady Arkhangorodsky and Viraat Aryabumi and Dennis Aumiller and Raphaël Avalos and Zahara Aviv and Sammie Bae and Saurabh Baji and Alexandre Barbet and Max Bartolo and Björn Bebensee and Neeral Beladia and Walter Beller-Morales and Alexandre Bérard and Andrew Berneshawi and Anna Bialas and Phil Blunsom and Matt Bobkin and Adi Bongale and Sam Braun and Maxime Brunet and Samuel Cahyawijaya and David Cairuz and Jon Ander Campos and Cassie Cao and Kris Cao and Roman Castagné and Julián Cendrero and Leila Chan Currie and Yash Chandak and Diane Chang and Giannis Chatziveroglou and Hongyu Chen and Claire Cheng and Alexis Chevalier and Justin T. Chiu and Eugene Cho and Eugene Choi and Eujeong Choi and Tim Chung and Volkan Cirik and Ana Cismaru and Pierre Clavier and Henry Conklin and Lucas Crawhall-Stein and Devon Crouse and Andres Felipe Cruz-Salinas and Ben Cyrus and Daniel D'souza and Hugo Dalla-Torre and John Dang and William Darling and Omar Darwiche Domingues and Saurabh Dash and Antoine Debugne and Théo Dehaze and Shaan Desai and Joan Devassy and Rishit Dholakia and Kyle Duffy and Ali Edalati and Ace Eldeib and Abdullah Elkady and Sarah Elsharkawy and Irem Ergün and Beyza Ermis and Marzieh Fadaee and Boyu Fan and Lucas Fayoux and Yannis Flet-Berliac and Nick Frosst and Matthias Gallé and Wojciech Galuba and Utsav Garg and Matthieu Geist and Mohammad Gheshlaghi Azar and Seraphina Goldfarb-Tarrant and Tomas Goldsack and Aidan Gomez and Victor Machado Gonzaga and Nithya Govindarajan and Manoj Govindassamy and Nathan Grinsztajn and Nikolas Gritsch and Patrick Gu and Shangmin Guo and Kilian Haefeli and Rod Hajjar and Tim Hawes and Jingyi He and Sebastian Hofstätter and Sungjin Hong and Sara Hooker and Tom Hosking and Stephanie Howe and Eric Hu and Renjie Huang and Hemant Jain and Ritika Jain and Nick Jakobi and Madeline Jenkins and JJ Jordan and Dhruti Joshi and Jason Jung and Trushant Kalyanpur and Siddhartha Rao Kamalakara and Julia Kedrzycki and Gokce Keskin and Edward Kim and Joon Kim and Wei-Yin Ko and Tom Kocmi and Michael Kozakov and Wojciech Kryściński and Arnav Kumar Jain and Komal Kumar Teru and Sander Land and Michael Lasby and Olivia Lasche and Justin Lee and Patrick Lewis and Jeffrey Li and Jonathan Li and Hangyu Lin and Acyr Locatelli and Kevin Luong and Raymond Ma and Lukas Mach and Marina Machado and Joanne Magbitang and Brenda Malacara Lopez and Aryan Mann and Kelly Marchisio and Olivia Markham and Alexandre Matton and Alex McKinney and Dominic McLoughlin and Jozef Mokry and Adrien Morisot and Autumn Moulder and Harry Moynehan and Maximilian Mozes and Vivek Muppalla and Lidiya Murakhovska and Hemangani Nagarajan and Alekhya Nandula and Hisham Nasir and Shauna Nehra and Josh Netto-Rosen and Daniel Ohashi and James Owers-Bardsley and Jason Ozuzu and Dennis Padilla and Gloria Park and Sam Passaglia and Jeremy Pekmez and Laura Penstone and Aleksandra Piktus and Case Ploeg and Andrew Poulton and Youran Qi and Shubha Raghvendra and Miguel Ramos and Ekagra Ranjan and Pierre Richemond and Cécile Robert-Michon and Aurélien Rodriguez and Sudip Roy and Laura Ruis and Louise Rust and Anubhav Sachan and Alejandro Salamanca and Kailash Karthik Saravanakumar and Isha Satyakam and Alice Schoenauer Sebag and Priyanka Sen and Sholeh Sepehri and Preethi Seshadri and Ye Shen and Tom Sherborne and Sylvie Chang Shi and Sanal Shivaprasad and Vladyslav Shmyhlo and Anirudh 
Shrinivason and Inna Shteinbuk and Amir Shukayev and Mathieu Simard and Ella Snyder and Ava Spataru and Victoria Spooner and Trisha Starostina and Florian Strub and Yixuan Su and Jimin Sun and Dwarak Talupuru and Eugene Tarassov and Elena Tommasone and Jennifer Tracey and Billy Trend and Evren Tumer and Ahmet Üstün and Bharat Venkitesh and David Venuto and Pat Verga and Maxime Voisin and Alex Wang and Donglu Wang and Shijian Wang and Edmond Wen and Naomi White and Jesse Willman and Marysia Winkels and Chen Xia and Jessica Xie and Minjie Xu and Bowen Yang and Tan Yi-Chern and Ivan Zhang and Zhenyu Zhao and Zhoujie Zhao},
year={2025},
eprint={2504.00698},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2504.00698},
}