Falcon-H1 Transformers Library

The Falcon-H1 family, available in the transformers ecosystem, offers high-performance language models suitable for a wide range of NLP tasks.
Quick Start

Currently, to use this model, you can rely on Hugging Face transformers, vLLM, or our custom fork of the llama.cpp library.
Installation
Make sure to install the latest version of transformers or vLLM. You can install these packages from source:

```bash
pip install git+https://github.com/huggingface/transformers.git
```
Refer to the official vLLM documentation for more details on building vLLM from source.
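As a rough sketch of the two install options (the vLLM documentation remains the authoritative reference, since build prerequisites change between releases):

```bash
# Option 1: prebuilt wheel from PyPI
pip install vllm

# Option 2: build from source
git clone https://github.com/vllm-project/vllm.git
cd vllm
pip install -e .
```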
Inference
🤗 transformers

Refer to the snippet below to run Falcon-H1 models using 🤗 transformers:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-1B-Base"

# Load the tokenizer and the model (bfloat16 weights, automatic device placement)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Generate a short completion
inputs = tokenizer("Hybrid attention/SSM models are", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
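For the Instruct variants, the standard transformers chat-template flow applies. A minimal sketch, assuming the chat template ships with the tokenizer (the prompt text is only an example):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/Falcon-H1-1B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Format a single-turn conversation with the model's chat template
messages = [{"role": "user", "content": "Summarize what a hybrid attention/SSM model is."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```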
vLLM
For vLLM, simply start a server by executing the command below:

```bash
# pip install vllm
vllm serve tiiuae/Falcon-H1-1B-Instruct --tensor-parallel-size 2 --data-parallel-size 1
```
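The server exposes an OpenAI-compatible API (on port 8000 by default), so any OpenAI-style client can query it. A minimal sketch using the openai Python package (the api_key value is a placeholder; vLLM ignores it unless you configure one):

```python
from openai import OpenAI

# Point the client at the local vLLM server
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="tiiuae/Falcon-H1-1B-Instruct",
    messages=[{"role": "user", "content": "Hello, Falcon-H1!"}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```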
llama.cpp
While we are working on integrating our architecture directly into the llama.cpp library, you can install our fork of the library and use it directly: https://github.com/tiiuae/llama.cpp-Falcon-H1. Follow the same installation guidelines as for upstream llama.cpp; a sketch is shown below.
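A minimal sketch of building and running the fork, assuming it follows upstream llama.cpp's CMake workflow (the GGUF file name below is hypothetical; use whatever checkpoint you have converted or downloaded):

```bash
# Build the fork the same way as upstream llama.cpp
git clone https://github.com/tiiuae/llama.cpp-Falcon-H1.git
cd llama.cpp-Falcon-H1
cmake -B build
cmake --build build --config Release

# Run inference on a GGUF checkpoint (hypothetical file name)
./build/bin/llama-cli -m ./falcon-h1-1b-instruct-q5_k_m.gguf -p "Hello, Falcon-H1!" -n 64
```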
⨠Features
- Model Type: Causal decoder - only
- Architecture: Hybrid Transformers + Mamba architecture
- Language(s) (NLP): English, Multilingual
- License: Falcon - LLM License
Installation
The installation steps are detailed in the "Quick Start" section. You can choose to install the necessary libraries from source or follow the official guidelines.
Usage Examples

Basic Usage

Basic usage examples are provided in the "Quick Start" section, covering inference with transformers, vLLM, and llama.cpp. A higher-level alternative is sketched below.
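If you prefer not to manage the model and tokenizer by hand, the transformers pipeline API wraps the same flow. A minimal sketch (dtype and device settings mirror the Quick Start snippet):

```python
import torch
from transformers import pipeline

# High-level text-generation wrapper; downloads the model on first use
generator = pipeline(
    "text-generation",
    model="tiiuae/Falcon-H1-1B-Base",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

print(generator("The capital of France is", max_new_tokens=20)[0]["generated_text"])
```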
Documentation
Model Details
- Developed by: https://www.tii.ae
- Model type: Causal decoder-only
- Architecture: Hybrid Transformers + Mamba architecture
- Language(s) (NLP): English, Multilingual
- License: Falcon-LLM License
Training Details
For more details about the training protocol of this model, please refer to the Falcon-H1 technical blogpost.
Evaluation
The Falcon-H1 series performs very well on a variety of tasks, including reasoning tasks.
| Tasks | Falcon-H1-34B | Qwen3-32B | Qwen2.5-72B | Qwen2.5-32B | Gemma3-27B | Llama3.3-70B | Llama4-scout |
| --- | --- | --- | --- | --- | --- | --- | --- |
| **General** | | | | | | | |
| BBH | 70.68 | 62.47 | 72.52 | 68.72 | 67.28 | 69.15 | 64.9 |
| ARC-C | 61.01 | 48.98 | 46.59 | 44.54 | 54.52 | 63.65 | 56.14 |
| TruthfulQA | 65.27 | 58.58 | 69.8 | 70.28 | 64.26 | 66.15 | 62.74 |
| HellaSwag | 81.94 | 68.89 | 68.79 | 73.95 | 57.25 | 70.24 | 65.03 |
| MMLU | 84.05 | 80.89 | 84.42 | 82.8 | 78.01 | 82.08 | 80.4 |
| **Math** | | | | | | | |
| GSM8k | 83.62 | 88.78 | 82.26 | 78.47 | 90.37 | 93.71 | 90.37 |
| MATH-500 | 83.8 | 82.0 | 83.6 | 82.2 | 90.0 | 70.6 | 83.2 |
| AMC-23 | 69.38 | 67.34 | 67.34 | 68.75 | 77.81 | 39.38 | 69.06 |
| AIME-24 | 23.75 | 27.71 | 17.29 | 17.92 | 27.5 | 12.92 | 27.92 |
| AIME-25 | 16.67 | 19.79 | 15.21 | 11.46 | 22.71 | 1.25 | 8.96 |
| **Science** | | | | | | | |
| GPQA | 41.53 | 30.2 | 37.67 | 34.31 | 36.49 | 31.99 | 31.8 |
| GPQA_Diamond | 49.66 | 49.49 | 44.95 | 40.74 | 47.47 | 42.09 | 51.18 |
| MMLU-Pro | 58.73 | 54.68 | 56.35 | 56.63 | 47.81 | 53.29 | 55.58 |
| MMLU-stem | 83.57 | 81.64 | 82.59 | 82.37 | 73.55 | 74.88 | 75.2 |
| **Code** | | | | | | | |
| HumanEval | 87.2 | 90.85 | 87.2 | 90.24 | 86.59 | 83.53 | 85.4 |
| HumanEval+ | 81.71 | 85.37 | 80.49 | 82.32 | 78.05 | 79.87 | 78.7 |
| MBPP | 83.86 | 86.24 | 89.68 | 87.83 | 88.36 | 88.09 | 81.5 |
| MBPP+ | 71.43 | 71.96 | 75.4 | 74.07 | 74.07 | 73.81 | 64.8 |
| LiveCodeBench | 49.71 | 45.01 | 54.6 | 49.12 | 39.53 | 40.31 | 40.12 |
| CRUXEval | 73.07 | 78.45 | 75.63 | 73.5 | 74.82 | 69.53 | 68.32 |
| **Instruction Following** | | | | | | | |
| IFEval | 89.37 | 86.97 | 86.35 | 81.79 | 83.19 | 89.94 | 86.32 |
| Alpaca-Eval | 48.32 | 64.21 | 49.29 | 39.26 | 56.16 | 38.27 | 36.26 |
| MTBench | 9.2 | 9.05 | 9.16 | 9.09 | 8.75 | 8.98 | 8.98 |
| LiveBench | 46.26 | 63.05 | 54.03 | 52.92 | 55.41 | 53.11 | 54.21 |
You can find more detailed benchmarks in our release blogpost.
Technical Details
The technical details, such as the training protocol, are provided in the Falcon-H1 technical blogpost.
License

The model is under the Falcon-LLM License. You can find the license details at [https://falconllm.tii.ae/falcon-terms-and-conditions.html](https://falconllm.tii.ae/falcon-terms-and-conditions.html).
Citation

If the Falcon-H1 family of models was helpful to your work, feel free to cite us:
```bibtex
@misc{tiifalconh1,
    title = {Falcon-H1: A Family of Hybrid-Head Language Models Redefining Efficiency and Performance},
    url = {https://falcon-lm.github.io/blog/falcon-h1},
    author = {Falcon-LLM Team},
    month = {May},
    year = {2025}
}
```