LFM2 is a new-generation hybrid model developed by Liquid AI, designed specifically for edge AI and on-device deployment, setting new standards in quality, speed, and memory efficiency. It uses a hybrid architecture of multiplicative gates and short convolutions and supports multiple languages and tasks.
Model Features
Fast Training and Inference
Training is 3x faster than the previous-generation model, and decode and prefill on CPU are 2x faster than Qwen3.
Excellent Performance
Outperforms models of the same scale on benchmarks covering knowledge, mathematics, instruction following, and multilingual capabilities.
Flexible Deployment
Can run efficiently on CPU, GPU, and NPU hardware, supporting devices such as smartphones, laptops, and vehicles.
New Architecture
Adopts a hybrid Liquid model with multiplicative gates and short convolutions, combining the advantages of convolution and attention mechanisms.
Model Capabilities
Text Generation
Multi-round Dialogue
Data Extraction
RAG
Creative Writing
Tool Calling
Use Cases
Agent Tasks
Candidate Status Query
Query the current status of candidates in the recruitment process through tool calls.
Return information such as candidate status, position, and interview date.
Data Extraction
Structured Data Generation
Extract structured data from unstructured text.
Generate structured output in JSON or other formats.
Creative Writing
Story Generation
Generate a coherent story plot based on prompts.
Generate logical and creative text.
LFM2-1.2B
LFM2 is a new generation of hybrid models developed by Liquid AI, designed for edge AI and on-device deployment, setting new standards in quality, speed, and memory efficiency.
We're releasing the weights of three post-trained checkpoints with 350M, 700M, and 1.2B parameters. They provide the following key features to create AI-powered edge applications:
Fast training & inference: LFM2 achieves 3x faster training compared to its previous generation. It also benefits from 2x faster decode and prefill speed on CPU compared to Qwen3.
Best performance: LFM2 outperforms similarly sized models across multiple benchmark categories, including knowledge, mathematics, instruction following, and multilingual capabilities.
New architecture: LFM2 is a new hybrid Liquid model with multiplicative gates and short convolutions.
Flexible deployment: LFM2 runs efficiently on CPU, GPU, and NPU hardware for flexible deployment on smartphones, laptops, or vehicles.
Find more information about LFM2 in our [blog post](https://www.liquid.ai/blog/liquid-foundation-models-v2-our-second-series-of-generative-ai-models).
Features
Fast training & inference: 3x faster training than the previous generation and 2x faster decode and prefill speed on CPU compared to Qwen3.
Best performance: Outperforms similar-sized models in multiple benchmark categories.
New architecture: A new hybrid Liquid model with multiplicative gates and short convolutions.
Flexible deployment: Efficiently runs on CPU, GPU, and NPU hardware for various devices.
Installation
You can run LFM2 with transformers and llama.cpp. vLLM support is coming.
1. Transformers
To run LFM2, you need to install Hugging Face transformers from source (v4.54.0.dev0).
You can update or install it with the following command: pip install "transformers @ git+https://github.com/huggingface/transformers.git@main".
2. Llama.cpp
You can run LFM2 with llama.cpp using its GGUF checkpoint. Find more information in the model card.
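As a rough sketch (not taken from the official docs), the lines below show one way to load a local GGUF checkpoint through the llama-cpp-python bindings; the filename is a placeholder for whichever quantized file you download from the GGUF model card.
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder filename: use the quantized GGUF file you actually downloaded.
llm = Llama(model_path="LFM2-1.2B-Q4_K_M.gguf", n_ctx=4096)

# Chat completion with the recommended sampling settings.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is C. elegans?"}],
    temperature=0.3,
    repeat_penalty=1.05,
    max_tokens=512,
)
print(result["choices"][0]["message"]["content"])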
Usage Examples
Basic Usage
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model and tokenizer
model_id = "LiquidAI/LFM2-1.2B"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="bfloat16",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2" <- uncomment on compatible GPU
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Generate answer
prompt = "What is C. elegans?"
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=512,
)
print(tokenizer.decode(output[0], skip_special_tokens=False))

# <|startoftext|><|im_start|>user
# What is C. elegans?<|im_end|>
# <|im_start|>assistant
# C. elegans, also known as Caenorhabditis elegans, is a small, free-living
# nematode worm (roundworm) that belongs to the phylum Nematoda.
You can directly run and test the model with this Colab notebook.
Documentation
Model details
Due to their small size, we recommend fine-tuning LFM2 models on narrow use cases to maximize performance.
They are particularly suited for agentic tasks, data extraction, RAG, creative writing, and multi-turn conversations.
However, we do not recommend using them for tasks that are knowledge-intensive or require programming skills.
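As an illustration of the data-extraction use case, the snippet below is a sketch that reuses the model and tokenizer loaded in the usage example above, with a made-up input sentence, and asks the model to return a small JSON object.
# Reuses `model` and `tokenizer` from the usage example above.
extraction_prompt = (
    "Extract the candidate's name, position, and interview date from the text "
    "below and return a JSON object with the keys name, position, and date.\n\n"
    "Candidate Jane Doe has an interview scheduled on 2023-11-20 for the "
    "Clinical Research Associate position."
)
input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": extraction_prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.3,
    min_p=0.15,
    repetition_penalty=1.05,
    max_new_tokens=128,
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))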
Generation parameters: We recommend the following parameters:
temperature = 0.3
min_p = 0.15
repetition_penalty = 1.05
Chat template: LFM2 uses a ChatML-like chat template as follows:
<|startoftext|><|im_start|>system
You are a helpful assistant trained by Liquid AI.<|im_end|>
<|im_start|>user
What is C. elegans?<|im_end|>
<|im_start|>assistant
It's a tiny nematode that lives in temperate soil environments.<|im_end|>
You can apply it using the dedicated .apply_chat_template() function from Hugging Face transformers.
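To inspect the rendered prompt, you can also call the template with tokenize=False, which returns the formatted string instead of token IDs (using the tokenizer from the usage example above):
# Render the chat template as plain text to inspect the special tokens.
messages = [
    {"role": "system", "content": "You are a helpful assistant trained by Liquid AI."},
    {"role": "user", "content": "What is C. elegans?"},
]
prompt_text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt_text)  # <|startoftext|><|im_start|>system ... <|im_start|>assistant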
Tool use: It consists of four main steps:
Function definition: LFM2 takes JSON function definitions as input (JSON objects between <|tool_list_start|> and <|tool_list_end|> special tokens), usually in the system prompt.
Function call: LFM2 writes Pythonic function calls (a Python list between <|tool_call_start|> and <|tool_call_end|> special tokens), as the assistant answer.
Function execution: The function call is executed and the result is returned (string between <|tool_response_start|> and <|tool_response_end|> special tokens), as a "tool" role.
Final answer: LFM2 interprets the outcome of the function call to address the original user prompt in plain text.
Here is a simple example of a conversation using tool use:
<|startoftext|><|im_start|>system
List of tools: <|tool_list_start|>[{"name": "get_candidate_status", "description": "Retrieves the current status of a candidate in the recruitment process", "parameters": {"type": "object", "properties": {"candidate_id": {"type": "string", "description": "Unique identifier for the candidate"}}, "required": ["candidate_id"]}}]<|tool_list_end|><|im_end|>
<|im_start|>user
What is the current status of candidate ID 12345?<|im_end|>
<|im_start|>assistant
<|tool_call_start|>[get_candidate_status(candidate_id="12345")]<|tool_call_end|>Checking the current status of candidate ID 12345.<|im_end|>
<|im_start|>tool
<|tool_response_start|>{"candidate_id": "12345", "status": "Interview Scheduled", "position": "Clinical Research Associate", "date": "2023-11-20"}<|tool_response_end|><|im_end|>
<|im_start|>assistant
The candidate with ID 12345 is currently in the "Interview Scheduled" stage for the position of Clinical Research Associate, with an interview date set for 2023-11-20.<|im_end|>
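Below is a minimal sketch of steps 1 and 2 of this loop in transformers, reusing the model and tokenizer from the usage example above and placing the JSON function definitions in the system prompt as described; parsing and executing the returned call and appending the "tool" message (steps 3 and 4) are left to the application.
import json

# Tool definition taken from the example conversation above.
tools = [{
    "name": "get_candidate_status",
    "description": "Retrieves the current status of a candidate in the recruitment process",
    "parameters": {
        "type": "object",
        "properties": {
            "candidate_id": {"type": "string",
                             "description": "Unique identifier for the candidate"}
        },
        "required": ["candidate_id"],
    },
}]

# Step 1: function definitions go in the system prompt between the
# <|tool_list_start|> and <|tool_list_end|> special tokens.
messages = [
    {"role": "system",
     "content": "List of tools: <|tool_list_start|>" + json.dumps(tools) + "<|tool_list_end|>"},
    {"role": "user", "content": "What is the current status of candidate ID 12345?"},
]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    tokenize=True,
).to(model.device)

# Step 2: the assistant reply should contain a Pythonic call between
# <|tool_call_start|> and <|tool_call_end|> that your code can parse and execute.
output = model.generate(input_ids, do_sample=True, temperature=0.3, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=False))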
Architecture: Hybrid model with multiplicative gates and short convolutions: 10 double-gated short-range LIV convolution blocks and 6 grouped query attention (GQA) blocks.
Pre-training mixture: Approximately 75% English, 20% multilingual, and 5% code data sourced from the web and licensed materials.
Training approach:
Knowledge distillation using [LFM1-7B](https://www.liquid.ai/blog/introducing-lfm-7b-setting-new-standards-for-efficient-language-models) as teacher model.
Very large-scale SFT on 50% downstream tasks, 50% general domains.
Custom DPO with length normalization and semi - online datasets.
Iterative model merging.
How to fine-tune LFM2
We recommend fine - tuning LFM2 models on your use cases to maximize performance.
| Notebook | Description | Link |
| --- | --- | --- |
| SFT + LoRA | Supervised Fine-Tuning (SFT) notebook with a LoRA adapter in TRL. | |
| DPO | Preference alignment with Direct Preference Optimization (DPO) in TRL. | |
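As an illustration of the SFT + LoRA recipe (a sketch under assumed defaults, not the official notebook), TRL's SFTTrainer can be combined with a PEFT LoRA config roughly as follows; the dataset name is only a placeholder for your own chat-formatted data.
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Placeholder chat-formatted dataset; swap in your own narrow use-case data.
dataset = load_dataset("trl-lib/Capybara", split="train")

# LoRA adapter applied to all linear layers.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear")

args = SFTConfig(
    output_dir="lfm2-1.2b-sft-lora",
    per_device_train_batch_size=4,
    num_train_epochs=1,
)
trainer = SFTTrainer(
    model="LiquidAI/LFM2-1.2B",  # loaded by TRL from the Hub
    train_dataset=dataset,
    args=args,
    peft_config=peft_config,
)
trainer.train()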
Performance
LFM2 outperforms similar-sized models across different evaluation categories.