# Small Parameters High Performance

## FairyR1-32B
License: Apache-2.0 · Organization: PKU-DS-LAB
FairyR1-32B is an efficient large language model built on DeepSeek-R1-Distill-Qwen-32B and optimized through distillation and model merging; it performs particularly well on mathematical and programming tasks.
Tags: Large Language Model · Transformers · English
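
The listing tags FairyR1-32B for use with the Transformers library. As a minimal sketch (assuming the model is hosted under the repository ID `PKU-DS-LAB/FairyR1-32B` and exposes a standard chat template), loading it and prompting it on a math problem could look like this:

```python
# Minimal sketch: loading FairyR1-32B with Hugging Face Transformers.
# The repository ID below is assumed from the listing; verify it on the hub first.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PKU-DS-LAB/FairyR1-32B"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

# Pose a math question through the model's chat template (assumed to exist).
messages = [{"role": "user", "content": "Prove that the product of two odd integers is odd."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(input_ids, max_new_tokens=512)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```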
## TBAC-VLR1-3B-Preview
License: Apache-2.0 · Organization: TencentBAC
A multimodal language model fine-tuned by the Tencent PCG Basic Algorithm Center on top of Qwen2.5-VL-3B-Instruct; among models of comparable scale it achieves state-of-the-art results on several multimodal reasoning benchmarks.
Tags: Image-to-Text · Safetensors · English
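
Since the model is fine-tuned from Qwen2.5-VL-3B-Instruct, it should follow the same image-to-text interface as its base model. A hedged sketch of single-image inference (the repository ID `TencentBAC/TBAC-VLR1-3B-preview` and the local image path are assumptions, not confirmed by the listing):

```python
# Minimal sketch: image-to-text inference, assuming the Qwen2.5-VL interface
# inherited from the base model. Repo ID and image path are placeholders.
from PIL import Image
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "TencentBAC/TBAC-VLR1-3B-preview"  # assumed repository ID
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

image = Image.open("example.jpg")  # placeholder local image
messages = [{
    "role": "user",
    "content": [
        {"type": "image"},
        {"type": "text", "text": "How many distinct objects are in this picture? Explain your reasoning."},
    ],
}]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=256)
print(processor.decode(output[0], skip_special_tokens=True))
```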
## Hymba-1.5B-Base
License: Other · Organization: nvidia
Hymba-1.5B-Base is a base text-generation model developed by NVIDIA. It uses a hybrid architecture that combines Mamba layers with attention heads and is suited to a wide range of natural language generation tasks.
Tags: Large Language Model · Transformers
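
Because Hymba's hybrid Mamba/attention architecture ships as custom modeling code, loading it through Transformers typically requires `trust_remote_code`. A minimal sketch, assuming the repository ID `nvidia/Hymba-1.5B-Base`:

```python
# Minimal sketch: plain text completion with the Hymba base model.
# The repo ID is assumed; the custom hybrid architecture needs trust_remote_code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "nvidia/Hymba-1.5B-Base"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True
)

# This is a base (non-instruct) model, so prompt it as a raw continuation task.
prompt = "Combining Mamba layers with attention heads allows a language model to"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```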