# 7B Parameter Efficiency

## Qwen2.5 7B Fuse Exp

A language model created with the mergekit tool using the SCE merge method, combining multiple 7B-parameter models.

Tags: Large Language Model, Transformers · Author: bunnycore · Downloads: 22 · Likes: 2
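Mergekit-style merging happens purely in parameter space: weights from several checkpoints of the same architecture are combined tensor by tensor. The sketch below shows the simplest form of that idea, a plain weighted average of state dicts; it is not mergekit's SCE method, and the repository names are placeholders, not the models actually fused here.

```python
# Minimal sketch of parameter-space model merging: a plain weighted average of
# several fine-tuned checkpoints that share one architecture. This illustrates
# the general idea behind tools like mergekit; it is NOT the SCE algorithm, and
# the repo IDs below are placeholders.
import torch
from transformers import AutoModelForCausalLM

source_repos = ["org-a/qwen2.5-7b-finetune", "org-b/qwen2.5-7b-finetune"]  # placeholders
weights = [0.5, 0.5]  # merge coefficients, summing to 1.0

models = [AutoModelForCausalLM.from_pretrained(r, torch_dtype=torch.bfloat16)
          for r in source_repos]
states = [m.state_dict() for m in models]
merged_state = states[0].copy()

with torch.no_grad():
    for name, tensor in merged_state.items():
        if not torch.is_floating_point(tensor):
            continue  # leave integer buffers (if any) untouched
        merged_state[name] = sum(
            w * s[name].float() for w, s in zip(weights, states)
        ).to(tensor.dtype)

models[0].load_state_dict(merged_state)  # reuse the first model as the container
models[0].save_pretrained("merged-7b")
```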
## Open Instruct Code Alpaca 7b

A 7B-parameter LLaMA model fine-tuned on the Code Alpaca dataset, specializing in code generation tasks.

Tags: Large Language Model, Transformers, English · Author: allenai · Downloads: 29 · Likes: 2
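As a usage sketch, the snippet below loads a code-instruction model with Hugging Face transformers and prompts it in the standard Alpaca instruction format; the repository ID and prompt template are assumptions based on this listing, so check the model card before relying on them.

```python
# Hypothetical usage sketch: load the model and prompt it with an Alpaca-style
# instruction template. The repo ID and exact prompt format are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "allenai/open-instruct-code-alpaca-7b"  # assumed repository ID
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=False)
# Print only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```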
## Llava Video 7B Qwen2 TPO

LLaVA-Video-7B-Qwen2-TPO is a video understanding model built on LLaVA-Video-7B-Qwen2 with temporal preference optimization, showing strong performance across multiple benchmarks.

License: MIT · Tags: Video-to-Text, Transformers · Author: ruili0 · Downloads: 490 · Likes: 1
## Dolphinhermespro ModelStock

A merge of the 7B-parameter Dolphin-2.8 and Hermes-2-Pro models created with the LazyMerge toolkit, based on the Mistral-7B architecture.

License: Apache-2.0 · Tags: Large Language Model, Transformers · Author: Kquant03 · Downloads: 14 · Likes: 1
## Calme 7B Instruct V0.9

Calme-7B is a 7-billion-parameter language model fine-tuned from Mistral-7B, aimed at generating clear, calm, and coherent text.

License: Apache-2.0 · Tags: Large Language Model, Transformers · Author: MaziyarPanahi · Downloads: 25 · Likes: 10
## Percival 01 7b Slerp

Percival_01-7b-slerp is a 7B-parameter large language model, ranked second on the Open LLM Leaderboard, created by merging liminerity/M7-7b and Gille/StrangeMerges_32-7B-slerp with the LazyMergekit tool.

License: Apache-2.0 · Tags: Large Language Model, Transformers · Author: AurelPx · Downloads: 24 · Likes: 4
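Slerp merging interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line, which tends to preserve the geometry of both parents better than plain averaging. Below is a minimal per-tensor sketch of the formula, with the usual fallback to linear interpolation when the tensors are nearly parallel; it is illustrative, not mergekit's exact implementation.

```python
# Minimal sketch of spherical linear interpolation (slerp) between two
# flattened weight tensors, the per-tensor operation behind slerp merges.
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Interpolate a fraction t of the way from v0 to v1 along the arc between them."""
    v0_flat, v1_flat = v0.flatten().float(), v1.flatten().float()
    v0_unit = v0_flat / (v0_flat.norm() + eps)
    v1_unit = v1_flat / (v1_flat.norm() + eps)
    dot = torch.clamp(torch.dot(v0_unit, v1_unit), -1.0, 1.0)
    omega = torch.acos(dot)  # angle between the two tensors
    if omega.abs() < 1e-4:   # nearly parallel: fall back to linear interpolation
        out = (1.0 - t) * v0_flat + t * v1_flat
    else:
        out = (torch.sin((1.0 - t) * omega) * v0_flat
               + torch.sin(t * omega) * v1_flat) / torch.sin(omega)
    return out.reshape(v0.shape).to(v0.dtype)

# Example: blend two same-shaped weight matrices halfway along the arc.
a, b = torch.randn(8, 8), torch.randn(8, 8)
merged = slerp(0.5, a, b)
```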
## Synatra 7B V0.3 RP Mistral 7B Instruct V0.2 Slerp

A spherical linear interpolation (slerp) merge of the instruction-tuned Mistral-7B-Instruct-v0.2 and the role-play-oriented Synatra-7B-v0.3-RP, combining instruction following with role-playing capability.

License: Apache-2.0 · Tags: Large Language Model, Transformers · Author: MaziyarPanahi · Downloads: 25 · Likes: 1
## Dpopenhermes 7B V2

DPOpenHermes 7B v2 is the second RL fine-tune of OpenHermes-2.5-Mistral-7B, trained with Direct Preference Optimization (DPO) on the Intel/orca_dpo_pairs and allenai/ultrafeedback_binarized_cleaned preference datasets.

License: Apache-2.0 · Tags: Large Language Model, Transformers, English · Author: openaccess-ai-collective · Downloads: 30 · Likes: 31
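DPO skips training an explicit reward model and optimizes the policy directly on preference pairs: it pushes up the log-probability margin of the chosen response over the rejected one, measured relative to a frozen reference model. A minimal sketch of the loss on a single pair:

```python
# Minimal sketch of the DPO loss for one preference pair. Inputs are summed
# log-probabilities of the chosen and rejected responses under the policy
# being trained and under a frozen reference model.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp: torch.Tensor,
             policy_rejected_logp: torch.Tensor,
             ref_chosen_logp: torch.Tensor,
             ref_rejected_logp: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """-log sigmoid(beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected)))."""
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio))

# Example with made-up log-probabilities for a single pair.
loss = dpo_loss(torch.tensor(-12.0), torch.tensor(-15.0),
                torch.tensor(-13.0), torch.tensor(-14.0))
print(loss.item())
```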
## Synapsellm 7b Mistral V0.4 Preview2

SynapseLLM is a 7B-parameter large language model fine-tuned from Mistral by WebraftAI, specializing in code and general Q&A.

License: Apache-2.0 · Tags: Large Language Model, Transformers, Multilingual · Author: WebraftAI · Downloads: 108 · Likes: 1
## Amberchat

AmberChat is an instruction-following model fine-tuned from LLM360/Amber, part of the LLM360 Pebble model series.

License: Apache-2.0 · Tags: Large Language Model, Transformers, English · Author: LLM360 · Downloads: 4,790 · Likes: 24
## Llama2 7b Finance

A Llama 2 7B language model fine-tuned on financial datasets, designed for the finance domain and specializing in extracting, understanding, and generating finance-related text.

License: MIT · Tags: Large Language Model, Transformers, English · Author: cxllin · Downloads: 228 · Likes: 19