# Multi-task Instruction Fine-tuning

## Qwen2.5 7B Fuse Exp
A language model merged with the mergekit tool using the SCE method, combining multiple 7B-parameter-scale models.
*Tags:* Large Language Model, Transformers · *Author:* bunnycore · *Downloads:* 22 · *Likes:* 2
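Merging tools like mergekit operate in parameter space. As a hedged illustration of the general idea (a plain linear "model soup", not the SCE method itself, which additionally selects and filters parameter deltas per tensor), a minimal sketch might look like this; all names are illustrative:

```python
# Minimal sketch of linear parameter-space model merging ("model soup").
# Illustrates the general idea behind tools like mergekit; it is NOT the
# SCE method, which selects and filters parameter deltas per tensor.

def merge_linear(state_dicts, weights):
    """Weighted element-wise average of several models' parameter dicts.

    state_dicts: list of {param_name: list-of-floats}, same shapes.
    weights: one mixing coefficient per model (should sum to 1).
    """
    assert len(state_dicts) == len(weights)
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for sd, w in zip(state_dicts, weights))
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Two toy "models" with a single 3-element tensor each.
a = {"layer.weight": [1.0, 2.0, 3.0]}
b = {"layer.weight": [3.0, 4.0, 5.0]}
print(merge_linear([a, b], [0.5, 0.5]))
```

Real merges average full tensors per layer; the mixing weights (and, for SCE, the per-tensor selection rules) come from the merge config.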
## Llama 3.1 8B Instruct Uz Q8_0 GGUF
An 8B-parameter model based on the Llama-3.1 architecture, supporting instruction following and text generation in Uzbek and English.
*Tags:* Large Language Model, Supports Multiple Languages · *Author:* azimjon · *Downloads:* 31 · *Likes:* 0
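The Q8_0 in the name refers to llama.cpp's 8-bit blockwise quantization, which stores one scale per block of 32 weights plus signed 8-bit values. A rough pure-Python sketch of the arithmetic (not the exact GGUF byte layout):

```python
# Sketch of 8-bit blockwise quantization in the spirit of GGUF's Q8_0:
# each block of 32 weights stores a single scale plus 32 int8 values.
# The real format packs these into a fixed byte layout; this shows only
# the quantize/dequantize arithmetic.

BLOCK = 32

def quantize_q8_0(values):
    blocks = []
    for i in range(0, len(values), BLOCK):
        chunk = values[i:i + BLOCK]
        amax = max(abs(v) for v in chunk)
        scale = amax / 127.0 if amax > 0 else 1.0
        q = [max(-127, min(127, round(v / scale))) for v in chunk]
        blocks.append((scale, q))
    return blocks

def dequantize_q8_0(blocks):
    out = []
    for scale, q in blocks:
        out.extend(scale * v for v in q)
    return out

weights = [0.01 * i for i in range(64)]
restored = dequantize_q8_0(quantize_q8_0(weights))
err = max(abs(a - b) for a, b in zip(weights, restored))
print(f"max reconstruction error: {err:.5f}")
```

The per-block scale keeps the reconstruction error bounded by roughly half a quantization step of the largest value in each block.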
## Blabbertron 1.2
Built on the Qwen2.5-7B-Instruct base model, it combines the strengths of several 7B-scale models through model ensemble techniques.
*Tags:* Large Language Model, Transformers · *Author:* bunnycore · *Downloads:* 39 · *Likes:* 2
## Multilingual E5 Large Instruct Q6_K GGUF
*License:* MIT
A multilingual E5 large instruction model supporting text-embedding and classification tasks across more than 100 languages.
*Tags:* Large Language Model, Supports Multiple Languages · *Author:* kcccat · *Downloads:* 27 · *Likes:* 1
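Instruction-tuned E5 checkpoints expect the *query* side of a retrieval pair to be wrapped in a task instruction, while documents are embedded as-is. A small sketch of that prompt format, mirroring the convention shown on the multilingual-e5-large-instruct model card (verify against the card for your exact checkpoint):

```python
# E5-instruct models prepend a task instruction to queries only;
# documents are embedded without one. Format per the E5-instruct
# model card convention.

def detailed_instruct(task: str, query: str) -> str:
    return f"Instruct: {task}\nQuery: {query}"

task = "Given a web search query, retrieve relevant passages that answer the query"
queries = [detailed_instruct(task, "how much protein should a female eat")]
documents = ["Protein requirements vary with body weight and activity level."]

# queries + documents are then fed to the embedding model together.
print(queries[0])
```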
## Cognitivecomputations Dolphin3.0 R1 Mistral 24B GGUF
Dolphin3.0-R1-Mistral-24B is a 24B-parameter large language model based on the Mistral architecture, trained by Eric Hartford, with a focus on reasoning and first-principles analysis.
*Tags:* Large Language Model, English · *Author:* bartowski · *Downloads:* 10.24k · *Likes:* 72
## Llama 3 KafkaLM 8B V0.1
KafkaLM 8B is a German large language model fine-tuned from Llama 3 8B, specializing in German business scenarios.
*Tags:* Large Language Model, Transformers, Supports Multiple Languages · *Author:* seedboxai · *Downloads:* 17 · *Likes:* 13
## Meta Llama 3 70B
Meta's Llama 3 family of large language models, available as 8B and 70B pre-trained and instruction-tuned generative text models, optimized for dialogue and performing strongly on industry benchmarks.
*Tags:* Large Language Model, Transformers, English · *Author:* meta-llama · *Downloads:* 15.32k · *Likes:* 857
## Capytessborosyi 34B 200K DARE Ties
*License:* Other
A 34B-parameter large language model merged with mergekit using the DARE-TIES method. Based on the Yi-34B-200K architecture, it combines the capabilities of three models: Nous-Capybara-34B, Tess-M-v1.3, and airoboros-3_1-yi-34b-200k.
*Tags:* Large Language Model, Transformers, English · *Author:* brucethemoose · *Downloads:* 88 · *Likes:* 16
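DARE ("Drop And REscale") randomly prunes each fine-tuned delta (fine-tuned weight minus base weight) and rescales the survivors so the expected delta is unchanged, before TIES-style combination. A minimal sketch of the drop-and-rescale step (illustrative only, not mergekit's implementation):

```python
import random

# DARE zeroes each delta parameter with probability p, then rescales the
# survivors by 1 / (1 - p) so the expected delta is unchanged.
# Illustrative sketch; mergekit's dare_ties additionally applies
# TIES-style sign election when combining several models.

def dare(delta, p, rng):
    return [0.0 if rng.random() < p else d / (1.0 - p) for d in delta]

rng = random.Random(0)
delta = [0.5, -1.0, 0.25, 2.0]
sparse = dare(delta, p=0.5, rng=rng)
print(sparse)  # each entry is either 0.0 or the original value rescaled by 2x
```

At high drop rates most deltas are removed, which is what lets several fine-tunes be merged with limited interference.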
## Agentlm 7b
AgentLM-7B is an agent-enhanced language model obtained by mixed training of the Llama-2-chat model on the AgentInstruct and ShareGPT datasets.
*Tags:* Large Language Model, Transformers · *Author:* THUDM · *Downloads:* 196 · *Likes:* 51
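"Mixed training" here means interleaving agent-trajectory data with general dialogue data at a fixed ratio. A toy sketch of ratio-based sampling (the actual AgentLM mixing ratio comes from the AgentTuning recipe; the value below is purely illustrative):

```python
import random

# Toy sketch of mixed-training data sampling: draw from the agent dataset
# with probability `agent_ratio`, otherwise from the general dialogue set.
# The ratio 0.2 below is illustrative, not the published AgentLM setting.

def mixed_samples(agent_data, general_data, agent_ratio, n, rng):
    out = []
    for _ in range(n):
        pool = agent_data if rng.random() < agent_ratio else general_data
        out.append(rng.choice(pool))
    return out

rng = random.Random(0)
agent = ["agent_traj_1", "agent_traj_2"]
general = ["sharegpt_chat_1", "sharegpt_chat_2", "sharegpt_chat_3"]
print(mixed_samples(agent, general, agent_ratio=0.2, n=10, rng=rng))
```

Keeping a share of general dialogue data in the mix is what preserves the base model's chat ability while the agent data adds tool-use and planning behavior.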
## Redpajama INCITE 7B Chat
*License:* Apache-2.0
A 6.9-billion-parameter dialogue language model developed by Together in collaboration with several AI research institutions, trained on the RedPajama-Data-1T dataset and fine-tuned for dialogue on OASST1 and Dolly2 data.
*Tags:* Large Language Model, Transformers, English · *Author:* togethercomputer · *Downloads:* 178 · *Likes:* 93
## Flan Ul2
*License:* Apache-2.0
An encoder-decoder model based on the T5 architecture, optimized with Flan prompt tuning and supporting multilingual task processing.
*Tags:* Large Language Model, Transformers, Supports Multiple Languages · *Author:* google · *Downloads:* 3,350 · *Likes:* 554
## Flan T5 Xxl
*License:* Apache-2.0
FLAN-T5 is an instruction-fine-tuned language model based on T5; fine-tuning on more than 1,000 multilingual tasks yields stronger performance at the same parameter count.
*Tags:* Large Language Model, Supports Multiple Languages · *Author:* google · *Downloads:* 157.41k · *Likes:* 1,238
## Flan T5 Large
*License:* Apache-2.0
FLAN-T5 is an instruction-fine-tuned language model based on T5, supporting 60+ languages; fine-tuning on 1,000+ tasks yields stronger performance at the same parameter count.
*Tags:* Large Language Model, Supports Multiple Languages · *Author:* google · *Downloads:* 589.25k · *Likes:* 749