# Multitasking

**Tngtech DeepSeek R1T Chimera GGUF** (DevQuasar) · 1,407 downloads · 2 likes
DeepSeek-R1T-Chimera is a text-generation model built on tngtech's technology, focused on efficient natural language processing tasks.
Tags: Large Language Model

**Andrewzh Absolute Zero Reasoner Coder 14B GGUF** (bartowski) · 1,995 downloads · 5 likes
An imatrix quantization of andrewzh's Absolute_Zero_Reasoner-Coder-14b, produced with llama.cpp and suited to reasoning and code-generation tasks.
Tags: Large Language Model

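The entry above is an imatrix quantization made with llama.cpp. As a rough illustration of the underlying idea, here is a minimal sketch of symmetric per-block integer quantization; this is not llama.cpp's actual imatrix method, which additionally weights rounding error by activation-importance statistics.

```python
def quantize_block(weights, bits=8):
    """Symmetric per-block quantization: store integer codes plus one float scale."""
    qmax = 2 ** (bits - 1) - 1                      # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize_block(codes, scale):
    """Recover approximate float weights from codes and the block scale."""
    return [c * scale for c in codes]

block = [0.02, -0.54, 1.27, -1.27]
codes, scale = quantize_block(block)
restored = dequantize_block(codes, scale)
# Round-to-nearest bounds the error by half a quantization step (scale / 2).
err = max(abs(a - b) for a, b in zip(block, restored))
print(codes, err <= scale / 2)
```

Storing one float scale per block of integer codes is what shrinks the file: a 32-bit weight becomes an 8-bit (or smaller) code plus a shared scale.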
**Apriel Nemotron 15B Thinker** (ServiceNow-AI, MIT) · 1,252 downloads · 86 likes
An efficient 15-billion-parameter reasoning model from ServiceNow, with memory usage roughly half that of comparable advanced models.
Tags: Large Language Model, Transformers

**Model** (miscovery, MIT) · 277 downloads · 0 likes
A multilingual encoder-decoder transformer supporting tasks such as text summarization, translation, and question answering.
Tags: Large Language Model, Transformers, Other

**Qwen3 1.7B ONNX** (onnx-community) · 189 downloads · 1 like
Qwen3-1.7B is Alibaba Cloud's open-source 1.7B-parameter large language model, based on the Transformer architecture and packaged here in ONNX format, supporting a range of natural language processing tasks.
Tags: Large Language Model, Transformers

**Lughaat 1.0 8B Instruct** (muhammadnoman76, Apache-2.0) · 42 downloads · 2 likes
Lughaat-1.0-8B-Instruct is an Urdu large language model on the Llama 3.1 8B architecture, trained on the largest available Urdu dataset and strong at Urdu-language tasks.
Tags: Large Language Model, Transformers, Supports Multiple Languages

**Trendyol LLM 7B Chat V4.1.0** (Trendyol, Apache-2.0) · 854 downloads · 25 likes
A generative model built on Trendyol LLM base v4.0 (Qwen2.5 7B further pre-trained on 13 billion tokens), specializing in e-commerce and Turkish language understanding.
Tags: Large Language Model, Safetensors, Other

**Arcee Blitz** (arcee-ai, Apache-2.0) · 4,923 downloads · 74 likes
A 24B-parameter Mistral-architecture model distilled from DeepSeek, designed for speed and efficiency.
Tags: Large Language Model, Transformers

**Qwen 0.5B DPO 5epoch** (JayHyeon, MIT) · 25 downloads · 1 like
The model card offers only a generic description of the Hugging Face Transformers library; going by the name, this is a Qwen 0.5B model fine-tuned with DPO for five epochs.
Tags: Large Language Model, Transformers

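The "DPO" in the entry above refers to Direct Preference Optimization. A minimal sketch of the standard DPO loss for a single preference pair (illustrative only, not this author's training code):

```python
import math

def dpo_loss(policy_chosen_lp, policy_rejected_lp,
             ref_chosen_lp, ref_rejected_lp, beta=0.1):
    """Direct Preference Optimization loss for one preference pair.

    Inputs are summed log-probabilities of the chosen/rejected responses
    under the trained policy and under the frozen reference model.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the reference model does.
    logits = beta * ((policy_chosen_lp - ref_chosen_lp)
                     - (policy_rejected_lp - ref_rejected_lp))
    # -log sigmoid(logits): small when the policy widens the margin.
    return -math.log(1.0 / (1.0 + math.exp(-logits)))

# When policy and reference agree, the margin is 0 and the loss is log 2.
print(round(dpo_loss(-10.0, -12.0, -10.0, -12.0), 4))  # → 0.6931
```

Minimizing this loss pushes the policy to assign relatively more probability to the chosen response than the reference model does, without a separate reward model.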
**Qwen2.5 Aloe Beta 7B** (HPAI-BSC, Apache-2.0) · 631 downloads · 5 likes
An open-source medical large language model fine-tuned from Qwen2.5-7B, achieving state-of-the-art performance on multiple medical tasks; its training data covers 1.8 billion tokens of diverse medical tasks.
Tags: Large Language Model, Transformers, English

**Lumina mGPT 7B 1024** (Alpha-VLLM) · 27 downloads · 9 likes
Lumina-mGPT is a family of multimodal autoregressive models that generate flexible, realistic images from text descriptions and handle a range of vision-language tasks.
Tags: Text-to-Image

**Lumina mGPT 7B 768 Omni** (Alpha-VLLM) · 264 downloads · 7 likes
Lumina-mGPT is a family of multimodal autoregressive models that generate flexible, realistic images from text descriptions.
Tags: Text-to-Image, Transformers

**PersianLLaMA 13B** (ViraIntelligentDataMining) · 3,291 downloads · 11 likes
Billed as the first large language model for Persian: 13 billion parameters, trained on the Persian Wikipedia corpus and aimed at a range of natural language processing tasks.
Tags: Large Language Model, Transformers, Other

**Easy Ko Llama3 8B Instruct V1** (Easy-Systems) · 1,804 downloads · 4 likes
Easy-Systems' first LLM, fine-tuned for Korean on top of Llama3-8B-Instruct, supporting text generation in both Korean and English.
Tags: Large Language Model, Transformers, Supports Multiple Languages

**Llama Medx V3** (skumar9, Apache-2.0) · 2,598 downloads · 2 likes
A large language model built on the Hugging Face Transformers library, suited to natural language processing tasks such as text generation, translation, and question answering.
Tags: Large Language Model, Transformers

**Bahasa 4B Chat** (Bahasalab, Other) · 120 downloads · 5 likes
An Indonesian large language model built on qwen-4b and further trained on 10 billion tokens of high-quality Indonesian text.
Tags: Large Language Model, Transformers, Other

**ProLLaMA Stage 1** (GreatCaptainNemo, Apache-2.0) · 650 downloads · 2 likes
ProLLaMA is a protein large language model built on the Llama-2-7b architecture, specialized for multi-task protein language processing.
Tags: Protein Model, Transformers

**Phi 3 Mini 4K Instruct GGUF** (brittlewis12, MIT) · 170 downloads · 1 like
Phi-3-Mini-4K-Instruct is a lightweight, state-of-the-art open model with 3.8 billion parameters, trained on the Phi-3 datasets with an emphasis on high-quality, reasoning-dense data.
Tags: Large Language Model

**Mamba 1.4B Instruct HF** (scottsus) · 60 downloads · 1 like
No model card information is available, so no fuller description can be given.
Tags: Large Language Model, Transformers

**Spivavtor Large** (grammarly) · 169 downloads · 9 likes
Spivavtor-Large is an instruction-tuned Ukrainian text-editing model focused on rewriting, simplification, grammar correction, and coherence.
Tags: Large Language Model, Transformers, Other

**Sanskritayam GPT** (thtskaran) · 17 downloads · 1 like
Built with the Transformers library; the model card gives no further detail about its specific functions and uses.
Tags: Large Language Model, Transformers

**E.star.7.b** (liminerity, Apache-2.0) · 86 downloads · 2 likes
A 7B-parameter Mistral-architecture language model trained efficiently with the Unsloth and TRL libraries, performing well across multiple benchmarks.
Tags: Large Language Model, Transformers, English

**T-LLaMA** (Pagewood) · 19 downloads · 2 likes
T-LLaMA is a Tibetan large language model trained from LLaMA2-7B on a corpus of 2.2 billion Tibetan characters, performing well on text classification, generation, and summarization.
Tags: Large Language Model, Transformers, Other

**Gemma 7B Instruct Function Calling** (InterSync, CC) · 17 downloads · 6 likes
Gemma is Google's family of lightweight, state-of-the-art open large language models, developed from the Gemini research line; this variant supports English text generation with function calling.
Tags: Large Language Model, Transformers

**Gemma 2B** (google) · 402.85k downloads · 994 likes
Gemma is Google's lightweight open large language model family, built on the technology used to create the Gemini models; this is the 2-billion-parameter base version.
Tags: Large Language Model

**KafkaLM 70B German V0.1 GGUF** (TheBloke) · 1,826 downloads · 33 likes
KafkaLM 70B German V0.1 is a Llama2-based German language model developed by Seedbox, optimized for German and suited to a variety of text-generation tasks.
Tags: Large Language Model, German

**Phixtral 2x2_8** (mlabonne, MIT) · 178 downloads · 148 likes
phixtral-2x2_8 is the first Mixture of Experts (MoE) built from two microsoft/phi-2 models, and it outperforms each individual expert.
Tags: Large Language Model, Transformers, Supports Multiple Languages

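For readers unfamiliar with Mixture of Experts, here is a toy sketch of softmax-gated expert mixing; it is illustrative only, and not phixtral's actual router, which scores each token inside every MoE transformer layer.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_forward(token, experts, gate_scores):
    """Combine expert outputs, weighted by softmax router scores.

    `experts` are callables mapping an input vector to an output vector;
    `gate_scores` holds one raw router score per expert.
    """
    weights = softmax(gate_scores)
    outputs = [expert(token) for expert in experts]
    dim = len(outputs[0])
    # Weighted sum of expert outputs, dimension by dimension.
    return [sum(w * out[i] for w, out in zip(weights, outputs))
            for i in range(dim)]

# Two toy "experts": one doubles the input, one negates it.
double = lambda v: [2 * x for x in v]
negate = lambda v: [-x for x in v]
print(moe_forward([1.0, 2.0], [double, negate], [0.0, 0.0]))
# Equal gate scores → average of [2, 4] and [-1, -2] = [0.5, 1.0]
```

Real MoE layers usually keep only the top-k router scores per token, so most experts stay idle and compute cost grows slower than parameter count.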
**Kaori 70B V1** (KaeriJenti) · 907 downloads · 2 likes
kaori-70b-v1 is a LLaMA2-based large language model fine-tuned by the Kaeri and Jenti teams on the Open-Platypus, dolphin, and OpenOrca datasets.
Tags: Large Language Model, Transformers

**Athnete 13B GPTQ** (TheBloke) · 24 downloads · 4 likes
Athnete is a 13B-parameter model using the Alpaca prompt format, suited to role-play, emotional role-play, and general use; this is a GPTQ-quantized build.
Tags: Large Language Model, Transformers

**LaMini-T5-61M** (MBZUAI) · 1,287 downloads · 18 likes
LaMini-T5-61M is a 61M-parameter instruction-following model based on the T5-small architecture, fine-tuned on the LaMini-instruction dataset.
Tags: Large Language Model, Transformers, English

**Llama 7B Ru Turbo Alpaca LoRA Merged** (IlyaGusev) · 50 downloads · 10 likes
ru_turbo_alpaca is a Russian text-generation model fine-tuned Alpaca-style from LLaMA 7B, published here with the LoRA adapter merged into the base weights.
Tags: Large Language Model, Transformers, Other

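The "LoRA merged" in the entry above means the low-rank adapter has been folded into the base weights, so inference needs no extra adapter matmuls. A minimal sketch of that merge using the standard LoRA arithmetic (not this author's actual script):

```python
def matmul(a, b):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def merge_lora(w, a, b, alpha, r):
    """Fold a LoRA adapter into a base weight: W' = W + (alpha / r) * B @ A.

    w: base weight (d_out x d_in); b: d_out x r; a: r x d_in.
    """
    scale = alpha / r
    delta = matmul(b, a)          # low-rank update, same shape as w
    return [[w[i][j] + scale * delta[i][j] for j in range(len(w[0]))]
            for i in range(len(w))]

# Rank-1 example on a 2x2 weight.
w = [[1.0, 0.0], [0.0, 1.0]]
b = [[1.0], [2.0]]                # d_out x r
a = [[3.0, 4.0]]                  # r x d_in
print(merge_lora(w, a, b, alpha=2, r=1))
# B @ A = [[3, 4], [6, 8]], scaled by alpha/r = 2 → W' = [[7, 8], [12, 17]]
```

Because B @ A has the same shape as W, the merged checkpoint is no larger than the base model, and the adapter files can be discarded.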
**Fi Core News Lg** (spacy) · 53 downloads · 0 likes
A CPU-optimized Finnish pipeline from spaCy with a full set of NLP components, including POS tagging, dependency parsing, and named entity recognition.
Tags: Sequence Labeling, Other

**Fi Core News Sm** (spacy) · 45 downloads · 0 likes
A small CPU-optimized Finnish pipeline from spaCy covering token classification and dependency parsing.
Tags: Sequence Labeling, Other

**Sv Core News Lg** (spacy) · 56 downloads · 0 likes
A CPU-optimized Swedish pipeline from spaCy with complete NLP components such as part-of-speech tagging and named entity recognition.
Tags: Sequence Labeling, Other

**Sv Core News Sm** (spacy) · 87 downloads · 1 like
A small CPU-optimized Swedish pipeline from spaCy covering tokenization, part-of-speech tagging, and dependency parsing.
Tags: Sequence Labeling, Other

**ViT5 Base** (VietAI, MIT) · 7,170 downloads · 11 likes
ViT5-base is a Transformer-based encoder-decoder model pre-trained for Vietnamese, supporting a variety of text-generation tasks.
Tags: Large Language Model, Other

**Ru Core News Lg** (spacy, MIT) · 74 downloads · 8 likes
A large CPU-optimized Russian pipeline from spaCy with a complete NLP processing pipeline.
Tags: Sequence Labeling, Other

**T5 Small Standard Bahasa Cased** (mesolitica) · 28 downloads · 0 likes
A pre-trained T5-small model for standard Malay, supporting multiple natural language processing tasks.
Tags: Large Language Model, Transformers, Other

**It Core News Sm** (spacy) · 64 downloads · 2 likes
A CPU-optimized Italian pipeline from spaCy covering token classification, dependency parsing, and named entity recognition.
Tags: Sequence Labeling, Other

**Da DaCy Medium Trf** (chcaa, Apache-2.0) · 53 downloads · 4 likes
DaCy is a Danish NLP framework offering state-of-the-art pipelines and analysis capabilities for Danish.
Tags: Sequence Labeling, Other
