# Text Generation Optimization
- **Sarvamai Sarvam M GGUF** (bartowski · Apache-2.0): A quantized version of the Sarvam-m model, supporting text generation in multiple Indian languages and English. Tags: Large Language Model, Multilingual · Downloads: 845 · Likes: 1
- **Seed Coder 8B Instruct GGUF** (ZeroWw · MIT): A self-quantized build in which the output and embedding tensors are kept in f16 while the remaining tensors are quantized to q5_k or q6_k, yielding a smaller file with performance comparable to pure f16. Tags: Large Language Model, English · Downloads: 434 · Likes: 1
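The mixed-precision recipe above (f16 output/embedding tensors, lower-bit everything else) can be sketched in plain Python. This is a simplification under stated assumptions: the uniform quantizer below stands in for llama.cpp's q5_k/q6_k block formats (the real k-quants use per-block scales), and the tensor names and sizes are hypothetical.

```python
import random
import struct


def to_f16(x: float) -> float:
    # Round-trip through IEEE 754 half precision via struct's 'e' format.
    return struct.unpack('e', struct.pack('e', x))[0]


def quantize_uniform(vec, bits):
    # Symmetric uniform quantizer: a simplified stand-in for the q5_k/q6_k
    # block formats (real k-quants use per-block scales and offsets).
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in vec) / levels or 1.0
    return [round(v / scale) * scale for v in vec]


def mixed_precision_pass(tensors):
    # Output/embedding tensors stay in f16; everything else gets a 5-bit
    # treatment, mirroring the recipe described in the entry above.
    out = {}
    for name, vec in tensors.items():
        if "embed" in name or "output" in name:
            out[name] = [to_f16(v) for v in vec]
        else:
            out[name] = quantize_uniform(vec, 5)  # the real recipe also uses q6_k
    return out


random.seed(0)
tensors = {  # hypothetical tensor names, toy sizes
    "token_embed.weight": [random.gauss(0, 1) for _ in range(64)],
    "blk0.ffn.weight": [random.gauss(0, 1) for _ in range(64)],
    "output.weight": [random.gauss(0, 1) for _ in range(64)],
}
quantized = mixed_precision_pass(tensors)
for name in tensors:
    err = max(abs(a - b) for a, b in zip(tensors[name], quantized[name]))
    print(f"{name}: max abs rounding error {err:.5f}")
```

The f16 tensors round-trip almost exactly, while the 5-bit tensors show a visibly larger (but bounded) rounding error, which is the size/quality tradeoff the entry describes.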
- **Acip Llama2 13b** (MerantixMomentum): A compressible version of Llama-2-13b provided by the ACIP project, supporting dynamic adjustment of the compression ratio. Tags: Large Language Model, Transformers, English · Downloads: 27 · Likes: 1
- **Academic Ds 9B** (ByteDance-Seed · Apache-2.0): A 9-billion-parameter language model based on the DeepSeek-V3 architecture, trained from scratch on a fully open-source, English-only dataset of over 350 billion tokens, designed for open-source community development and debugging. Tags: Large Language Model, Transformers, English · Downloads: 39 · Likes: 3
- **MT3 Gen10 Gemma 2 9B** (zelk12): A merged model based on the Gemma-2-9B series that combines multiple Gemma variants with the DARE TIES method to strengthen text generation. Tags: Large Language Model, Transformers · Downloads: 30 · Likes: 3
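The DARE TIES method mentioned above combines two published ideas: DARE randomly drops each fine-tuned delta weight and rescales the survivors, and TIES merges the resulting deltas by electing a dominant sign per parameter and averaging only the agreeing contributions. A minimal sketch on toy weight vectors (the base/variant values are hypothetical):

```python
import random


def dare(delta, drop_p, rng):
    # DARE: drop each delta weight with probability p and rescale survivors
    # by 1/(1-p), keeping each parameter's expected contribution unchanged.
    return [0.0 if rng.random() < drop_p else d / (1.0 - drop_p) for d in delta]


def ties_merge(deltas):
    # TIES: per parameter, elect the dominant sign by summed value, then
    # average only the contributions that agree with that sign.
    merged = []
    for column in zip(*deltas):
        sign = 1.0 if sum(column) >= 0 else -1.0
        kept = [d for d in column if d * sign > 0]
        merged.append(sum(kept) / len(kept) if kept else 0.0)
    return merged


def dare_ties(base, finetuned_models, drop_p=0.5, seed=0):
    rng = random.Random(seed)
    deltas = [dare([f - b for f, b in zip(ft, base)], drop_p, rng)
              for ft in finetuned_models]
    return [b + d for b, d in zip(base, ties_merge(deltas))]


# Toy 4-parameter "models" standing in for two Gemma variants.
base = [0.1, -0.2, 0.3, 0.0]
variant_a = [0.3, -0.1, 0.2, 0.1]
variant_b = [0.2, -0.4, 0.4, 0.1]
print(dare_ties(base, [variant_a, variant_b]))
```

In practice this is what a mergekit `dare_ties` configuration computes over full model checkpoints rather than toy lists.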
- **Fibonacci 2 14B** (fibonacciai · MIT): A 14-billion-parameter language model based on the Phi 4 architecture, optimized for natural language processing and dialogue tasks. Tags: Large Language Model, Multilingual · Downloads: 97 · Likes: 13
- **Qwen2.5 7B Olm V1.5** (jeffmeloy · Apache-2.0): An optimized layer merging (OLM) model based on Qwen2.5-7B that improves performance through automated layer reorganization. Tags: Large Language Model, Transformers, English · Downloads: 123 · Likes: 3
- **Diffullama** (diffusionfamily · Apache-2.0): A diffusion language model fine-tuned from Llama-2-7b. Tags: Large Language Model, Transformers · Downloads: 10.88k · Likes: 8
- **L3 8B Lunar Stheno** (HiroseKoichi): A merge of L3-8B-Lunaris-v1 and L3-8B-Stheno-v3.2 that addresses overly long responses and a lack of initiative while improving context awareness and text generation. Tags: Large Language Model, Transformers · Downloads: 44 · Likes: 35
- **Prodigy 7B GGUF Imatrix** (Lewdiculous): A GGUF-Imatrix quantized version of Prodigy_7B that uses an importance matrix to improve quantization quality. Tags: Large Language Model · Downloads: 58 · Likes: 7
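The importance-matrix idea behind imatrix quantization is to weight quantization error by how heavily each weight is actually used (estimated from activations on calibration data), so the quantizer spends its precision where it matters. A minimal sketch, assuming a toy grid search over scales; the weight and importance values are hypothetical, and real imatrix quantization in llama.cpp operates per block with its k-quant formats:

```python
def quantize_with_importance(weights, importance, bits=4, grid=32):
    # Pick the quantization scale that minimises the importance-weighted
    # squared error, so heavily used weights are reproduced more faithfully.
    levels = 2 ** (bits - 1) - 1
    max_scale = max(abs(w) for w in weights) / levels
    best = None
    for k in range(1, grid + 1):
        scale = max_scale * k / grid
        deq = [min(max(round(w / scale), -levels), levels) * scale
               for w in weights]
        err = sum(i * (w - d) ** 2
                  for w, i, d in zip(weights, importance, deq))
        if best is None or err < best[0]:
            best = (err, deq, scale)
    return best[1], best[2]


weights = [0.9, -0.4, 0.05, 0.02, -0.7, 0.3]        # hypothetical weights
importance = [5.0, 1.0, 0.1, 0.1, 3.0, 1.0]          # e.g. mean squared activations
deq, scale = quantize_with_importance(weights, importance)
print(scale, deq)
```

Because the search includes the plain max-abs scale as a candidate, the importance-weighted choice can only match or beat naive quantization on the weighted error.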
- **Laser Dolphin Mixtral 2x7b Dpo** (macadeliccc · Apache-2.0): A medium-scale Mixture of Experts (MoE) built from Dolphin-2.6-Mistral-7B-DPO-Laser, with an average improvement of roughly one point across evaluations. Tags: Large Language Model, Transformers · Downloads: 133 · Likes: 57
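A sparse Mixture of Experts like the 2x7b model above routes each input to a small subset of expert networks: a router scores the experts, only the top-k run, and their outputs are mixed by renormalised gate weights. A minimal sketch with two toy "experts" (the expert functions and router weights are hypothetical stand-ins):

```python
import math


def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]


def moe_forward(x, experts, router_weights, top_k=1):
    # Router scores each expert on the input; only the top-k experts run,
    # and their outputs are combined with renormalised gate weights.
    scores = [sum(w * xi for w, xi in zip(row, x)) for row in router_weights]
    gates = softmax(scores)
    chosen = sorted(range(len(experts)), key=lambda i: gates[i], reverse=True)[:top_k]
    norm = sum(gates[i] for i in chosen)
    out = [0.0] * len(x)
    for i in chosen:
        y = experts[i](x)
        for j in range(len(out)):
            out[j] += gates[i] / norm * y[j]
    return out, chosen


# Two toy experts stand in for the two 7B experts of a 2x7b MoE.
experts = [lambda x: [2 * v for v in x], lambda x: [v + 1 for v in x]]
router = [[1.0, 0.0], [0.0, 1.0]]
out, chosen = moe_forward([3.0, -1.0], experts, router, top_k=1)
print(out, chosen)
```

With top_k=1 only one expert's parameters are active per token, which is why a 2x7b MoE costs roughly one 7B forward pass at inference despite holding two experts' weights.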
- **GPT Prompt Expansion Fooocus V2** (LykosAI): A GPT-2-based prompt expansion model designed to improve the quality and diversity of text generation prompts. Tags: Large Language Model, Transformers · Downloads: 225 · Likes: 10
- **Distilroberta Base Finetuned Wikitext2** (lamyae · Apache-2.0): A version of distilroberta-base fine-tuned on the wikitext2 dataset, primarily for text generation tasks. Tags: Large Language Model, Transformers · Downloads: 79 · Likes: 0
- **Tinybert L 4 H 312 V2 Finetuned Wikitext103** (saghar): A version of TinyBERT_L-4_H-312_v2 fine-tuned on the wikitext dataset, primarily for text-related tasks. Tags: Large Language Model, Transformers · Downloads: 20 · Likes: 0
- **Tinybert General 6L 768D Finetuned Wikitext103** (saghar): A version of TinyBERT_General_6L_768D fine-tuned on the wikitext dataset, primarily for text-related tasks. Tags: Large Language Model, Transformers · Downloads: 16 · Likes: 0
- **Distilroberta Base Finetuned Wikitext2** (Rawat29 · Apache-2.0): A version of distilroberta-base fine-tuned on the wikitext2 dataset, primarily for text generation tasks. Tags: Large Language Model, Transformers · Downloads: 47 · Likes: 0
- **T5 Small Paraphrase Pubmed** (gayanin · Apache-2.0): A version of t5-small fine-tuned on an unspecified dataset, primarily for text rewriting of PubMed-related texts. Tags: Large Language Model, Transformers · Downloads: 20 · Likes: 0
- **Simctg Wikitext103** (cambridgeltl): A GPT-2 language model trained with the SimCTG framework, using contrastive search to generate more coherent text. Tags: Large Language Model, Transformers · Downloads: 19 · Likes: 1
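Contrastive search, the decoding strategy behind SimCTG, scores each candidate token as a trade-off between model confidence and a degeneration penalty: score = (1 − α)·p(v | context) − α·max cosine similarity between the candidate's representation and those of tokens already generated, which suppresses repetitive continuations. A minimal sketch of the scoring rule on hypothetical toy vectors and probabilities:

```python
import math


def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)


def contrastive_score(prob, cand_vec, context_vecs, alpha=0.6):
    # Model confidence minus a degeneration penalty: the maximum similarity
    # between the candidate and any token already in the context.
    penalty = max(cosine(cand_vec, h) for h in context_vecs)
    return (1 - alpha) * prob - alpha * penalty


# Hypothetical hidden states: the "repetitive" candidate is nearly parallel
# to a context vector, so it is penalised despite its higher probability.
context = [[1.0, 0.0], [0.7, 0.7]]
repetitive = contrastive_score(0.9, [0.99, 0.05], context)
novel = contrastive_score(0.5, [-0.2, 1.0], context)
print(repetitive, novel)
```

In Hugging Face transformers, passing `penalty_alpha` and `top_k` to `model.generate()` enables this decoding mode for checkpoints like the one above.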
- **Distilroberta Base Finetuned Wikitext2** (lucius · Apache-2.0): A version of distilroberta-base fine-tuned on the wikitext2 dataset, primarily for text generation tasks. Tags: Large Language Model, Transformers · Downloads: 37 · Likes: 0
- **Distilroberta Base Finetuned Wikitext2** (Roy029 · Apache-2.0): A version of distilroberta-base fine-tuned on the wikitext2 dataset, primarily designed for text generation tasks. Tags: Large Language Model, Transformers · Downloads: 26 · Likes: 0
- **Distilroberta Base Finetuned Wikitext2** (Rocketknight1 · Apache-2.0): A version of distilroberta-base fine-tuned on the wikitext2 dataset, suitable for text-related tasks. Tags: Large Language Model, Transformers · Downloads: 17 · Likes: 0