# Lightweight instruction fine-tuning
## Phi 3.5 Mini Instruct

Lexius · MIT · Large Language Model · Transformers · Other · 129 downloads · 1 like

Phi-3.5-mini-instruct is a lightweight, state-of-the-art open model built on the datasets used for Phi-3, with a focus on high-quality, reasoning-dense data. It supports a 128K token context length and has strong multilingual and long-context capabilities.
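
For quick experimentation, the model can be loaded with the `transformers` library. The sketch below assumes the upstream Hugging Face repo ID `microsoft/Phi-3.5-mini-instruct`; substitute this listing's mirror ID if it differs.

```python
# Minimal chat sketch for Phi-3.5-mini-instruct via transformers.
# Assumes the upstream repo ID; swap in a mirror ID if needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3.5-mini-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the footprint light
    device_map="auto",
)

messages = [{"role": "user", "content": "Summarize GGUF in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
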
## Solarav2 Coder 0511

summerstars · Apache-2.0 · Large Language Model · Transformers · English · 1,766 downloads · 1 like

SolaraV2 is an upgraded version of the original Solara model: a lightweight, instruction-fine-tuned language model developed by high school students, suitable for daily conversation and educational tasks.
## Tiny Random Llama 4

llamafactory · Apache-2.0 · Large Language Model · Transformers · 1,736 downloads · 0 likes

This is a tiny, randomly initialized model that mirrors the Llama-4-Scout-17B-16E-Instruct architecture, giving users a lightweight option for testing and debugging pipelines.
## Doge 160M Instruct

SmallDoge · Apache-2.0 · Large Language Model · Transformers · English · 2,223 downloads · 12 likes

Doge 160M is a small language model based on a dynamic masked attention mechanism, trained with supervised fine-tuning (SFT) followed by direct preference optimization (DPO).
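
As a rough illustration of that two-stage recipe, the sketch below wires SFT and DPO together with Hugging Face's TRL library. The dataset IDs and step counts are placeholders, not SmallDoge's actual training setup, and the API shown follows recent TRL releases (`processing_class=` was `tokenizer=` in older versions).

```python
# Illustrative SFT -> DPO pipeline with TRL; datasets and hyperparameters
# are placeholders, not SmallDoge's actual configuration.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer, SFTConfig, SFTTrainer

base_id = "SmallDoge/Doge-160M"  # assumed base checkpoint ID
# Doge uses a custom architecture, so remote code may be required.
model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)

# Stage 1: supervised fine-tuning on instruction/response conversations.
sft_trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=load_dataset("trl-lib/Capybara", split="train"),
    args=SFTConfig(output_dir="doge-160m-sft", max_steps=500),
)
sft_trainer.train()

# Stage 2: direct preference optimization on (prompt, chosen, rejected) pairs.
dpo_trainer = DPOTrainer(
    model=sft_trainer.model,  # start from the SFT checkpoint
    processing_class=tokenizer,
    train_dataset=load_dataset("trl-lib/ultrafeedback_binarized", split="train"),
    args=DPOConfig(output_dir="doge-160m-dpo", max_steps=500),
)
dpo_trainer.train()
```
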
## Phi 3 Mini 4k Instruct Q4 K M GGUF

matrixportal · MIT · Large Language Model · Supports Multiple Languages · 67 downloads · 3 likes

This model was converted from microsoft/Phi-3-mini-4k-instruct to GGUF format (Q4_K_M quantization) using llama.cpp via ggml.ai's GGUF-my-repo space.
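
A GGUF build like this can be run locally with `llama-cpp-python`. The repo ID and filename glob below follow GGUF-my-repo's usual naming and are assumptions; check the actual repository for the exact file.

```python
# Running the Q4_K_M build locally with llama-cpp-python.
# Repo ID and filename pattern are assumed, not verified.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="matrixportal/Phi-3-mini-4k-instruct-Q4_K_M-GGUF",  # assumed ID
    filename="*q4_k_m.gguf",  # glob matching the quantized weights file
    n_ctx=4096,               # the model's 4K context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What does Q4_K_M quantization mean?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
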
## Phi 3 Mini 4k Instruct Onnx

microsoft · MIT · Large Language Model · Transformers · 370 downloads · 137 likes

Phi-3 Mini is a lightweight, state-of-the-art open model focused on high-quality, reasoning-dense data and supports a 4K token context length; this repository provides it in ONNX format for accelerated inference.
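
One way to drive an ONNX build like this is the `onnxruntime-genai` package. The sketch below follows that package's early-version generation loop (newer releases replaced `params.input_ids` with `generator.append_tokens`), and the model folder is a placeholder for a locally downloaded copy of the repository.

```python
# Token-by-token generation with onnxruntime-genai (early-version API).
# The model folder path is a placeholder, not a verified repo layout.
import onnxruntime_genai as og

model = og.Model("./phi-3-mini-4k-instruct-onnx/cpu-int4")  # placeholder path
tokenizer = og.Tokenizer(model)

# Phi-3's chat template, written out by hand for a single-turn prompt.
prompt = "<|user|>\nExplain the 4K context window.<|end|>\n<|assistant|>\n"

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode(prompt)

generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()

print(tokenizer.decode(generator.get_sequence(0)))
```
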
## Gemma 2b It Pytorch

google · Large Language Model · 76 downloads · 11 likes

Gemma is a family of lightweight open large language models developed by Google; this repository provides the 2-billion-parameter instruction-tuned checkpoint in PyTorch format, suitable for text generation tasks.