🚀 Unsloth: Fine-tune LLMs 2-5x Faster with 70% Less Memory!
Unsloth enables users to fine-tune large language models (LLMs) 2-5x faster while using 70% less memory. It offers various quantization techniques and free notebooks for different models, making LLM fine-tuning more accessible and efficient.
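As a rough illustration of where the memory savings come from, here is a back-of-the-envelope comparison of weight memory at 16-bit vs. 4-bit precision. This is a sketch under simplified assumptions (weights only; it ignores optimizer state, gradients, activations, and LoRA adapters, which all affect the real figure):

```python
# Weight-memory comparison for a 3B-parameter model (toy arithmetic,
# not a measurement from Unsloth itself).
PARAMS = 3_000_000_000

fp16_bytes = PARAMS * 2    # 16-bit weights: 2 bytes per parameter
int4_bytes = PARAMS * 0.5  # 4-bit weights: 0.5 bytes per parameter

savings = 1 - int4_bytes / fp16_bytes
print(f"fp16: {fp16_bytes / 1e9:.1f} GB, 4-bit: {int4_bytes / 1e9:.1f} GB")
print(f"weight-memory savings: {savings:.0%}")  # 75% on weights alone
```

Quantization alone saves roughly 75% on the weights; the headline "70% less memory" figure additionally reflects the other training-time buffers that 4-bit quantization does not shrink.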
🚀 Quick Start
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: Llama 3.2 (3B) Notebook
✨ Features
Free Fine-tuning Notebooks
- All notebooks are beginner-friendly. Just add your dataset, click "Run All", and you'll get a 2x faster fine-tuned model that can be exported to GGUF, served with vLLM, or uploaded to Hugging Face.
| Model | Notebook Link | Performance | Memory use |
|----------|-------------------------------|-------------|----------|
| Llama-3.2 (3B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Llama-3.2 (11B vision) | ▶️ Start on Colab | 2x faster | 60% less |
| Qwen2 VL (7B) | ▶️ Start on Colab | 1.8x faster | 60% less |
| Llama-3.1 (8B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Phi-3.5 (mini) | ▶️ Start on Colab | 2x faster | 50% less |
| Gemma 2 (9B) | ▶️ Start on Colab | 2.4x faster | 58% less |
| Mistral (7B) | ▶️ Start on Colab | 2.2x faster | 62% less |
Notebook Use Cases
- The Llama 3.2 conversational notebook is useful for ShareGPT ChatML / Vicuna templates.
- The text completion notebook is for raw text. The [DPO notebook](https://colab.research.google.com/drive/15vttTpzzVXv_tJwEk-hIcQ0S9FcEWvwP?usp=sharing) replicates Zephyr.
- Kaggle has 2x T4s, but we use only 1; due to overhead, 1x T4 is 5x faster.
Dynamic 4-bit Quants
Unsloth's [Dynamic 4-bit Quants](https://unsloth.ai/blog/dynamic-4bit) quantize parameters selectively, greatly improving accuracy over standard 4-bit quantization.
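The core idea of selective quantization can be sketched as follows. This is a toy illustration, not Unsloth's actual implementation: the layer names, the "sensitive layer" flag, and the simple min-max 4-bit scheme are all placeholders for the real, more sophisticated selection logic:

```python
import random

def quantize_4bit(weights):
    """Uniform min-max quantization of a list of floats to 16 levels (4 bits),
    returned in dequantized form so the rounding error is visible."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 15 or 1.0  # 16 levels; guard against a constant layer
    codes = [round((w - lo) / scale) for w in weights]
    return [lo + c * scale for c in codes]

def selective_quantize(layers, sensitive):
    """Quantize every layer to 4 bits EXCEPT those flagged as sensitive,
    which stay in full precision -- the idea behind dynamic 4-bit quants."""
    return {name: (w if name in sensitive else quantize_4bit(w))
            for name, w in layers.items()}

random.seed(0)
layers = {f"layer{i}": [random.gauss(0, 1) for _ in range(64)] for i in range(4)}
out = selective_quantize(layers, sensitive={"layer0"})

assert out["layer0"] == layers["layer0"]  # sensitive layer kept exact
err = max(abs(a - b) for a, b in zip(out["layer1"], layers["layer1"]))
print(f"max 4-bit rounding error in layer1: {err:.3f}")
```

Keeping a few accuracy-critical layers in higher precision costs little extra memory but avoids the largest quantization errors, which is why selective schemes beat uniformly quantizing every layer.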
Collection of Llama 3.1 Versions
See [our collection](https://huggingface.co/collections/unsloth/llama-31-collection-6753dca76f47d9ce1696495f) for versions of Llama 3.1, including GGUF & 4-bit formats.
Special Thanks
A huge thank you to the Meta and Llama team for creating and releasing these models.
📚 Documentation
Model Information
| Property | Details |
|----------|---------|
| Model Type | The Meta Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text-only models are optimized for multilingual dialogue use cases, including agentic retrieval and summarization tasks. They outperform many of the available open-source and closed chat models on common industry benchmarks. |
| Model Developer | Meta |
| Model Architecture | Llama 3.2 is an auto-regressive language model that uses an optimized transformer architecture. The tuned versions use supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF) to align with human preferences for helpfulness and safety. |
| Supported Languages | English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai are officially supported. Llama 3.2 has been trained on a broader collection of languages than these 8 supported languages. Developers may fine-tune Llama 3.2 models for languages beyond these supported languages, provided they comply with the Llama 3.2 Community License and the Acceptable Use Policy. Developers are always expected to ensure that their deployments, including those that involve additional languages, are completed safely and responsibly. |
| Model Release Date | Sept 25, 2024 |
| Status | This is a static model trained on an offline dataset. Future versions may be released that improve model capabilities and safety. |
| License | Use of Llama 3.2 is governed by the [Llama 3.2 Community License](https://github.com/meta-llama/llama-models/blob/main/models/llama3_2/LICENSE) (a custom, commercial license agreement). |
Where to send questions or comments about the model: instructions on how to provide feedback or comments on the model can be found in the model [README](https://github.com/meta-llama/llama3). For more technical information about generation parameters and recipes for how to use Llama 3.1 in applications, please go [here](https://github.com/meta-llama/llama-recipes).
Community Links