Mistral Small 3.1 24B Instruct 2503 GGUF
A GGUF-quantized build of Mistral Small 3.1 24B Instruct 2503 for use with llama.cpp and compatible applications, focused on text generation tasks.
Downloads: 17.91k
Release Date: 2025-03-17
Model Overview
This is a GGUF-quantized version of the 24B-parameter Mistral Small 3.1 Instruct model. It is suited to text generation tasks and supports the Mistral chat template.
Model Features
GGUF Quantization Format
Converted to the GGUF format, so it can be loaded by llama.cpp and compatible applications (see the sketch below)
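As an illustration only (not part of the original model card), here is a minimal sketch of loading a GGUF file of this model with the llama-cpp-python bindings; the file name and quantization level are assumed placeholders:

```python
# Minimal sketch: load a GGUF quantization of Mistral Small 3.1 24B Instruct
# with llama-cpp-python. The file name and quant level (Q4_K_M) are assumed
# placeholders, not taken from this model card.
from llama_cpp import Llama

llm = Llama(
    model_path="Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if a GPU build is installed
)

out = llm("Write a one-sentence summary of what GGUF is.", max_tokens=64)
print(out["choices"][0]["text"])
```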
24B Parameter Scale
A medium-scale language model with 24 billion parameters
Instruction Fine-tuned Version
Fine-tuned on instruction data, making it better suited to dialogue and instruction-following tasks (see the sketch below)
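A hedged sketch of an instruction-following call through llama-cpp-python's chat-completion API, which in recent versions applies the chat template stored in the GGUF metadata; the model path is again an assumed placeholder:

```python
# Sketch of an instruction-following call. Recent llama-cpp-python releases read
# the chat template from the GGUF metadata and format the messages accordingly.
from llama_cpp import Llama

llm = Llama(model_path="Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf")  # assumed path

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain instruction fine-tuning in two sentences."},
    ],
    max_tokens=128,
    temperature=0.7,
)
print(reply["choices"][0]["message"]["content"])
```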
Model Capabilities
Text generation
Instruction following
Dialogue systems
Use Cases
Dialogue systems
Smart assistant
Can be used to build conversational AI assistants (see the sketch after this list)
Content generation
Text creation
Can be used for generating articles, stories, and other textual content
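As an illustration of the assistant and content-generation use cases above, a minimal sketch of a multi-turn chat loop built on the same llama-cpp-python bindings; the model path, system prompt, and limits are assumptions, not part of the original model card:

```python
# Minimal multi-turn assistant loop (sketch). The model path is an assumed
# placeholder; the full history is resent on every turn so the chat template
# can be applied to the whole conversation.
from llama_cpp import Llama

llm = Llama(model_path="Mistral-Small-3.1-24B-Instruct-2503-Q4_K_M.gguf", n_ctx=8192)

history = [{"role": "system", "content": "You are a helpful writing assistant."}]

while True:
    user = input("You: ")
    if not user.strip():
        break
    history.append({"role": "user", "content": user})
    resp = llm.create_chat_completion(messages=history, max_tokens=512)
    answer = resp["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    print("Assistant:", answer)
```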