T5-3B Q4_K_M GGUF
This is a quantized version of google-t5/t5-3b, converted to GGUF format with llama.cpp via ggml.ai's GGUF-my-repo space.
Release Date: 10/31/2024
Model Overview
This is a GGUF-quantized build of the T5-3B model, intended primarily for summarization and translation tasks across multiple languages.
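A minimal loading sketch, assuming the quantized weights are saved locally as t5-3b-q4_k_m.gguf (a typical GGUF-my-repo file name, not confirmed by this page) and that an installed llama-cpp-python build is recent enough to include llama.cpp's T5 encoder-decoder support:

```python
# Minimal sketch: load the quantized checkpoint with llama-cpp-python.
# "t5-3b-q4_k_m.gguf" is an assumed file name; point it at the actual
# GGUF file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="t5-3b-q4_k_m.gguf",  # assumed local path to the GGUF file
    n_ctx=512,                       # T5 models use a 512-token context by default
)

# T5 expects a task prefix in the prompt (see the use-case sketch further below).
output = llm("translate English to German: The house is wonderful.", max_tokens=64)
print(output["choices"][0]["text"])
```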
Model Features
Multilingual support
Supports text processing in multiple languages including English, French, Romanian, German, and more.
GGUF quantized format
Uses the GGUF format for quantization, reducing model size and improving inference efficiency (a typical conversion pipeline is sketched after this list).
Based on T5 architecture
Built on Google's T5-3B model, offering excellent text generation and comprehension capabilities.
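For context, here is a hedged sketch of the conversion-and-quantization pipeline that llama.cpp provides and that the GGUF-my-repo space automates. The script path, local directory names, and output file names below are assumptions for a local llama.cpp checkout; the exact steps used to produce this repository may differ.

```python
# Hedged sketch of the usual llama.cpp pipeline: convert the Hugging Face
# checkpoint to a full-precision GGUF file, then quantize it to Q4_K_M.
# All paths below are assumptions for a local setup.
import subprocess

# 1. Convert a locally downloaded google-t5/t5-3b checkpoint to GGUF (f16).
subprocess.run(
    [
        "python", "llama.cpp/convert_hf_to_gguf.py", "t5-3b",
        "--outfile", "t5-3b-f16.gguf", "--outtype", "f16",
    ],
    check=True,
)

# 2. Quantize the f16 GGUF file to the Q4_K_M scheme.
subprocess.run(
    ["llama.cpp/llama-quantize", "t5-3b-f16.gguf", "t5-3b-q4_k_m.gguf", "Q4_K_M"],
    check=True,
)
```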
Model Capabilities
Text summarization
Multilingual translation
Text generation
Use Cases
Text processing
Automatic summarization
Generates summaries of long texts, extracting the key information (see the prompt sketch after this list).
Multilingual translation
Translates text from one language to another.
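A hedged sketch of how these use cases map onto prompts, reusing the llm object from the loading sketch above. The summarize: and translate English to French: prefixes follow the task-prefix convention documented for the original T5 checkpoints; the input texts are only illustrations.

```python
# Summarization via T5's "summarize:" task prefix, reusing `llm` from the
# loading sketch above. The input text is an arbitrary illustrative example.
long_text = (
    "The Eiffel Tower is 324 metres tall, about the same height as an "
    "81-storey building, and was the tallest man-made structure in the "
    "world for 41 years after its completion in 1889."
)

summary = llm(f"summarize: {long_text}", max_tokens=48)
print(summary["choices"][0]["text"])

# Translation works the same way, with the languages named in the prefix.
translation = llm("translate English to French: The weather is nice today.", max_tokens=48)
print(translation["choices"][0]["text"])
```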