GemmaX2-28-2B GGUF

Developed by Tonic
The GemmaX2-28-2B GGUF release is a set of quantized variants of GemmaX2-28-2B-v0.1, designed for multilingual machine translation across 28 languages.
Downloads: 258
Release Time: 2/26/2025

Model Overview

The underlying model is fine-tuned from GemmaX2-28-2B-Pretrain and designed specifically for multilingual machine translation. The GGUF quantized variants optimize it for efficient inference in resource-constrained environments while preserving translation quality.
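As a rough illustration of how one of these quantized files can be run locally, the sketch below uses llama-cpp-python. The model file name and the translation prompt template are assumptions, not part of this release; check both against the upstream GemmaX2-28-2B-v0.1 model card before use.

```python
# Minimal local-inference sketch with llama-cpp-python (assumptions noted below).
from llama_cpp import Llama

llm = Llama(
    model_path="GemmaX2-28-2B-q8_0.gguf",  # hypothetical file name: use the quant you downloaded
    n_ctx=2048,    # context window
    n_threads=4,   # tune to the target device
)

# Prompt template assumed from the base model's documented translation format; verify upstream.
prompt = (
    "Translate this from Chinese to English:\n"
    "Chinese: 我爱机器翻译\n"
    "English:"
)

out = llm(prompt, max_tokens=128, temperature=0.0, stop=["\n"])
print(out["choices"][0]["text"].strip())
```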

Model Features

Multilingual Support
Supports translation tasks in 28 languages, covering a wide range of linguistic needs.
Efficient Inference
Optimized through GGUF quantization, suitable for deployment in resource-constrained environments such as edge devices and low-memory systems.
Multiple Quantization Formats
Ships in several quantization formats (f16, bf16, q8_0, tq1_0, tq2_0) to trade off precision against memory footprint and speed.
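To fetch a single quantization rather than the whole repository, one option is the Hugging Face Hub client, as sketched below. The repo_id and filename are placeholders; substitute the actual repository and the variant you want (e.g. q8_0 for higher fidelity, tq1_0/tq2_0 for the smallest footprint).

```python
# Sketch: download one specific quantization file from the Hugging Face Hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="Tonic/GemmaX2-28-2B-gguf",   # placeholder repo id
    filename="GemmaX2-28-2B-q8_0.gguf",   # placeholder file name
)
print("Model downloaded to:", path)
```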

Model Capabilities

Multilingual Translation
Efficient Inference
Quantization Optimization

Use Cases

Real-time Translation
Mobile Device Translation: enables offline multilingual translation on mobile devices, with low-latency inference.
Research
Quantization Performance Research: investigates the trade-off between quantization level and translation quality; the multiple quantization formats make side-by-side comparison straightforward (see the timing sketch below).
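A minimal sketch of such a comparison follows, assuming two locally downloaded variants; the file names are placeholders. This only measures model size and generation latency; translation quality would need to be judged separately (e.g. with BLEU or COMET on a held-out test set).

```python
# Sketch: rough size/latency comparison across quantization levels (placeholder paths).
import os
import time
from llama_cpp import Llama

QUANT_FILES = {
    "q8_0": "GemmaX2-28-2B-q8_0.gguf",    # placeholder file paths
    "tq2_0": "GemmaX2-28-2B-tq2_0.gguf",
}

# Prompt template assumed from the base model's documented translation format; verify upstream.
prompt = (
    "Translate this from German to English:\n"
    "German: Maschinelle Übersetzung ist nützlich.\n"
    "English:"
)

for name, path in QUANT_FILES.items():
    llm = Llama(model_path=path, n_ctx=2048, verbose=False)
    start = time.perf_counter()
    out = llm(prompt, max_tokens=64, temperature=0.0, stop=["\n"])
    elapsed = time.perf_counter() - start
    size_gb = os.path.getsize(path) / 1e9
    print(f"{name}: {size_gb:.2f} GB on disk, {elapsed:.2f}s, "
          f"output: {out['choices'][0]['text'].strip()}")
```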