
LGAI-EXAONE EXAONE-Deep-2.4B GGUF

Quantized by bartowski
A quantized version of LGAI-EXAONE's EXAONE-Deep-2.4B model, produced with llama.cpp and supporting English and Korean text generation.
Downloads: 304
Release Time: 3/18/2025

Model Overview

This is a quantized (GGUF) version of EXAONE-Deep-2.4B for text generation, distributed in multiple quantization levels so it can be adapted to different hardware; a minimal usage sketch follows.
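As a minimal sketch of running such a GGUF file locally, the snippet below uses the llama-cpp-python bindings. The file name EXAONE-Deep-2.4B-Q4_K_M.gguf, the context size, and the thread count are assumptions for illustration, not values taken from this page.

# Minimal sketch: local inference on a quantized GGUF file with llama-cpp-python.
# The model path is an assumption; substitute whichever quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="EXAONE-Deep-2.4B-Q4_K_M.gguf",  # assumed file name
    n_ctx=4096,      # context window; adjust to available memory
    n_threads=8,     # CPU threads to use
)

out = llm(
    "Explain what GGUF quantization is in one paragraph.",
    max_tokens=200,
    temperature=0.7,
)
print(out["choices"][0]["text"])

Smaller quantization levels (for example Q4_K_M or Q2_K) trade some output quality for lower memory use, so the same code works across a wide range of machines.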

Model Features

Multiple quantization options
Offers quantization levels from BF16 down to Q2_K to match different hardware and performance needs (a download sketch follows after this list).
ARM/AVX optimization support
Supports llama.cpp's online repacking of weights to improve performance on ARM and AVX-capable machines.
High-quality quantization
Quantized with llama.cpp's imatrix (importance matrix) option, which helps keep output quality close to the original model.
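As a sketch of selecting one quantization level and fetching it, the snippet below uses huggingface_hub. The repository id bartowski/LGAI-EXAONE_EXAONE-Deep-2.4B-GGUF and the Q4_K_M file name are assumptions based on the naming pattern described on this page; check the repository's file list for the exact names.

# Sketch: download one specific quantization file from the Hugging Face repo.
from huggingface_hub import hf_hub_download

repo_id = "bartowski/LGAI-EXAONE_EXAONE-Deep-2.4B-GGUF"      # assumed repo id
filename = "LGAI-EXAONE_EXAONE-Deep-2.4B-Q4_K_M.gguf"        # assumed file name

local_path = hf_hub_download(repo_id=repo_id, filename=filename)
print("Downloaded to:", local_path)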

Model Capabilities

Text generation
Multilingual support
Quantized inference

Use Cases

Text generation
Multilingual text generation
Generates coherent text in English or Korean (a chat-style sketch follows below).
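A minimal sketch of multilingual chat-style generation with llama-cpp-python; the model file name and the Korean prompt are illustrative assumptions.

# Sketch: chat-style generation with a Korean prompt.
from llama_cpp import Llama

llm = Llama(model_path="EXAONE-Deep-2.4B-Q4_K_M.gguf", n_ctx=4096)  # assumed file name

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    # "Please explain artificial intelligence in one paragraph." (Korean)
    {"role": "user", "content": "인공지능에 대해 한 문단으로 설명해 주세요."},
]
reply = llm.create_chat_completion(messages=messages, max_tokens=200)
print(reply["choices"][0]["message"]["content"])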