
ArliAI QwQ-32B-ArliAI-RpR-v4 GGUF

Developed by bartowski
A 32B-parameter large language model based on ArliAI/QwQ-32B-ArliAI-RpR-v4, quantized with llama.cpp at various precisions and suitable for text generation tasks.
Downloads: 1,721
Release Time: 5/22/2025

Model Overview

This is a quantized 32B-parameter large language model that supports English text generation and is released under the Apache-2.0 license. The model offers multiple quantization versions from BF16 to IQ2_XXS, catering to different hardware configurations.
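A single quantization can be fetched with the huggingface_hub Python client. The sketch below is illustrative only: the repository id and GGUF file name shown are assumptions, so check the repository's file listing for the exact names before downloading.

```python
# Minimal sketch: download one quantization file from the Hugging Face Hub.
# The repo_id and filename below are assumed; verify them against the repository.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="bartowski/ArliAI_QwQ-32B-ArliAI-RpR-v4-GGUF",  # assumed repository id
    filename="ArliAI_QwQ-32B-ArliAI-RpR-v4-Q4_K_M.gguf",    # assumed file name (Q4_K_M quant)
)
print(model_path)  # local path to the downloaded GGUF file
```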

Model Features

Multiple quantization options
Offers over 20 quantization versions from high-precision BF16 to ultra-low-precision IQ2_XXS, meeting diverse hardware requirements.
Optimized inference performance
Supports ARM/AVX CPU acceleration and uses llama.cpp's online weight repacking to improve inference speed on supported hardware.
Embedded weight optimization
Some quantized versions (e.g., Q3_K_XL) use Q8_0 quantization for embedding and output weights to improve quality.
Broad compatibility
Runs in LM Studio, llama.cpp, and projects based on llama.cpp (see the loading sketch below).
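As one example of llama.cpp-based usage, a downloaded GGUF file can be loaded through the llama-cpp-python bindings. The sketch below assumes a locally downloaded Q4_K_M file and illustrative settings; adjust the context size and GPU offload to your hardware.

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python and generate text.
# The model path and generation settings are illustrative assumptions.
from llama_cpp import Llama

llm = Llama(
    model_path="ArliAI_QwQ-32B-ArliAI-RpR-v4-Q4_K_M.gguf",  # assumed local file
    n_ctx=8192,        # context window; reduce if memory is limited
    n_gpu_layers=-1,   # offload all layers to GPU when a GPU build is installed
)

out = llm("Write a short story about a lighthouse keeper.", max_tokens=256)
print(out["choices"][0]["text"])
```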

Model Capabilities

English text generation
Long-text processing
Dialogue systems
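For dialogue use, llama-cpp-python also exposes an OpenAI-style chat interface. The sketch below assumes the same locally downloaded quant file as above; the prompts are hypothetical.

```python
# Minimal dialogue sketch, assuming the same local GGUF file as in the previous example.
from llama_cpp import Llama

llm = Llama(model_path="ArliAI_QwQ-32B-ArliAI-RpR-v4-Q4_K_M.gguf", n_ctx=8192)

reply = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize what GGUF quantization is in two sentences."},
    ],
    max_tokens=200,
)
print(reply["choices"][0]["message"]["content"])
```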

Use Cases

Text generation
Creative writing
Generate creative text content such as stories and poems.
Dialogue systems
Build intelligent conversational assistants.
Content creation
Article generation
Automatically generate various types of article content.