
Qwen3 235B A22B INT4 W4A16

Developed by justinjja
Qwen3 is the latest generation of large language models in the Tongyi Qianwen (Qwen) series. This release is a 235B-parameter Mixture of Experts (MoE) model whose memory footprint is significantly reduced by INT4 (W4A16) quantization.
Downloads: 4,234
Release Date: 4/30/2025

Model Overview

A 235B-parameter Mixture of Experts model that supports switching between thinking and non-thinking modes, with strong reasoning, multilingual, and agent capabilities.

Model Features

Dual-Mode Dynamic Switching
Supports seamless switching between thinking mode (for complex reasoning) and non-thinking mode (for efficient conversation) within a single model
Enhanced Reasoning Capabilities
Outperforms previous-generation models in mathematics, code generation, and common-sense logical reasoning
Professional-Level Agent
Interfaces precisely with external tools and performs strongly on complex agent tasks
Efficient Quantization
INT4 weight quantization (W4A16: 4-bit weights, 16-bit activations) significantly reduces memory requirements while largely preserving the original model's accuracy
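A rough back-of-the-envelope sketch of why 4-bit weight quantization matters at this scale. The figures below are estimates for weight storage only, assuming 235e9 parameters, 2 bytes per parameter for BF16, and 0.5 bytes for INT4; they ignore activations, the KV cache, and quantization metadata such as scales and zero-points, so real deployments need somewhat more memory.

```python
# Estimated weight memory for Qwen3-235B at different precisions.
# Assumptions: 235e9 params; BF16 = 2 bytes/param; INT4 = 0.5 bytes/param.
# Excludes activations, KV cache, and quantization scales/zero-points.

PARAMS = 235e9

bf16_gb = PARAMS * 2 / 1e9    # full-precision-ish baseline
int4_gb = PARAMS * 0.5 / 1e9  # W4A16: weights packed at 4 bits

print(f"BF16 weights ≈ {bf16_gb:.0f} GB")
print(f"INT4 weights ≈ {int4_gb:.1f} GB ({bf16_gb / int4_gb:.0f}x smaller)")
```

The 4x reduction in weight storage is what moves a 235B model from multi-node territory toward a single multi-GPU server.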

Model Capabilities

Complex Logical Reasoning
Multi-Turn Dialogue
Multilingual Translation
Code Generation
Tool Invocation
Creative Writing
Role-Playing

Use Cases

Intelligent Assistant
Multi-Turn Dialogue System
Supports in-depth Q&A in thinking mode and efficient interaction in non-thinking mode
Delivers a more natural, engaging conversation experience
Development Assistance
Code Generation and Explanation
Uses thinking mode to work through complex programming problems
Improves development efficiency
© 2025 AIbase