
DeepSeek-qwen-Bllossom-32B

Developed by UNIVA-Bllossom
DeepSeek-qwen-Bllossom-32B is built upon the DeepSeek-R1-Distill-Qwen-32B model, aiming to enhance reasoning performance in Korean environments.
Downloads: 167
Release date: 4/7/2025

Model Overview

Through additional training, this model overcomes the base model's degraded reasoning performance in Korean. It conducts its internal thought process in English and responds in the language of the user's input, significantly improving reasoning performance in Korean contexts.

Model Features

Multilingual reasoning capability
Internal thought processes are conducted in English, with responses output in the user's input language, significantly enhancing Korean reasoning performance.
High-quality training data
Training data includes Korean-English bilingual reasoning datasets covering multiple domains, providing more accurate and reliable Korean reasoning results.
Efficient distillation method
Uses an efficient distillation approach to transfer the strong reasoning capabilities of larger models to the base model, addressing the original model's shortcomings in Korean.
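
Since DeepSeek-R1-style distills conventionally emit their English chain-of-thought inside `<think>…</think>` tags before the user-language answer, a small helper can separate the two. This is a minimal sketch assuming that tag convention (inherited from the DeepSeek-R1 family; not explicitly confirmed on this page):

```python
import re

def split_reasoning(output: str) -> tuple[str, str]:
    """Split an R1-style completion into (english_reasoning, final_answer).

    Assumes the DeepSeek-R1 convention of wrapping the chain-of-thought
    in <think>...</think>; if no such block is found, reasoning is empty.
    """
    match = re.search(r"<think>(.*?)</think>", output, flags=re.DOTALL)
    if match is None:
        return "", output.strip()
    reasoning = match.group(1).strip()
    answer = output[match.end():].strip()
    return reasoning, answer
```

For example, `split_reasoning("<think>Compute 1/2 + 1/3 = 5/6.</think>답은 5/6입니다.")` yields the English reasoning and the Korean answer as separate strings.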

Model Capabilities

Korean text generation
English text generation
Complex reasoning tasks
Multi-domain knowledge Q&A
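
These capabilities can be exercised through the standard Hugging Face chat interface. The sketch below assumes the model is published under the repo id `UNIVA-Bllossom/DeepSeek-qwen-Bllossom-32B` (inferred from the developer and model names on this page; verify before use) and defers the heavy 32B download until `generate` is actually called:

```python
# Assumed Hugging Face repo id, derived from the names on this page.
MODEL_ID = "UNIVA-Bllossom/DeepSeek-qwen-Bllossom-32B"

def build_messages(user_prompt: str) -> list[dict]:
    """Build a chat-format message list; the model reasons internally in
    English and answers in the language of user_prompt."""
    return [{"role": "user", "content": user_prompt}]

def generate(user_prompt: str) -> str:
    # Imports and the model download happen lazily, only when called.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype=torch.bfloat16, device_map="auto"
    )
    inputs = tokenizer.apply_chat_template(
        build_messages(user_prompt),
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    out = model.generate(inputs, max_new_tokens=512)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)
```

A Korean prompt such as `generate("소수가 무한히 많음을 증명해 줘.")` should produce a Korean answer, per the bilingual behavior described above.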

Use Cases

Education
Math problem solving
Solves complex mathematical reasoning problems such as fraction calculations and algebra problems.
Achieved a score of 66.67 on the AIME24_ko benchmark, significantly outperforming the original model.
Research
Mathematical theorem proving
Provides multiple proofs of mathematical theorems, such as the infinitude of primes.
Can offer several proof approaches, including Euclid's proof by contradiction, a factorial-based argument, and Euler's zeta-function method.
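
As a sketch of the first approach mentioned above, Euclid's argument in its common proof-by-contradiction form can be stated compactly:

```latex
\begin{proof}[Euclid's argument, by contradiction]
Suppose there are only finitely many primes $p_1, p_2, \dots, p_k$.
Let $N = p_1 p_2 \cdots p_k + 1$. Each $p_i$ divides the product
$p_1 p_2 \cdots p_k$, so dividing $N$ by any $p_i$ leaves remainder $1$;
hence no $p_i$ divides $N$. But $N > 1$ has some prime factor, which must
therefore lie outside the list $p_1, \dots, p_k$, a contradiction.
Thus there are infinitely many primes.
\end{proof}
```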
© 2025 AIbase