Qwen2.5 MOE 2X1.5B DeepSeek Uncensored Censored 4B GGUF

Developed by DavidAU
This is a Qwen2.5 MOE (Mixture of Experts) model composed of two Qwen 2.5 DeepSeek 1.5B models (one censored/regular, one uncensored), forming a 4B model in which the uncensored DeepSeek Qwen 2.5 1.5B dominates the model's behavior.
Downloads: 678
Release date: 3/4/2025

Model Overview

This model offers roughly double the reasoning/thinking capacity of a single DeepSeek Distill 1.5B model and performs strongly for its size. It is suited to a range of tasks, such as text generation, problem solving, and story creation.

Model Features

Mixture of Experts Architecture
Composed of two 1.5B models forming a 4B model, providing stronger reasoning and thinking capabilities.
Uncensored
The uncensored version of DeepSeek Qwen 2.5 1.5B dominates the model's behavior, suitable for broader use cases.
128k Context Length
Supports long-context processing, ideal for complex tasks and long-form content generation.
Mathematical/Logical Problem Solving
Built on Qwen's 1.5B math model, it leans toward mathematical/logical problem solving and scientific applications.
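
As a GGUF release, the model is typically run with llama.cpp. The sketch below builds a `llama-cli` invocation from Python; the model file name and sampling settings are assumptions, not taken from the card — check the repository for the exact quantization you downloaded (e.g. Q4_K_M, Q6_K).

```python
# Minimal sketch: running this GGUF model through llama.cpp's llama-cli.
# The file name below is an assumption; substitute the quant you downloaded.
import subprocess

MODEL_PATH = "Qwen2.5-MOE-2X1.5B-DeepSeek-4B.Q4_K_M.gguf"  # assumed file name


def build_command(prompt: str, n_ctx: int = 8192) -> list[str]:
    """Build a llama-cli invocation.

    The model supports up to a 128k (131072-token) context, but a smaller
    window keeps memory use modest for everyday prompts.
    """
    return [
        "llama-cli",
        "-m", MODEL_PATH,
        "-c", str(n_ctx),   # context window; raise toward 131072 for long inputs
        "--temp", "0.7",    # assumed sampling temperature
        "-p", prompt,       # prompt text (kept last for easy inspection)
    ]


if __name__ == "__main__":
    # Requires llama.cpp's llama-cli on PATH and the GGUF file on disk.
    subprocess.run(build_command("Generate six story plots involving time travel."))
```

Larger `-c` values trade memory for longer documents; the 128k maximum is only needed for very long-context tasks.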

Model Capabilities

Text Generation
Problem Solving
Brainstorming
Puzzle Solving
Story Generation
Plot Generation
Storytelling
Fictional Story Creation
Writing
Novel Creation

Use Cases

Creative Writing
Story Generation
Generate fictional stories or plots, such as conceptualizing episodes for 'Star Trek'.
Generate six story plots involving time travel.
Role-Playing Game Descriptions
Generate descriptive text for game scenarios, such as a quarrel between goblins in a cave.
Generate creative metaphorical descriptions related to the scene.
Problem Solving
Technical Solution Generation
Explain methods to reduce Earth's global temperature using nocturnal radiative cooling technology.
Generate detailed explanations of eight methods and techniques.