Qwen2.5 The Wisemen QwQ Deep Tiny Sherlock 32B

Developed by DavidAU
Built on the QwQ-32B reasoning and thinking model, it merges in components from several top-tier reasoning models, with a focus on reducing 'overthinking' in responses. It is well suited to creative use cases and in-depth reasoning.
Downloads: 763
Release Date: 4/25/2025

Model Overview

This is a 32B-parameter large language model designed for reasoning and thinking tasks. It improves reasoning depth and the quality of creative output by merging several top-tier reasoning models into a single base.

Model Features

Multi-Model Fusion
Combines the strengths of four top-tier reasoning models: QwQ-32B, DeepSeek-R1-Distill-Qwen-32B, TinyR1-32B-Preview, and Deductive-Reasoning-Qwen-32B
Reduced Overthinking
Model merging techniques reduce 'overthinking' during generation, making outputs more direct and efficient
Creative Output Optimization
Especially suitable for creative use cases, capable of generating more in-depth and detailed reasoning content
128k Context
Supports a context window of up to 128k tokens, well suited to long texts and complex reasoning tasks (see the loading sketch below)
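
A minimal loading sketch, assuming the model is published on Hugging Face under a repo id similar to the model name (the exact id below is a placeholder; verify it on the model page) and that it uses the standard Qwen2-style transformers loading path:

```python
# Minimal sketch, assuming a Hugging Face repo id based on the model name
# (placeholder; confirm the exact id on the model page) and standard
# Qwen2-style transformers compatibility. Requires `accelerate` for device_map.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DavidAU/Qwen2.5-The-Wisemen-QwQ-Deep-Tiny-Sherlock-32B"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # a 32B model in bf16 needs roughly 64 GB+ of GPU memory
    device_map="auto",            # shard across available GPUs / offload to CPU as needed
)

# The advertised 128k context is a property of the model config; very long inputs
# still require correspondingly large amounts of memory for the KV cache.
print(model.config.max_position_embeddings)
```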

Model Capabilities

Text Generation
Reasoning
Chain of Thought (CoT)
Creative Writing
Complex Problem Solving

Use Cases

Creative Writing
Horror Scene Generation
Generate vivid first-person horror scene descriptions
Capable of generating horror scenes with rich sensory details and tense atmospheres
Reasoning & Problem Solving
Puzzle Solving
Solve complex logic puzzles and reasoning problems
Works through puzzles step by step to reach a solution (see the prompting sketch below)
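
A short prompting sketch for a step-by-step reasoning task, reusing the `tokenizer` and `model` objects from the loading sketch above. The sampling settings are generic illustrations, not recommendations from the model card.

```python
# Hypothetical logic-puzzle prompt; the model is expected to emit its chain of
# thought before the final answer, so leave room in max_new_tokens.
messages = [
    {"role": "user", "content": "Three friends finished a race in different positions. "
                                "Alice did not finish last, Bob finished after Alice, "
                                "and Carol did not win. Who finished first?"},
]

inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    inputs,
    max_new_tokens=1024,   # room for the reasoning trace plus the answer
    temperature=0.6,       # illustrative sampling settings, not official values
    top_p=0.95,
    do_sample=True,
)

# Strip the prompt tokens and print only the newly generated reasoning and answer.
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=True))
```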