
MythoMax-L2-13B GGUF FP16

Developed by py-sandy
This is the FP16 (16-bit floating-point) GGUF conversion of Gryphe's MythoMax-L2-13B, intended for full-quality local inference on GPUs with ample VRAM.
Downloads: 380
Release Date: 5/3/2025

Model Overview

This model retains the original FP16 weights with no quantization, and is suited to instruction-following tasks, role-play and creative writing, emotionally nuanced conversation, and long-context generation experiments.
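Full precision has a concrete VRAM cost. A back-of-the-envelope sketch (the helper names are ours, and the layer count and hidden size used for the KV-cache estimate are the standard Llama-2-13B dimensions, an assumption about this model's architecture):

```python
def fp16_weight_gb(n_params: float) -> float:
    """Weights alone: 2 bytes per parameter in FP16."""
    return n_params * 2 / 1e9

def kv_cache_gb(n_ctx: int, n_layers: int = 40, hidden: int = 5120) -> float:
    """FP16 KV cache: 2 tensors (K and V) x 2 bytes x ctx x layers x hidden.
    The defaults are the Llama-2-13B values (assumed, not from this card)."""
    return 2 * 2 * n_ctx * n_layers * hidden / 1e9

print(f"weights: ~{fp16_weight_gb(13e9):.0f} GB")      # ~26 GB
print(f"KV cache @ 4096: ~{kv_cache_gb(4096):.1f} GB")  # ~3.4 GB
```

In other words, the weights alone need roughly 26 GB before the KV cache and runtime overhead, which is why this FP16 release targets high-VRAM GPUs rather than consumer cards.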

Model Features

Full precision retention: stores the weights in 16-bit floating point, for maximum-quality local inference
Large context support: handles context lengths of 4096+ tokens
High-quality output: particularly well suited to text generation that calls for emotional nuance and creativity

Model Capabilities

Text generation
Instruction fine-tuning
Role-playing
Creative writing
Emotional conversation
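The instruction-following and role-play capabilities above depend on prompting the model in the template it was tuned with; the upstream MythoMax-L2-13B card recommends Alpaca-style formatting. A minimal sketch of that template (the helper name and example text are ours, not part of this release):

```python
def alpaca_prompt(instruction: str, response_start: str = "") -> str:
    """Wrap an instruction in the Alpaca-style template.

    `response_start` can seed the reply, e.g. a character name for
    role-play. Helper name and defaults are illustrative assumptions.
    """
    return (
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
        f"{response_start}"
    )

print(alpaca_prompt("Write a short scene between two old rivals meeting again."))
```

For multi-turn role-play, the conversation history is typically carried inside the instruction block or appended turn by turn before the final `### Response:` marker.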

Use Cases

Creative writing
Role-playing: generate character dialogue with rich emotion and distinct personalities; produce natural, fluid character interactions
Story creation: assist with long-form stories or novels; generate coherent, creative narrative content
Dialogue systems
Emotional conversation: conduct emotionally rich, empathetic conversations; produce natural, emotionally resonant replies
Experimental research
Long-context experiments: test the model's behavior in long-context settings; supports 4096+ tokens of context memory
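When running long-context experiments, it helps to check that a prompt leaves room in the 4096-token window for the model's reply. A rough sketch using the common ~4-characters-per-token heuristic (the function name, defaults, and heuristic are ours; an exact count requires the actual tokenizer, e.g. the one bundled with llama.cpp):

```python
def fits_context(text: str, n_ctx: int = 4096,
                 chars_per_token: float = 4.0, reserve: int = 512) -> bool:
    """Estimate whether `text` fits in the context window while leaving
    `reserve` tokens free for generation. Uses a ~4 chars/token heuristic,
    which is only approximate for English prose."""
    est_tokens = len(text) / chars_per_token
    return est_tokens + reserve <= n_ctx

print(fits_context("word " * 1000))  # ~1250 estimated tokens -> True
```

If the check fails, the usual options are truncating the oldest turns, summarizing earlier context, or raising the context length if the runtime supports it.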