
Mythomax L2 13B AWQ

Developed by TheBloke
An AWQ-quantized version of MythoMax L2 13B that improves inference efficiency.
Downloads: 1,555
Released: 9/19/2023

Model Overview

This is an AWQ-quantized version of Gryphe's MythoMax L2 13B model, intended for efficient inference.

Model Features

Efficient Quantization
Uses the AWQ method for 4-bit quantization, providing faster Transformer-based inference than GPTQ.
Multi-platform Support
Works with vLLM, a continuous-batching inference server, enabling high-throughput concurrent inference in multi-user server scenarios.
Multiple Formats Available
Model files are provided in multiple quantization formats (AWQ, GPTQ, and GGUF) as well as the original unquantized fp16 model.
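The 4-bit quantization mentioned above can be illustrated with a toy group-wise quantizer: each group of weights is mapped to integers 0–15 with a per-group scale and zero point. This is only a sketch of the general idea; real AWQ additionally uses activation-aware scaling and packed GPU kernels, and the function names here are invented for illustration.

```python
def quantize_4bit(weights, group_size=128):
    """Toy group-wise asymmetric 4-bit quantization (illustrative only)."""
    quantized, scales, zeros = [], [], []
    for i in range(0, len(weights), group_size):
        group = weights[i:i + group_size]
        lo, hi = min(group), max(group)
        scale = (hi - lo) / 15 or 1.0  # 4 bits -> 16 levels (0..15)
        # Store each weight as an integer level plus per-group (scale, zero).
        quantized.append([round((w - lo) / scale) for w in group])
        scales.append(scale)
        zeros.append(lo)
    return quantized, scales, zeros


def dequantize_4bit(quantized, scales, zeros):
    """Reconstruct approximate fp weights from 4-bit levels."""
    out = []
    for q_group, scale, zero in zip(quantized, scales, zeros):
        out.extend(q * scale + zero for q in q_group)
    return out
```

Each weight is stored in 4 bits instead of 16, which is where the roughly 4x memory reduction (and the corresponding bandwidth savings at inference time) comes from; the reconstruction error is bounded by half a quantization step per group.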

Model Capabilities

Text Generation
Efficient Inference
Multi-user Concurrent Processing
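Because vLLM supports AWQ checkpoints, the capabilities above can be exercised with a short offline-inference script. The sketch below assumes the Hugging Face repo id `TheBloke/MythoMax-L2-13B-AWQ` and the Alpaca prompt template commonly used with MythoMax L2; `main()` requires a CUDA GPU with vLLM installed, so it is defined but not called here.

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)


def format_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style template used by MythoMax L2."""
    return ALPACA_TEMPLATE.format(instruction=instruction)


def main() -> None:
    # Assumption: run on a CUDA machine with `pip install vllm`.
    from vllm import LLM, SamplingParams

    # quantization="awq" tells vLLM to load the 4-bit AWQ weights.
    llm = LLM(model="TheBloke/MythoMax-L2-13B-AWQ", quantization="awq")
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=256)
    outputs = llm.generate([format_prompt("Write a short bedtime story.")], params)
    print(outputs[0].outputs[0].text)
```

Passing a list of many prompts to `llm.generate` is what exercises vLLM's continuous batching, which is how the model serves concurrent users with high throughput.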

Use Cases

Text Generation
Dialogue Generation
Generating natural-language dialogue responses that are smooth and coherent.
Content Creation
Assisting with creative writing tasks such as story generation.