GPT4-x-Alpaca-13B-native-4bit-128g
A 13B-parameter language model fine-tuned on GPT-4-generated, Alpaca-style instructions, with support for 4-bit quantized inference
Downloads 344
Release Date: 4/1/2023
Model Overview
This is a large language model that combines GPT-4-generated training data with Alpaca-style instruction fine-tuning, making it suitable for natural language understanding and generation tasks. GPTQ quantization compresses it to 4-bit precision, significantly reducing hardware requirements.
Model Features
4-bit quantization
Compressed to 4-bit precision with GPTQ using a group size of 128, greatly reducing VRAM requirements
Instruction fine-tuning
Fine-tuned on the GPTeacher dataset to enhance task execution capabilities
Dual-branch support
Provides both Triton and CUDA quantized versions to suit different runtime environments; see the loading sketch after this list
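Below is a minimal loading sketch, assuming the AutoGPTQ Python library and a locally downloaded copy of the quantized weights; the model directory path and the `use_triton` flag value are illustrative placeholders, not official settings from this page.

```python
# Minimal sketch: loading the 4-bit GPTQ checkpoint with AutoGPTQ (assumed library).
# The model directory below is a placeholder for wherever the weights are stored locally.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_dir = "path/to/gpt4-x-alpaca-13b-native-4bit-128g"  # placeholder path

tokenizer = AutoTokenizer.from_pretrained(model_dir, use_fast=False)

# use_triton=True selects the Triton kernels; False falls back to the CUDA branch.
model = AutoGPTQForCausalLM.from_quantized(
    model_dir,
    device="cuda:0",
    use_triton=False,
)
```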
Model Capabilities
Text generation
Instruction understanding
Q&A systems
Content creation
Use Cases
Education
Intelligent tutoring
Answers student questions and provides learning guidance
Content creation
Article generation
Generates coherent text content based on prompts
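Building on the loading sketch above, here is a minimal generation example. The Alpaca-style prompt template is an assumption about the model's expected input format (it is not confirmed by this page), and the sampling parameters are illustrative.

```python
# Minimal sketch: Alpaca-style prompting for content generation.
# Reuses `tokenizer` and `model` from the loading sketch; the instruction
# template below follows the common Alpaca format (an assumption).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n"
    "Write a short article about the benefits of 4-bit quantization.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)

# Strip the prompt tokens and print only the newly generated continuation.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```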