
Llama 65B Instruct

Developed by Upstage
A 65B-parameter instruction-tuned large language model from Upstage, built on the LLaMA architecture with support for long-context processing
Downloads: 144
Release date: 7/17/2023

Model Overview

This is a 65B-parameter instruction-tuned large language model built on Meta's LLaMA architecture and optimized for instruction following and long-text processing.

Model Features

Long-text processing
Supports context lengths beyond 10k tokens via rope_scaling (scaling of rotary position embeddings)
Instruction optimization
Fine-tuned on Orca-style datasets to improve instruction-following ability
Efficient inference
Supports 8-bit quantized loading to lower hardware requirements
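The long-context feature above rests on linear RoPE scaling: position indices are divided by a scaling factor, so positions far beyond the original training window map back into the angle range the model saw during training. The sketch below illustrates the idea in plain Python; the dimension count and the scaling factor of 8 are illustrative assumptions, not values confirmed for this model.

```python
def rope_angles(position: int, dim: int = 8, base: float = 10000.0,
                scaling_factor: float = 1.0):
    """Return the RoPE rotation angles for a single position.

    With linear rope_scaling, the position index is divided by
    `scaling_factor`, compressing long positions into the range the
    model was trained on. Factor 8 here is a hypothetical value.
    """
    scaled_pos = position / scaling_factor
    # One inverse frequency per pair of dimensions, as in standard RoPE.
    return [scaled_pos / (base ** (2 * i / dim)) for i in range(dim // 2)]

# With factor 8, position 16384 produces the same angles the model saw
# at position 2048 (the original LLaMA context length) during training.
assert rope_angles(16384, scaling_factor=8.0) == rope_angles(2048)
```

This is why a model pretrained with a 2048-token window can address a much longer context after fine-tuning with scaled positions.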

Model Capabilities

Text generation
Instruction following
Long-text comprehension
Q&A systems
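Putting the pieces together, a hypothetical loading sketch with the Hugging Face `transformers` API might look as follows. The repo id, the rope_scaling factor, and the device mapping are all assumptions for illustration; the import happens lazily inside the function so the sketch can be read and checked without the heavy dependencies installed.

```python
# Hypothetical values: the repo id and scaling factor are assumptions,
# not confirmed by this page.
MODEL_ID = "upstage/llama-65b-instruct"  # assumed Hugging Face repo id

LOAD_KWARGS = {
    "load_in_8bit": True,                               # 8-bit quantized weights
    "rope_scaling": {"type": "linear", "factor": 8.0},  # extend the context window
    "device_map": "auto",                               # spread layers across GPUs
}

def load_model():
    """Load the model with 8-bit quantization and linear RoPE scaling.

    `transformers` and `bitsandbytes` are imported lazily so this
    sketch does not require them at module load time.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, **LOAD_KWARGS)
    return tokenizer, model
```

An 8-bit load roughly halves memory versus fp16, which is what makes a 65B model reachable on a single multi-GPU node.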

Use Cases

Intelligent assistants
Multi-turn dialogue systems
Building intelligent assistants capable of understanding long conversation contexts
Knowledge Q&A
Complex question answering
Handling complex questions requiring long-context understanding
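For the assistant and Q&A use cases above, Orca-style fine-tunes typically expect a prompt with System/User/Assistant sections. The exact section markers below are an assumption modeled on common Orca-style templates, not a template confirmed by this page.

```python
def build_prompt(user_message: str, system_message: str = "") -> str:
    """Assemble an Orca-style prompt with System/User/Assistant sections.

    The "### Section:" markers are assumed from common Orca-style
    templates; check the model's own card for the exact format.
    """
    parts = []
    if system_message:
        parts.append(f"### System:\n{system_message}\n")
    parts.append(f"### User:\n{user_message}\n")
    parts.append("### Assistant:\n")  # the model continues from here
    return "\n".join(parts)
```

The trailing empty Assistant section cues the model to generate its reply rather than continue the user's text.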