
Llama 2 7B 32K

Developed by togethercomputer
An open-source long-context language model fine-tuned from Meta's Llama-2 7B model, supporting a 32K-token context length
Downloads: 5,411
Released: July 26, 2023

Model Overview

LLaMA-2-7B-32K is an open-source long-context language model developed by Together. It extends Llama-2's context length from 4K to 32K tokens using position interpolation, making it suitable for tasks such as multi-document QA and long-text summarization.
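The overview names position interpolation but does not show the mechanism. The sketch below illustrates the standard linear formulation (Chen et al., 2023): positions in the extended 32K window are rescaled into the original 4K range, so rotary-embedding angles stay within what the base model saw during pre-training. The constants follow Llama-2 7B's published configuration (head dimension 128, pre-training context 4096); the function names are illustrative and are not Together's training code.

```python
import torch

ORIGINAL_CTX = 4096    # Llama-2's pre-training context length
EXTENDED_CTX = 32768   # target context length after interpolation
SCALE = ORIGINAL_CTX / EXTENDED_CTX  # 1/8: compress positions eightfold

def rope_inverse_frequencies(dim: int, base: float = 10000.0) -> torch.Tensor:
    """Standard rotary-embedding inverse frequencies for one attention head."""
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

def interpolated_angles(seq_len: int, dim: int) -> torch.Tensor:
    """Rotation angles with positions rescaled into the original range.

    Position t in [0, 32K) maps to t * SCALE in [0, 4K), so no rotary
    angle falls outside the range seen during pre-training.
    """
    positions = torch.arange(seq_len).float() * SCALE
    return torch.outer(positions, rope_inverse_frequencies(dim))

angles = interpolated_angles(EXTENDED_CTX, dim=128)  # 128 = Llama-2 head dim
print(angles.shape)  # torch.Size([32768, 64])
```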

Model Features

Extended Context
The model is trained to handle contexts of up to 32K tokens, a significant increase over the original Llama-2's 4K limit.
Pre-training and Instruction Tuning
The data recipe is publicly available and combines pre-training and instruction-tuning data.
Fine-tuning Examples
Fine-tuning examples are provided for specific applications, including book summarization and long-context QA.
Software Support
Updated inference and training stacks support efficient inference and fine-tuning at 32K context length; a minimal loading sketch follows this list.
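The card does not include loading code, so the following is a minimal sketch using Hugging Face transformers, assuming the checkpoint published as togethercomputer/LLaMA-2-7B-32K. The trust_remote_code flag mirrors the upstream model card's custom attention code; the prompt and generation settings are illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative loading sketch, not the official example.
MODEL_ID = "togethercomputer/LLaMA-2-7B-32K"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,   # half precision keeps the 7B weights near 14 GB
    device_map="auto",           # place layers on available devices automatically
    trust_remote_code=True,      # upstream card ships custom attention code
)

# Hypothetical long-context prompt; real inputs can run up to 32K tokens.
prompt = "Summarize the following chapter:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```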

Model Capabilities

Long-text generation
Multi-document QA
Long-text summarization
Instruction following

Use Cases

Academic Research
Multi-document QA
Identify and use the correct answer document among multiple Wikipedia document fragments; a hypothetical prompt layout is sketched after this section.
Content Generation
Book Summarization
Generate chapter-level summaries of novels, plays, and other literary works, supporting long narrative summarization tasks.
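The multi-document QA use case above packs several candidate documents and a question into one long prompt. The card does not specify a prompt format, so the layout below is a hypothetical illustration of how the 32K window might be filled.

```python
# Hypothetical prompt layout for multi-document QA; the format and
# document contents are illustrative, not an evaluation protocol.
documents = [
    ("Doc 1", "Marie Curie won the Nobel Prize in Physics in 1903..."),
    ("Doc 2", "The Eiffel Tower was completed in 1889..."),
]
question = "In what year did Marie Curie win her first Nobel Prize?"

prompt = "\n\n".join(f"[{title}]\n{text}" for title, text in documents)
prompt += f"\n\nQuestion: {question}\nAnswer:"
```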