T-LLaMA

Developed by Pagewood
T-LLaMA is a Tibetan large language model based on LLaMA2-7B and trained on a corpus of 2.2 billion Tibetan characters. It demonstrates strong performance in text classification, text generation, and text summarization tasks.
Downloads: 19
Release Time: 3/16/2024

Model Overview

T-LLaMA is a large language model designed specifically for Tibetan language processing. It is built on the LLaMA2-7B architecture and supports a range of Tibetan text processing tasks.

Model Features

Large-scale Tibetan Corpus
Constructed a corpus containing 2.2 billion Tibetan characters, providing rich data support for model training.
Based on LLaMA2 Architecture
Built on the advanced LLaMA2-7B model architecture, offering excellent language understanding and generation capabilities.
Multitasking Support
Supports multiple Tibetan text processing tasks, including text classification, text generation, and text summarization.

Model Capabilities

Tibetan text classification
Tibetan text generation
Tibetan text summarization
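
As an illustration of these capabilities, the sketch below shows how a LLaMA-2-based causal language model like T-LLaMA could be loaded and prompted for Tibetan text generation with Hugging Face Transformers. The repository id "Pagewood/T-LLaMA" and the generation parameters are assumptions for illustration, not confirmed values from the model's release.

```python
# Minimal sketch: loading a LLaMA-2-based Tibetan model and generating text.
# The repository id "Pagewood/T-LLaMA" is hypothetical; substitute the actual
# checkpoint location once it is published.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pagewood/T-LLaMA"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# A short Tibetan prompt ("Tibet") for the model to continue.
prompt = "བོད་"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The same loading pattern applies to the classification and summarization tasks listed above; only the prompt and decoding settings change.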

Use Cases

Text Processing
Tibetan Text Classification: Classify Tibetan texts. Achieved 79.8% accuracy on the TNCC dataset (see the sketch below).
Tibetan Text Generation: Generate contextually appropriate Tibetan texts. Achieved good results.
Tibetan Text Summarization: Generate summaries for Tibetan texts. Achieved good results.
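
The 79.8% TNCC figure above is the authors' reported result. The sketch below only illustrates one rough way to probe a causal language model for news-topic classification by scoring candidate labels; it assumes the same hypothetical checkpoint id as the earlier example and an example label set, and it does not reproduce the actual T-LLaMA evaluation setup, which may have used fine-tuning.

```python
# Rough sketch: prompt-based topic classification with a causal LM.
# Illustrative only; the label set and checkpoint id are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pagewood/T-LLaMA"  # hypothetical id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

labels = ["Politics", "Economics", "Education", "Tourism"]  # example label set

def classify(text: str) -> str:
    """Score each candidate label by the likelihood the model assigns to it."""
    scores = []
    for label in labels:
        prompt = f"Text: {text}\nCategory: {label}"
        ids = tokenizer(prompt, return_tensors="pt").to(model.device)
        with torch.no_grad():
            out = model(**ids, labels=ids["input_ids"])
        scores.append(-out.loss.item())  # lower loss = more likely label
    return labels[scores.index(max(scores))]

print(classify("བོད་ཀྱི་གསར་འགྱུར།"))  # placeholder Tibetan news snippet
```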