🚀 LLaMA-7B Model for Transformers/HuggingFace
This is a conversion of LLaMA-7B to work seamlessly with Transformers/HuggingFace. The model is distributed under a special non-commercial license; for details, please refer to the LICENSE file.
🚀 Quick Start
This README provides comprehensive details about the LLaMA-7B model, including its development, intended use, evaluation, and ethical considerations.
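To get started, a minimal loading-and-generation sketch using the Transformers API (the model path below is a placeholder for this repository's weights, not a published hub id):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"  # placeholder: point at this repository's weights

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # half precision keeps the 7B model within ~14 GB
    device_map="auto",          # requires the `accelerate` package
)

inputs = tokenizer("The theory of relativity states that", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```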
✨ Features
- Transformer-based Architecture: LLaMA is an auto-regressive language model built on the transformer architecture, available in different sizes (7B, 13B, 33B, and 65B parameters).
- Multilingual Support: Although the training data includes 20 languages, most of it is in English, which may result in better performance for English.
- Research-oriented: Primarily designed for research on large language models, including exploring applications, understanding capabilities and limitations, and evaluating biases.
📚 Documentation
Model Details
Intended Use
Primary Intended Uses
The primary use of LLaMA is research on large language models, including:
- Exploring potential applications such as question answering, natural language understanding, or reading comprehension.
- Understanding the capabilities and limitations of current language models and developing techniques to improve them.
- Evaluating and mitigating biases, risks, toxic and harmful content generations, and hallucinations.
Primary Intended Users
The primary intended users of the model are researchers in natural language processing, machine learning, and artificial intelligence.
Out-of-scope Use Cases
LLaMA is a base, or foundational, model. As such, it should not be used in downstream applications without further risk evaluation and mitigation. In particular, the model has not been trained with human feedback and can generate toxic or offensive content, incorrect information, or generally unhelpful answers.
Factors
Relevant Factors
One of the most relevant factors for which model performance may vary is the language used. Although 20 languages are included in the training data, most of the dataset is English text. Therefore, the model is expected to perform better for English than other languages. Relatedly, previous studies have shown that performance may vary for different dialects, and this is also expected for this model.
Evaluation Factors
Since the model is trained on web data, it is expected to reflect biases from this source. It was therefore evaluated on RAI datasets to measure the biases it exhibits with respect to gender, religion, race, sexual orientation, age, nationality, disability, physical appearance, and socio-economic status. The toxicity of model generations was also measured, as a function of the toxicity of the context used to prompt the model.
Metrics
Model Performance Measures
The following measures are used to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender, and CrowS-Pairs.
- Exact match for question answering (a minimal scoring sketch follows this list).
- The toxicity score from Perspective API on RealToxicityPrompts.
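As a point of reference, exact match is typically computed with SQuAD-style answer normalization; the sketch below illustrates the idea and is not the paper's exact evaluation code:

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, references: list[str]) -> float:
    """1.0 if the normalized prediction equals any normalized reference, else 0.0."""
    return float(any(normalize(prediction) == normalize(ref) for ref in references))

assert exact_match("The Eiffel Tower!", ["eiffel tower"]) == 1.0
```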
Decision Thresholds
Not applicable.
Approaches to Uncertainty and Variability
Due to the high computational requirements of training LLMs, only one model of each size was trained, and thus the variability of pre-training could not be evaluated.
Evaluation Datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
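Most of these benchmarks are available through the Hugging Face datasets library; a brief sketch (the hub id and split below are the commonly used ones and may differ from the paper's exact setup):

```python
from datasets import load_dataset

# "boolq" is the commonly used hub id for the BoolQ benchmark.
boolq = load_dataset("boolq", split="validation")
example = boolq[0]
print(example["question"], "->", example["answer"])
```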
Training Dataset
The model was trained using the following data sources: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
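For illustration only, the reported proportions can be read as a sampling mixture over sources; the sketch below is an assumption about how such a mixture could be drawn from, not the actual training pipeline:

```python
import random

# Reported sampling proportions of the pre-training mixture.
mixture = {
    "CCNet": 0.67, "C4": 0.15, "GitHub": 0.045, "Wikipedia": 0.045,
    "Books": 0.045, "ArXiv": 0.025, "StackExchange": 0.02,
}
assert abs(sum(mixture.values()) - 1.0) < 1e-9  # proportions sum to 100%

def sample_source(rng: random.Random) -> str:
    """Pick a data source according to the mixture weights."""
    return rng.choices(list(mixture), weights=list(mixture.values()), k=1)[0]

rng = random.Random(0)
print([sample_source(rng) for _ in range(5)])
```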
Quantitative Analysis
Hyperparameters for the Model Architecture
| Number of parameters | dimension | n heads | n layers | Learning rate | Batch size | n tokens |
|---|---|---|---|---|---|---|
| 7B  | 4096 | 32 | 32 | 3.0E-04 | 4M | 1T   |
| 13B | 5120 | 40 | 40 | 3.0E-04 | 4M | 1T   |
| 33B | 6656 | 52 | 60 | 1.5E-04 | 4M | 1.4T |
| 65B | 8192 | 64 | 80 | 1.5E-04 | 4M | 1.4T |
Table 1 - Summary of LLaMA model hyperparameters
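These hyperparameters map directly onto a Transformers `LlamaConfig`; a sketch for the 7B row (`intermediate_size` and `vocab_size` are taken from the released checkpoints rather than Table 1, so treat them as assumptions):

```python
from transformers import LlamaConfig

config_7b = LlamaConfig(
    hidden_size=4096,          # "dimension" in Table 1
    num_attention_heads=32,    # "n heads"
    num_hidden_layers=32,      # "n layers"
    intermediate_size=11008,   # FFN width of the released 7B checkpoint (assumption)
    vocab_size=32000,          # SentencePiece vocabulary of the released tokenizer
)
print(config_7b)
```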
Results on Reasoning Tasks
| Number of parameters | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
|---|---|---|---|---|---|---|---|---|---|
| 7B  | 76.5 | 79.8 | 48.9 | 76.1 | 70.1 | 76.7 | 47.6 | 57.2 | 93 |
| 13B | 78.1 | 80.1 | 50.4 | 79.2 | 73   | 78.1 | 52.7 | 56.4 | 94 |
| 33B | 83.1 | 82.3 | 50.4 | 82.8 | 76   | 81.4 | 57.8 | 58.6 | 92 |
| 65B | 85.3 | 82.8 | 52.3 | 84.2 | 77   | 81.5 | 56   | 60.2 | 94 |
Table 2 - Summary of LLaMA model performance on reasoning tasks
Results on Bias
| No | Category | FAIR LLM |
|---|---|---|
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
|   | LLaMA Average | 66.6 |
Table 3 - Summary of bias in the model output
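CrowS-Pairs-style scores such as these are typically derived by checking how often the model assigns a higher likelihood to a stereotyping sentence than to its minimally edited counterpart; a hedged sketch of that comparison (not the paper's evaluation code, and the model path is a placeholder):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/llama-7b-hf"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path, torch_dtype=torch.float16)
model.eval()

@torch.no_grad()
def sentence_log_likelihood(text: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    loss = model(ids, labels=ids).loss  # mean NLL per predicted token
    return -loss.item() * (ids.shape[1] - 1)

def prefers_stereotype(stereo: str, anti_stereo: str) -> bool:
    """True when the model scores the stereotyping sentence higher."""
    return sentence_log_likelihood(stereo) > sentence_log_likelihood(anti_stereo)
```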
Ethical Considerations
Data
The data used to train the model is collected from various sources, mostly from the web. As such, it contains offensive, harmful, and biased content. Therefore, the model is expected to exhibit such biases from the training data.
Human Life
The model is not intended to inform decisions about matters central to human life and should not be used in such a way.
Mitigations
The web data was filtered based on its proximity to Wikipedia text and references. For this, a Kneser-Ney language model and a fastText linear classifier were used.
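As an illustrative sketch of that style of filter (not Meta's actual pipeline; the training file and label scheme are assumptions), a fastText classifier can be trained to separate Wikipedia-referenced text from arbitrary web pages and then used to score candidate documents:

```python
import fasttext

# train.txt holds one document per line, prefixed with __label__wiki or
# __label__web (file name and labels are assumptions for this sketch).
model = fasttext.train_supervised(input="train.txt", lr=0.1, epoch=5, wordNgrams=2)

def keep_page(text: str, threshold: float = 0.5) -> bool:
    """Keep pages the classifier scores as Wikipedia-like."""
    labels, probs = model.predict(text.replace("\n", " "))  # predict needs one line
    return labels[0] == "__label__wiki" and probs[0] >= threshold
```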
Risks and Harms
Risks and harms of large language models include the generation of harmful, offensive, or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. The model is not expected to be an exception in this regard.
Use Cases
LLaMA is a foundational model and should not be used for downstream applications without further investigation and mitigation of risks. These risks and potential fraught use cases include, but are not limited to, the generation of misinformation and the generation of harmful, biased, or offensive content.
📄 License
This model is under a non-commercial bespoke license. Please see the LICENSE file for details.