🚀 LLaMA-7B for Transformers/HuggingFace
This is a conversion of LLaMA-7B to work with Transformers/HuggingFace. It operates under a special license. Refer to the LICENSE file for detailed information.
🚀 Quick Start
LLaMA-7B has been converted to be compatible with Transformers/HuggingFace. The original document does not provide quick-start code; a minimal loading sketch is shown below.
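The following is a minimal sketch, assuming the converted checkpoint lives in a local directory; the path `./llama-7b-hf` and the example prompt are placeholders, not part of the original card.

```python
# Minimal sketch: load the converted LLaMA-7B weights with Transformers.
# The checkpoint path below is a placeholder for wherever the converted files live.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./llama-7b-hf"  # placeholder path to the converted checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

# Simple greedy generation from a short prompt.
inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```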
✨ Features
- Model Adaptation: Converted to work seamlessly with Transformers/HuggingFace.
- Multiple Sizes: Available in different parameter sizes (7B, 13B, 33B, 65B) for various research needs.
📚 Documentation
Model details
Intended use
Primary intended uses
The primary use of LLaMA is research on large language models, including:
- Exploring potential applications such as question answering, natural language understanding, or reading comprehension.
- Understanding the capabilities and limitations of current language models, and developing techniques to improve them.
- Evaluating and mitigating biases, risks, toxic and harmful content generation, and hallucinations.
Primary intended users
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
Out-of-scope use cases
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
Factors
Relevant factors
One of the most relevant factors for which model performance may vary is which language is used. Although 20 languages were included in the training data, most of the dataset is made of English text, and the model is expected to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and this is expected to be the case for this model.
Evaluation factors
As the model is trained on data from the Web, it is expected to reflect biases from this source. Thus, it was evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socioeconomic status. The toxicity of model generations was also measured, depending on the toxicity of the context used to prompt the model.
Metrics
Model performance measures
The following measures are used to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs.
- Exact match for question answering (a minimal sketch of this metric is shown after this list).
- The toxicity score from Perspective API on RealToxicityPrompts.
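As a concrete illustration, the sketch below shows one common way to compute exact match between a predicted and a gold answer. The normalization steps are a typical QA-scoring convention assumed here, not taken from the LLaMA evaluation code.

```python
# Illustrative exact-match metric (an assumption about typical QA scoring,
# not the exact normalization used in the LLaMA evaluations).
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, reference: str) -> bool:
    return normalize(prediction) == normalize(reference)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # True
```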
Decision thresholds
Not applicable.
Approaches to uncertainty and variability
Due to the high computational requirements of training LLMs, only one model of each size was trained, and thus the variability of pre-training could not be evaluated.
Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
Quantitative analysis
Model Hyperparameters
| LLaMA | Dimension | n heads | n layers | Learning rate | Batch size | n tokens |
|-------|-----------|---------|----------|---------------|------------|----------|
| 7B    | 4096      | 32      | 32       | 3.0E-04       | 4M         | 1T       |
| 13B   | 5120      | 40      | 40       | 3.0E-04       | 4M         | 1T       |
| 33B   | 6656      | 52      | 60       | 1.5E-04       | 4M         | 1.4T     |
| 65B   | 8192      | 64      | 80       | 1.5E-04       | 4M         | 1.4T     |

Table 1 - Summary of LLaMA model hyperparameters
Model Performance on Reasoning tasks
| LLaMA | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
|-------|-------|------|------|-----------|------------|-------|-------|------|------|
| 7B    | 76.5  | 79.8 | 48.9 | 76.1      | 70.1       | 76.7  | 47.6  | 57.2 | 93   |
| 13B   | 78.1  | 80.1 | 50.4 | 79.2      | 73         | 78.1  | 52.7  | 56.4 | 94   |
| 33B   | 83.1  | 82.3 | 50.4 | 82.8      | 76         | 81.4  | 57.8  | 58.6 | 92   |
| 65B   | 85.3  | 82.8 | 52.3 | 84.2      | 77         | 81.5  | 56    | 60.2 | 94   |

Table 2 - Summary of LLaMA model performance on reasoning tasks
Model Bias Summary
| No | Category             | FAIR LLM |
|----|----------------------|----------|
| 1  | Gender               | 70.6     |
| 2  | Religion             | 79       |
| 3  | Race/Color           | 57       |
| 4  | Sexual orientation   | 81       |
| 5  | Age                  | 70.1     |
| 6  | Nationality          | 64.2     |
| 7  | Disability           | 66.7     |
| 8  | Physical appearance  | 77.8     |
| 9  | Socioeconomic status | 71.5     |
|    | LLaMA Average        | 66.6     |

Table 3 - Summary of bias in our model outputs
Ethical considerations
Data
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. The model is expected to exhibit such biases from the training data.
Human life
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
Mitigations
The data from the Web was filtered based on its proximity to Wikipedia text and references. For this, a Kneser-Ney language model and a fastText linear classifier were used; a sketch of this kind of classifier-based filtering is shown below.
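The card does not include the filtering code itself; the following is a minimal sketch of how a fastText-based quality filter could look. The model file `quality_classifier.bin`, the label name, and the threshold are illustrative assumptions, not the original pipeline.

```python
# Illustrative sketch of fastText-based quality filtering (not the original pipeline).
# Assumes a binary fastText classifier trained to recognise Wikipedia-reference-like
# text; the model file, label name, and threshold below are hypothetical.
import fasttext

classifier = fasttext.load_model("quality_classifier.bin")  # hypothetical model file

def keep_document(text: str, threshold: float = 0.5) -> bool:
    """Return True if the classifier scores the text as Wikipedia-like."""
    # fastText predicts on single lines, so newlines are collapsed first.
    labels, probs = classifier.predict(text.replace("\n", " "))
    return labels[0] == "__label__wiki" and probs[0] >= threshold

documents = ["Paris is the capital of France.", "BUY CHEAP PILLS NOW!!!"]
filtered = [doc for doc in documents if keep_document(doc)]
```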
Risks and harms
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. This model is not expected to be an exception in this regard.
Use cases
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
📄 License
The model is under a non-commercial bespoke license. For more details, please see the LICENSE file.