T0* Model: Zero-Shot Task Generalization in NLP
T0* is a series of encoder-decoder models that demonstrate zero-shot task generalization on English natural language prompts. It outperforms GPT-3 on many tasks while being 16x smaller. These models are trained on a large set of different tasks specified in natural language prompts, enabling them to handle unseen tasks described in natural language.
🚀 Quick Start
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. For example, you can ask "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", and the model will hopefully generate "Positive".
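As a quick, hedged sketch, the `pipeline` helper in `transformers` wraps tokenization, generation, and decoding in one call (this assumes `transformers` and PyTorch are installed; full step-by-step examples follow below):

```python
from transformers import pipeline

# text2text-generation works with encoder-decoder checkpoints such as T0pp
generator = pipeline("text2text-generation", model="bigscience/T0pp")
print(generator("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy"))
```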
✨ Features
- Zero-shot generalization: T0* shows excellent zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks with a much smaller size.
- Multitask learning: trained on a large set of different tasks specified in natural language prompts, covering various NLP tasks.
📦 Installation
The original README provides no specific installation instructions. To use the model in Python, however, you need the `transformers` library (and PyTorch), which you can install with `pip`:

```bash
pip install transformers
```
💻 Usage Examples
Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the T0pp tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

# Pose the task as a natural language prompt and generate a prediction
inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
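Note that `tokenizer.decode(outputs[0])` keeps special tokens such as `<pad>` and `</s>` in the output; pass `skip_special_tokens=True` to strip them.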
Advanced Usage
If you want to use another checkpoint, replace the path passed to `AutoTokenizer` and `AutoModelForSeq2SeqLM`. For example, to use the `T0_3B` checkpoint:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the smaller 3B-parameter variant instead of the default 11B T0pp
tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

inputs = tokenizer.encode("Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy", return_tensors="pt")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```
📚 Documentation
Model Description
T0* shows zero-shot task generalization on English natural language prompts, outperforming GPT-3 on many tasks, while being 16x smaller. It is a series of encoder-decoder models trained on a large set of different tasks specified in natural language prompts. We convert numerous English supervised datasets into prompts, each with multiple templates using varying formulations. These prompted datasets make it possible to benchmark a model's ability to perform completely unseen tasks specified in natural language. To obtain T0*, we fine-tune a pretrained language model on this multitask mixture covering many different NLP tasks.
Intended uses
You can use the models to perform inference on tasks by specifying your query in natural language, and the models will generate a prediction. A few other examples that you can try:
- A is the son of B's uncle. What is the family relationship between A and B?
- Question A: How is air traffic controlled?
  Question B: How do you become an air traffic controller?
  Pick one: these questions are duplicates or not duplicates.
- Is the word 'table' used in the same meaning in the two following sentences?
  Sentence A: you can leave the books on the table over there.
  Sentence B: the tables in this book are very hard to read.
- Max: Know any good websites to buy clothes from?
  Payton: Sure :) LINK 1, LINK 2, LINK 3
  Max: That's a lot of them!
  Payton: Yeah, but they have different things so I usually buy things from 2 or 3 of them.
  Max: I'll check them out. Thanks.
  Who or what are Payton and Max referring to when they say 'them'?
- On a shelf, there are five books: a gray book, a red book, a purple book, a blue book, and a black book.
  The red book is to the right of the gray book. The black book is to the left of the blue book. The blue book is to the left of the gray book. The purple book is the second from the right.
  Which book is the leftmost book?
- Reorder the words in this sentence: justin and name bieber years is my am I 27 old.
Training procedure
T0* models are based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model pre-trained with a masked language modeling-style objective on C4. We use the publicly available [language model-adapted T5 checkpoints](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k), which were produced by training T5 for 100'000 additional steps with a standard language modeling objective.
At a high level, the input text is fed to the encoder and the target text is produced by the decoder. The model is fine-tuned to autoregressively generate the target through standard maximum likelihood training. It is never trained to generate the input.
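As a minimal, hedged sketch of how this objective maps onto the `transformers` API: the prompt goes to the encoder as input, the target is supplied as `labels`, and the returned loss is the standard maximum-likelihood (cross-entropy) loss on the target tokens. The prompt/target pair below is illustrative, not taken from the actual training mixture.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0_3B")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0_3B")

# The input text is fed to the encoder... (illustrative example, not from P3)
inputs = tokenizer("Is this review positive or negative? Review: great skillet", return_tensors="pt")
# ...and the target text is what the decoder learns to generate
labels = tokenizer("Positive", return_tensors="pt").input_ids

# Standard maximum-likelihood (cross-entropy) loss over the target tokens;
# the input itself is never a generation target.
loss = model(**inputs, labels=labels).loss
loss.backward()  # one fine-tuning step would follow from here
```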
Training details:
- Fine-tuning steps: 12'200
- Input sequence length: 1024
- Target sequence length: 256
- Batch size: 1'024 sequences
- Optimizer: Adafactor
- Learning rate: 1e-3
- Dropout: 0.1
- Sampling strategy: proportional to the number of examples in each dataset (we treated any dataset with over 500'000 examples as having 500'000 / `num_templates` examples); see the sketch after this list
- Example grouping: we use packing to combine multiple training examples into a single sequence to reach the maximum sequence length
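To make the sampling rule concrete, here is a minimal sketch of the arithmetic; the dataset names, sizes, and template counts below are hypothetical, chosen only to illustrate the cap.

```python
# Any dataset with more than 500,000 examples is treated as having
# 500,000 / num_templates examples; sampling weights are then
# proportional to these effective sizes.

def effective_size(num_examples: int, num_templates: int, cap: int = 500_000) -> float:
    if num_examples > cap:
        return cap / num_templates
    return float(num_examples)

# Hypothetical mixture: (num_examples, num_templates) per dataset
mixture = {
    "small_qa_dataset": (25_000, 10),
    "huge_summarization_set": (3_000_000, 8),
}
sizes = {name: effective_size(n, t) for name, (n, t) in mixture.items()}
total = sum(sizes.values())
weights = {name: s / total for name, s in sizes.items()}
print(weights)  # {'small_qa_dataset': ~0.286, 'huge_summarization_set': ~0.714}
```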
Training data
We trained different variants of T0 with different mixtures of datasets.
|Model|Training datasets|
|---|---|
|T0|- Multiple-Choice QA: CommonsenseQA, DREAM, QUAIL, QuaRTz, Social IQA, WiQA, Cosmos, QASC, Quarel, SciQ, Wiki Hop<br>- Extractive QA: Adversarial QA, Quoref, DuoRC, ROPES<br>- Closed-Book QA: Hotpot QA*, Wiki QA<br>- Structure-To-Text: Common Gen, Wiki Bio<br>- Sentiment: Amazon, App Reviews, IMDB, Rotten Tomatoes, Yelp<br>- Summarization: CNN Daily Mail, Gigaword, MultiNews, SamSum, XSum<br>- Topic Classification: AG News, DBPedia, TREC<br>- Paraphrase Identification: MRPC, PAWS, QQP|
|T0p|Same as T0 with additional datasets from GPT-3's evaluation suite:<br>- Multiple-Choice QA: ARC, OpenBook QA, PiQA, RACE, HellaSwag<br>- Extractive QA: SQuAD v2<br>- Closed-Book QA: Trivia QA, Web Questions|
|T0pp|Same as T0p with a few additional datasets from SuperGLUE (excluding NLI sets):<br>- BoolQ<br>- COPA<br>- MultiRC<br>- ReCoRD<br>- WiC<br>- WSC|
|T0_single_prompt|Same as T0 but only one prompt per training dataset|
|T0_original_task_only|Same as T0 but only original task templates|
|T0_3B|Same as T0 but starting from a T5-LM XL (3B parameters) pre-trained model|
For reproducibility, we release the data we used for training (and evaluation) in the [P3 dataset](https://huggingface.co/datasets/bigscience/P3). Prompt examples can be found on the dataset page.
*: We recast Hotpot QA as closed-book QA due to long input sequence length.
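To inspect the P3 data mentioned above, a hedged sketch using the `datasets` library follows; the config name and field names here are assumptions based on the public P3 layout, so browse the dataset page for the actual list of configs.

```python
from datasets import load_dataset

# Load one prompted dataset from the P3 collection.
# The config name below is illustrative; check the dataset page for real ones.
ds = load_dataset("bigscience/P3", "imdb_Reviewer_Sentiment_Feeling")
print(ds["train"][0]["inputs_pretokenized"])   # the prompted input text
print(ds["train"][0]["targets_pretokenized"])  # the expected target text
```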
Evaluation data
We evaluate our models on a suite of held-out tasks:
|Task category|Datasets|
|---|---|
|Natural language inference|ANLI, CB, RTE|
|Coreference resolution|WSC, Winogrande|
|Word sense disambiguation|WiC|
|Sentence completion|COPA, HellaSwag, Story Cloze|
We also evaluate T0, T0p and T0pp on a subset of the [BIG-bench benchmark](https://github.com/google/BIG-bench):
- Code description task
- Conceptual combinations
- Hindu knowledge json
- Known unknowns
- Language identification
- Logic grid puzzle task
- Logical deduction
- Common misconceptions
- Movie dialog same or different
- Novel concepts
- Strategyqa
- Formal fallacies syllogisms negation
- VitaminC
- Winowhy multiple choice
🔧 Technical Details
- Model architecture: based on [T5](https://huggingface.co/google/t5-v1_1-large), a Transformer-based encoder-decoder language model.
- Training objective: fine-tuned through standard maximum likelihood training to autoregressively generate the target text.
- Data handling: different variants of T0 are trained on different mixtures of datasets, and we use packing to combine multiple training examples into a single sequence.
📄 License
The model is licensed under Apache 2.0.
⚠️ Important Note
The model was trained with bf16 activations. As such, we highly discourage running inference with fp16; fp32 or bf16 should be preferred.
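As a minimal sketch (assuming a `transformers` release that supports the `torch_dtype` argument to `from_pretrained`), the checkpoint can be loaded directly in bf16:

```python
import torch
from transformers import AutoModelForSeq2SeqLM

# Load the weights in bfloat16, matching the precision used during training.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "bigscience/T0pp",
    torch_dtype=torch.bfloat16,
)
```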
⚠️ Important Note
- The models of the T0* series are quite large (3B or 11B parameters). Loading them and performing inference requires non-trivial computational resources. When using multiple GPUs, it is possible to use `.parallelize()`; see the sketch after this list.
- We have observed that different prompts can lead to varying performance. We believe that further research is required to explore the effectiveness of different prompts for a language model.
- Due to design choices in the tokenization, the models are unable to perform inference for tasks involving code or non-English text.
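A minimal sketch of multi-GPU use via `.parallelize()`, an older model-parallel API available on T5-family models in `transformers` (it spreads the model's layers across all visible GPUs by default, and inputs must live on the device holding the first layers):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("bigscience/T0pp")
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/T0pp")

# Split the model's layers across all visible GPUs (model parallelism)
model.parallelize()

# Place the inputs on the first device
inputs = tokenizer.encode(
    "Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy",
    return_tensors="pt",
).to("cuda:0")
outputs = model.generate(inputs)
print(tokenizer.decode(outputs[0]))
```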
⚠️ Important Note
Although we deliberately excluded datasets with potentially harmful content from the fine-tuning, the trained models are not bias-free. Based on a few experiments, T0++ can generate answers that could be categorized as conspiracist, biased, offensive or over-emphasizing sexual topics.

