📚 Google's T5 for Closed Book Question Answering
This project uses Google's T5 model for Closed Book Question Answering: pre-trained on multiple datasets and fine-tuned on Trivia QA, it achieves competitive results.
🚀 Quick Start
This model is designed for Closed Book Question Answering. It was pre-trained on C4 using T5's denoising objective, then additionally pre-trained on Wikipedia with REALM's salient span masking objective, and finally fine-tuned on Trivia QA (TQA).
Note: The model was fine-tuned on 100% of the train splits of Trivia QA (TQA) for 10 steps.
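To make the denoising objective mentioned above concrete, the sketch below reproduces T5's span-corruption input/target format in plain Python. This is an illustration, not the actual preprocessing code: span positions are chosen by hand here, whereas the real objective samples them randomly, and `<extra_id_N>` are T5's sentinel tokens.

```python
def span_corrupt(tokens, spans):
    """Build a (corrupted input, target) pair in T5's span-corruption format.

    Each span of tokens is replaced by a sentinel <extra_id_i> in the input;
    the target lists each sentinel followed by the tokens it replaced,
    closed by a final sentinel.
    """
    inp, tgt = [], []
    prev = 0
    for i, (start, end) in enumerate(spans):
        sentinel = f"<extra_id_{i}>"
        inp.extend(tokens[prev:start])  # keep uncorrupted tokens
        inp.append(sentinel)            # replace the span with a sentinel
        tgt.append(sentinel)            # target: sentinel + dropped tokens
        tgt.extend(tokens[start:end])
        prev = end
    inp.extend(tokens[prev:])
    tgt.append(f"<extra_id_{len(spans)}>")  # closing sentinel
    return " ".join(inp), " ".join(tgt)

tokens = "Thank you for inviting me to your party last week".split()
inp, tgt = span_corrupt(tokens, [(2, 4), (8, 9)])
print(inp)  # Thank you <extra_id_0> me to your party <extra_id_1> week
print(tgt)  # <extra_id_0> for inviting <extra_id_1> last <extra_id_2>
```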
Other community checkpoints can be found here.
The related paper is How Much Knowledge Can You Pack Into the Parameters of a Language Model?, and the authors are Adam Roberts, Colin Raffel, Noam Shazeer.
✨ Features
- Multi-stage Pre-training: The model is pre-trained on different datasets with different objectives, enhancing its ability to store and retrieve knowledge.
- Fine-tuning on Trivia QA: Fine-tuning on Trivia QA (TQA) adapts the model to question-answering tasks.
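The salient span masking objective used in the Wikipedia pre-training stage masks "salient" spans such as named entities and dates, forcing the model to recall facts rather than just local syntax. Below is a simplified sketch: a regex for 4-digit years stands in for the named-entity/date tagger that REALM actually uses (an assumption for illustration only), and the output follows T5's sentinel-token format.

```python
import re

def salient_span_mask(text):
    """Mask one salient span (here: the first 4-digit year) in `text`.

    Returns (masked input, target) in T5's sentinel-token format, or None
    if no candidate span is found. A real implementation would use a
    named-entity/date tagger to pick salient spans, not a regex.
    """
    m = re.search(r"\b\d{4}\b", text)
    if m is None:
        return None
    masked = text[:m.start()] + "<extra_id_0>" + text[m.end():]
    target = f"<extra_id_0> {m.group()} <extra_id_1>"
    return masked, target

masked, target = salient_span_mask("Franklin D. Roosevelt was born in 1882.")
print(masked)  # Franklin D. Roosevelt was born in <extra_id_0>.
print(target)  # <extra_id_0> 1882 <extra_id_1>
```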
📚 Documentation
Results on Trivia QA - Test Set

| Id | Link | Exact Match |
|---|---|---|
| T5-11b | https://huggingface.co/google/t5-large-ssm-tqa | 60.5 |
| T5-xxl | https://huggingface.co/google/t5-xxl-ssm-tqa | 61.6 |
Abstract
It has recently been observed that neural language models trained on unstructured text can implicitly store and retrieve knowledge using natural language queries. In this short paper, we measure the practical utility of this approach by fine-tuning pre-trained models to answer questions without access to any external context or knowledge. We show that this approach scales with model size and performs competitively with open-domain systems that explicitly retrieve answers from an external knowledge source when answering questions. To facilitate reproducibility and future work, we release our code and trained models at https://goo.gle/t5-cbqa.

💻 Usage Examples
Basic Usage
The model can be used as follows for closed book question answering:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Load the fine-tuned checkpoint and its tokenizer
t5_qa_model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-xxl-ssm-tqa")
t5_tok = AutoTokenizer.from_pretrained("google/t5-xxl-ssm-tqa")

# Encode the question; no supporting context is provided (closed book)
input_ids = t5_tok("When was Franklin D. Roosevelt born?", return_tensors="pt").input_ids

# Generate and decode the answer
gen_output = t5_qa_model.generate(input_ids)[0]
print(t5_tok.decode(gen_output, skip_special_tokens=True))
```
📄 License
This project is licensed under the Apache-2.0 license.