# Model Card of lmqg/t5-small-squad-qg

This model is a fine-tuned version of t5-small for the question generation task on the lmqg/qg_squad dataset (dataset_name: default) via lmqg. It aims to generate high-quality questions from a given text, which is useful in various natural language processing applications.
## Usage Examples

### Basic Usage
```python
from lmqg import TransformersQG

# Load the model for English question generation
model = TransformersQG(language="en", model="lmqg/t5-small-squad-qg")
questions = model.generate_q(
    list_context="William Turner was an English painter who specialised in watercolour landscapes",
    list_answer="William Turner",
)
```
### With transformers
```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/t5-small-squad-qg")
# The answer span is marked with <hl> highlight tokens in the input
output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
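The `<hl>` highlight tokens mark the answer span inside the context. A minimal helper for constructing that input format (the function name, signature, and defaults below are illustrative, not part of lmqg):

```python
def make_qg_input(context: str, answer: str,
                  prefix: str = "generate question: ",
                  hl_token: str = "<hl>") -> str:
    """Wrap the answer span in highlight tokens, matching the model's input format.

    Assumes the answer appears verbatim in the context; only the first
    occurrence is highlighted.
    """
    start = context.index(answer)
    end = start + len(answer)
    return prefix + context[:start] + f"{hl_token} {answer} {hl_token}" + context[end:]
```

For example, highlighting "Beyonce" in the sentence above produces the same string that is passed to the pipeline.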
## Documentation

### Metrics
The following metrics are used to evaluate the model's performance:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
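The reported scores were produced with lmqg's own evaluation pipeline. Purely as an illustration of what BLEU-4 measures, here is a minimal single-reference, sentence-level sketch (no smoothing, whitespace tokenization; real evaluations use corpus-level scoring):

```python
import math
from collections import Counter


def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]


def bleu4(candidate: str, reference: str) -> float:
    """Sentence-level BLEU-4 with brevity penalty, single reference, no smoothing."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, 5):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped n-gram matches divided by total candidate n-grams
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        precisions.append(overlap / total)
    if min(precisions) == 0:
        return 0.0
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / 4)
    # Brevity penalty punishes candidates shorter than the reference
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean
```

An identical candidate and reference score 1.0; a candidate sharing no n-grams scores 0.0.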
### Widget Examples
- Question Generation Example:
  - text: "generate question: Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
### Model Index
The model lmqg/t5-small-squad-qg has been evaluated on multiple datasets with different tasks and metrics. Key results:

#### On lmqg/qg_squad (default)
| Metric Name | Value |
|---|---|
| BLEU4 (Question Generation) | 24.4 |
| ROUGE-L (Question Generation) | 51.43 |
| METEOR (Question Generation) | 25.84 |
| BERTScore (Question Generation) | 90.2 |
| MoverScore (Question Generation) | 63.89 |
| QAAlignedF1Score-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | 95.14 |
| QAAlignedRecall-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | 95.09 |
| QAAlignedPrecision-BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | 95.19 |
| QAAlignedF1Score-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | 69.79 |
| QAAlignedRecall-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | 69.51 |
| QAAlignedPrecision-MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | 70.09 |
| QAAlignedF1Score-BERTScore (Question & Answer Generation) [Gold Answer] | 92.26 |
| QAAlignedRecall-BERTScore (Question & Answer Generation) [Gold Answer] | 92.48 |
| QAAlignedPrecision-BERTScore (Question & Answer Generation) [Gold Answer] | 92.07 |
| QAAlignedF1Score-MoverScore (Question & Answer Generation) [Gold Answer] | 63.83 |
| QAAlignedRecall-MoverScore (Question & Answer Generation) [Gold Answer] | 63.82 |
| QAAlignedPrecision-MoverScore (Question & Answer Generation) [Gold Answer] | 63.92 |
#### On other datasets
The model has also been evaluated on subsets of lmqg/qg_squadshifts (amazon, new_wiki, nyt, reddit) and lmqg/qg_subjqa (books, electronics, grocery, movies, restaurants, tripadvisor). The detailed metric values for each dataset and subset are provided in the original model card.
## License
This model is released under the cc-by-4.0 license.