BART-base SQuAD QG
This is a question generation model fine-tuned on the BART-base architecture, specifically designed to generate relevant questions from given text and answers.
Downloads: 57
Release Time: 3/2/2022
Model Overview
This model is fine-tuned on facebook/bart-base for question generation tasks on the SQuAD dataset, capable of generating relevant questions based on highlighted answers in the text.
Model Features
High-Quality Question Generation
Fine-tuned on the SQuAD dataset, capable of generating questions highly relevant to the context and answers.
Multi-Metric Evaluation
Provides evaluation results for multiple metrics including BLEU, METEOR, and ROUGE-L.
Out-of-Domain Adaptability
Evaluated on multiple out-of-domain datasets, demonstrating strong generalization capabilities.
Model Capabilities
Text Generation
Question Generation
Natural Language Processing
Use Cases
Education
Automated Reading Comprehension Question Generation
Automatically generates reading comprehension questions based on textbook content.
Generated questions are highly relevant to the original text.
Content Creation
Article-Specific Question Generation
Generates discussion questions for news articles.
Helps readers better understand and reflect on the article content.
lmqg/bart-base-squad-qg
Model Card
This model is a fine-tuned version of facebook/bart-base for the question generation task on lmqg/qg_squad (dataset_name: default) via lmqg. It aims to generate high-quality questions from given text, which is useful in a range of natural language processing applications.
Quick Start
Overview
Property | Details |
---|---|
Language model | facebook/bart-base |
Language | en |
Training Data | lmqg/qg_squad (default) |
Online Demo | https://autoqg.net/ |
Repository | https://github.com/asahi417/lm-question-generation |
Paper | https://arxiv.org/abs/2210.03992 |
Usage Examples
Basic Usage
# With `lmqg`
from lmqg import TransformersQG
# initialize the model
model = TransformersQG(language="en", model="lmqg/bart-base-squad-qg")
# generate a question for the highlighted answer in the given context
questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner")
Advanced Usage
# With `transformers`
from transformers import pipeline
pipe = pipeline("text2text-generation", "lmqg/bart-base-squad-qg")
# the answer span is wrapped in the `<hl>` highlight token
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
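When calling the model through `transformers` directly, the answer span must be wrapped in the `<hl>` token shown above. A small helper can build that input from a plain context/answer pair; the function name `highlight_answer` is ours, not part of the `lmqg` or `transformers` APIs:

```python
HIGHLIGHT_TOKEN = "<hl>"  # highlight token used in this model's input format

def highlight_answer(context: str, answer: str) -> str:
    """Wrap the first occurrence of `answer` in the highlight token."""
    idx = context.find(answer)
    if idx == -1:
        raise ValueError(f"answer {answer!r} not found in context")
    end = idx + len(answer)
    return f"{context[:idx]}{HIGHLIGHT_TOKEN} {answer} {HIGHLIGHT_TOKEN}{context[end:]}"
```

The resulting string can be passed straight to the `text2text-generation` pipeline above. Note this only marks the first occurrence of the answer; if the answer appears several times in the context, a character offset would be needed to disambiguate.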
Documentation
Metrics
The model is evaluated using the following metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
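Of the metrics above, ROUGE-L is the most self-contained to illustrate: it scores the longest common subsequence (LCS) between a generated question and the reference. The reported numbers come from the lmqg evaluation pipeline, not this snippet; the sketch below is only a minimal pure-Python illustration using whitespace tokenization and a plain F1:

```python
def lcs_length(a, b):
    # longest common subsequence length via dynamic programming
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            dp[i + 1][j + 1] = dp[i][j] + 1 if x == y else max(dp[i][j + 1], dp[i + 1][j])
    return dp[len(a)][len(b)]

def rouge_l_f1(candidate: str, reference: str) -> float:
    # ROUGE-L over whitespace tokens: precision and recall of the LCS,
    # combined as a plain F1 (official ROUGE uses a configurable beta)
    cand, ref = candidate.split(), reference.split()
    lcs = lcs_length(cand, ref)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(cand), lcs / len(ref)
    return 2 * precision * recall / (precision + recall)
```

An identical candidate and reference score 1.0, and any shared-but-reordered tokens score strictly between 0 and 1, which is why ROUGE-L rewards questions that preserve the reference's word order.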
Widget Examples
- Question Generation Example 1:
  - Text: "<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records."
- Question Generation Example 2:
  - Text: "Beyonce further expanded her acting career, starring as blues singer <hl> Etta James <hl> in the 2008 musical biopic, Cadillac Records."
- Question Generation Example 3:
  - Text: "Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, <hl> Cadillac Records <hl>."
Model Index
The model lmqg/bart-base-squad-qg has the following evaluation results:
On lmqg/qg_squad
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 24.68 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 52.66 |
METEOR (Question Generation) | meteor_question_generation | 26.05 |
BERTScore (Question Generation) | bertscore_question_generation | 90.87 |
MoverScore (Question Generation) | moverscore_question_generation | 64.47 |
QAAlignedF1Score - BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | qa_aligned_f1_score_bertscore_question_answer_generation_with_gold_answer_gold_answer | 95.49 |
QAAlignedRecall - BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | qa_aligned_recall_bertscore_question_answer_generation_with_gold_answer_gold_answer | 95.44 |
QAAlignedPrecision - BERTScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | qa_aligned_precision_bertscore_question_answer_generation_with_gold_answer_gold_answer | 95.55 |
QAAlignedF1Score - MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | qa_aligned_f1_score_moverscore_question_answer_generation_with_gold_answer_gold_answer | 70.38 |
QAAlignedRecall - MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | qa_aligned_recall_moverscore_question_answer_generation_with_gold_answer_gold_answer | 70.1 |
QAAlignedPrecision - MoverScore (Question & Answer Generation (with Gold Answer)) [Gold Answer] | qa_aligned_precision_moverscore_question_answer_generation_with_gold_answer_gold_answer | 70.67 |
QAAlignedF1Score - BERTScore (Question & Answer Generation) [Gold Answer] | qa_aligned_f1_score_bertscore_question_answer_generation_gold_answer | 92.84 |
QAAlignedRecall - BERTScore (Question & Answer Generation) [Gold Answer] | qa_aligned_recall_bertscore_question_answer_generation_gold_answer | 92.95 |
QAAlignedPrecision - BERTScore (Question & Answer Generation) [Gold Answer] | qa_aligned_precision_bertscore_question_answer_generation_gold_answer | 92.75 |
QAAlignedF1Score - MoverScore (Question & Answer Generation) [Gold Answer] | qa_aligned_f1_score_moverscore_question_answer_generation_gold_answer | 64.24 |
QAAlignedRecall - MoverScore (Question & Answer Generation) [Gold Answer] | qa_aligned_recall_moverscore_question_answer_generation_gold_answer | 64.11 |
QAAlignedPrecision - MoverScore (Question & Answer Generation) [Gold Answer] | qa_aligned_precision_moverscore_question_answer_generation_gold_answer | 64.46 |
On lmqg/qg_squadshifts (amazon)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 0.05824165264328302 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.23816054441894524 |
METEOR (Question Generation) | meteor_question_generation | 0.2126541577267873 |
BERTScore (Question Generation) | bertscore_question_generation | 0.9049284884636415 |
MoverScore (Question Generation) | moverscore_question_generation | 0.6026811246610306 |
On lmqg/qg_squadshifts (new_wiki)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 0.10732253983426589 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.2843539251435107 |
METEOR (Question Generation) | meteor_question_generation | 0.26233713078026283 |
BERTScore (Question Generation) | bertscore_question_generation | 0.9307303692241476 |
MoverScore (Question Generation) | moverscore_question_generation | 0.656720781293701 |
On lmqg/qg_squadshifts (nyt)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 0.07645313983751752 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.2390325229516282 |
METEOR (Question Generation) | meteor_question_generation | 0.244330483594333 |
BERTScore (Question Generation) | bertscore_question_generation | 0.9235989114144583 |
MoverScore (Question Generation) | moverscore_question_generation | 0.6368628469746445 |
On lmqg/qg_squadshifts (reddit)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 0.053789810023704955 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.2141155595451475 |
METEOR (Question Generation) | meteor_question_generation | 0.20395821936787215 |
BERTScore (Question Generation) | bertscore_question_generation | 0.905714302466044 |
MoverScore (Question Generation) | moverscore_question_generation | 0.6013927660089013 |
On lmqg/qg_subjqa (books)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 1.4952813458186383e-10 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.10769136267285535 |
METEOR (Question Generation) | meteor_question_generation | 0.11520101781020654 |
BERTScore (Question Generation) | bertscore_question_generation | 0.8774975922095214 |
MoverScore (Question Generation) | moverscore_question_generation | 0.5520873074919223 |
On lmqg/qg_subjqa (electronics)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 1.3766381900873328e-06 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.14287460464803423 |
METEOR (Question Generation) | meteor_question_generation | 0.14866637711177003 |
BERTScore (Question Generation) | bertscore_question_generation | 0.8759880110997111 |
MoverScore (Question Generation) | moverscore_question_generation | 0.5607199201429516 |
On lmqg/qg_subjqa (grocery)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 0.006003840641121225 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.1248840598199836 |
METEOR (Question Generation) | meteor_question_generation | 0.1553374628831024 |
BERTScore (Question Generation) | bertscore_question_generation | 0.8737966828346252 |
MoverScore (Question Generation) | moverscore_question_generation | 0.5662545638649026 |
On lmqg/qg_subjqa (movies)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 0.0108258720771249 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.1389815289507374 |
METEOR (Question Generation) | meteor_question_generation | 0.12855849168399078 |
BERTScore (Question Generation) | bertscore_question_generation | 0.8773110466344016 |
MoverScore (Question Generation) | moverscore_question_generation | 0.5555164603510797 |
On lmqg/qg_subjqa (restaurants)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 1.7873892359263582e-10 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.12160976589996819 |
METEOR (Question Generation) | meteor_question_generation | 0.1146979295288459 |
BERTScore (Question Generation) | bertscore_question_generation | 0.8771339668070569 |
MoverScore (Question Generation) | moverscore_question_generation | 0.5490739019998478 |
On lmqg/qg_subjqa (tripadvisor)
Metric Name | Type | Value |
---|---|---|
BLEU4 (Question Generation) | bleu4_question_generation | 0.010174680918435602 |
ROUGE-L (Question Generation) | rouge_l_question_generation | 0.1341425139885307 |
METEOR (Question Generation) | meteor_question_generation | 0.1391725168440533 |
BERTScore (Question Generation) | bertscore_question_generation | 0.8877592491739579 |
MoverScore (Question Generation) | moverscore_question_generation | 0.5590591813016728 |
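Note that the in-domain lmqg/qg_squad table reports scores as percentages (e.g. BLEU4 24.68), while the out-of-domain tables report raw fractions (e.g. BLEU4 0.0582 on the amazon split). To compare them on one scale, the fractions can be converted to percentages; the helper name `to_percent` here is ours, purely for illustration:

```python
def to_percent(score: float, ndigits: int = 2) -> float:
    """Convert a fractional metric score in [0, 1] to a rounded percentage."""
    return round(score * 100, ndigits)

# e.g. the amazon BLEU4 of 0.05824165264328302 becomes 5.82,
# directly comparable with the in-domain BLEU4 of 24.68
```

On that shared scale, the drop from in-domain SQuAD to the out-of-domain SQuADShifts and SubjQA splits is easy to read off, which is the point of reporting both.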
License
This model is licensed under cc-by-4.0.