🚀 OmniTab
OmniTab is a table-based question answering model, proposed in the paper OmniTab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering. The original GitHub repository is https://github.com/jzbjyb/OmniTab. It targets table-based QA tasks by pretraining on both natural and synthetic data, enabling effective question answering in few-shot settings.
🚀 Quick Start
neulab/omnitab-large-finetuned-wtq (based on the BART architecture) is initialized from neulab/omnitab-large and fine-tuned on the WikiTableQuestions dataset.
💻 Usage Examples
Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import pandas as pd

tokenizer = AutoTokenizer.from_pretrained("neulab/omnitab-large-finetuned-wtq")
model = AutoModelForSeq2SeqLM.from_pretrained("neulab/omnitab-large-finetuned-wtq")

# Build a small table of Olympic host cities
data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"],
}
table = pd.DataFrame.from_dict(data)

query = "In which year did beijing host the Olympic Games?"
# The tokenizer flattens the table and the question into a single sequence
encoding = tokenizer(table=table, query=query, return_tensors="pt")

# Generate the answer as free-form text
outputs = model.generate(**encoding)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```
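One practical caveat worth noting: TAPEX-style table tokenizers like the one OmniTab reuses generally expect every table cell to be text, while the `year` column above holds integers. Casting the whole DataFrame to strings with `astype(str)` before tokenization is a safe precaution (a minimal sketch using only pandas, no model download required; the exact cell-type requirement is an assumption about the tokenizer, not stated in this card):

```python
import pandas as pd

data = {
    "year": [1896, 1900, 1904, 2004, 2008, 2012],
    "city": ["athens", "paris", "st. louis", "athens", "beijing", "london"],
}
# Cast every cell to str so the table-flattening tokenizer
# receives text rather than raw ints
table = pd.DataFrame.from_dict(data).astype(str)

print(table.dtypes.unique())  # every column is now object (string) dtype
print(table.loc[4, "year"])   # "2008", as a string
```

The resulting `table` can be passed to the tokenizer exactly as in the example above.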
📚 Documentation
Citation
```bibtex
@inproceedings{jiang-etal-2022-omnitab,
    title = "{O}mni{T}ab: Pretraining with Natural and Synthetic Data for Few-shot Table-based Question Answering",
    author = "Jiang, Zhengbao and Mao, Yi and He, Pengcheng and Neubig, Graham and Chen, Weizhu",
    booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
    month = jul,
    year = "2022",
}
```
Model Information

| Attribute | Details |
| --- | --- |
| Model type | Table-based question answering model, based on the BART architecture |
| Training data | WikiTableQuestions dataset |