🚀 NLP Taxonomy Classifier
This model is a BERT-based language model fine-tuned to classify NLP-related research papers according to the concepts of the NLP taxonomy. It is a multi-label classifier that can predict concepts from all levels of the NLP taxonomy. If the model identifies a lower-level concept, it is trained to predict both that concept and its parent concepts in the NLP taxonomy. The model was fine-tuned on a weakly labeled dataset of 178,521 scientific papers collected from the ACL Anthology, the arXiv cs.CL category, and Scopus. Before fine-tuning, the model was initialized with the weights of allenai/specter2_base.
📄 Paper: Exploring the Landscape of Natural Language Processing Research (RANLP 2023)
💻 GitHub: https://github.com/sebischair/Exploring-NLP-Research
💾 Data: https://huggingface.co/datasets/TimSchopf/nlp_taxonomy_data
🚀 Quick Start
This model is a BERT-based language model fine-tuned to classify NLP-related research papers. The examples below show how to run classification with it.
✨ Key Features
- It is a multi-label classifier that can predict concepts from all levels of the NLP taxonomy (the full label space can be inspected with the snippet below).
- If the model identifies a lower-level concept, it predicts both that concept and its parent concepts in the NLP taxonomy.
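The full set of predictable concepts is stored in the model configuration. A quick way to inspect it, as a minimal sketch using the standard `transformers` config API:

```python
from transformers import AutoConfig

# The id2label mapping enumerates every NLP taxonomy concept the classifier can predict.
config = AutoConfig.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
print(config.num_labels)                    # number of taxonomy concepts
print(list(config.id2label.values())[:10])  # first few concept names
```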
📦 Installation
The examples below require PyTorch and the Hugging Face `transformers` library, which can be installed with `pip install torch transformers`.
💻 Usage Examples
Basic Usage
Load the model and tokenizer directly and run predictions:
```python
from typing import List

import torch
from torch.utils.data import DataLoader
from transformers import BertForSequenceClassification, AutoTokenizer

# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
model = BertForSequenceClassification.from_pretrained('TimSchopf/nlp_taxonomy_classifier')

# prepare data
papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
          {'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearmans correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.'}]

# concatenate title and abstract with the [SEP] token
title_abs = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]


def predict_nlp_concepts(model, tokenizer, texts: List[str], batch_size=8, device=None, shuffle_data=False):
    """
    Helper function for predicting the NLP taxonomy concepts of scientific papers.
    Returns the binary label matrix and the decoded concept names for each text.
    """
    # tokenize texts, padding/truncating them to the model's maximum input length
    def tokenize_dataset(sentences, tokenizer):
        dataset = []
        for text in sentences:
            encoding = tokenizer(text, padding="max_length", truncation=True, return_tensors='pt',
                                 max_length=model.config.max_position_embeddings)
            # get input_ids, token_type_ids, and attention_mask
            input_ids = encoding['input_ids'][0]
            token_type_ids = encoding['token_type_ids'][0]
            attention_mask = encoding['attention_mask'][0]
            dataset.append((input_ids, token_type_ids, attention_mask))
        return dataset

    tokenized_data = tokenize_dataset(sentences=texts, tokenizer=tokenizer)

    # stack the per-text tensors into the individual input formats for the model
    input_ids = torch.stack([x[0] for x in tokenized_data])
    token_type_ids = torch.stack([x[1] for x in tokenized_data])
    attention_mask_ids = torch.stack([x[2].to(torch.float) for x in tokenized_data])

    # wrap the inputs in a DataLoader for batched inference
    input_dataset = []
    for i in range(len(input_ids)):
        input_dataset.append({'input_ids': input_ids[i],
                              'token_type_ids': token_type_ids[i],
                              'attention_mask': attention_mask_ids[i]})
    dataloader = DataLoader(input_dataset, shuffle=shuffle_data, batch_size=batch_size)

    # run inference, on GPU if available
    if not device:
        device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    model.to(device)
    model.eval()
    y_pred = torch.tensor([]).to(device)
    for batch in dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        with torch.no_grad():
            outputs = model(input_ids=batch['input_ids'],
                            attention_mask=batch['attention_mask'],
                            token_type_ids=batch['token_type_ids'])
        # multi-label prediction: per-label sigmoid, thresholded at 0.5 via rounding
        logits = outputs.logits
        predictions = torch.round(torch.sigmoid(logits))
        y_pred = torch.cat([y_pred, predictions])

    # map the indices of the predicted labels to taxonomy concept names
    prediction_indices_list = []
    for prediction in y_pred:
        prediction_indices_list.append((prediction == torch.max(prediction)).nonzero(as_tuple=True)[0])
    prediction_class_names_list = []
    for prediction_indices in prediction_indices_list:
        prediction_class_names = [model.config.id2label[int(idx)] for idx in prediction_indices]
        prediction_class_names_list.append(prediction_class_names)
    return y_pred, prediction_class_names_list


# predict concepts of NLP papers
numerical_predictions, class_name_predictions = predict_nlp_concepts(model=model, tokenizer=tokenizer, texts=title_abs)
```
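The helper returns both the binarized label matrix and the decoded concept names. For example, to print the predicted taxonomy concepts for each paper:

```python
# Print each paper's title together with its predicted taxonomy concepts.
for paper, concepts in zip(papers, class_name_predictions):
    print(paper['title'])
    print('  ->', concepts)
```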
Advanced Usage
Predictions can also be run with a Hugging Face `pipeline`:
```python
from transformers import pipeline

pipe = pipeline("text-classification", model="TimSchopf/nlp_taxonomy_classifier")

# prepare data
papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
          {'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearmans correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.'}]

# concatenate title and abstract with the [SEP] token of the pipeline's tokenizer
title_abs = [d['title'] + pipe.tokenizer.sep_token + (d.get('abstract') or '') for d in papers]

# return scores for all taxonomy concepts
# (on recent transformers versions, use top_k=None instead of the deprecated return_all_scores=True)
pipe(title_abs, return_all_scores=True)
```
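The pipeline returns a score for every taxonomy concept. A minimal sketch of post-processing those scores into multi-label predictions follows; the 0.5 threshold mirrors the rounding in the basic-usage helper and is our choice, and `function_to_apply='sigmoid'` forces per-label sigmoid scoring:

```python
# Keep only concepts whose sigmoid score exceeds 0.5.
results = pipe(title_abs, top_k=None, function_to_apply='sigmoid')
for paper, scores in zip(papers, results):
    predicted = [s['label'] for s in scores if s['score'] > 0.5]
    print(paper['title'], '->', predicted)
```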
📚 Documentation
NLP Taxonomy
A machine-readable version of the NLP taxonomy is available as an OWL file in our code repository: https://github.com/sebischair/Exploring-NLP-Research/blob/main/NLP-Taxonomy.owl
The follow-up work at https://aclanthology.org/2024.acl-demos.13 extends this taxonomy into a large hierarchy of NLP fields of study and publishes it in machine-readable form as an OWL file at: https://github.com/NLP-Knowledge-Graph/NLP-KG-WebApp
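As a hedged sketch, the OWL file can be loaded programmatically with `rdflib` (our choice of library, not prescribed by the authors; the raw-file URL is inferred from the repository path above, and the RDF/XML serialization is an assumption):

```python
from rdflib import Graph

# Parse the machine-readable NLP taxonomy (OWL file, assumed to be RDF/XML).
g = Graph()
g.parse('https://raw.githubusercontent.com/sebischair/Exploring-NLP-Research/main/NLP-Taxonomy.owl',
        format='xml')
print(len(g), 'triples loaded')
```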
Evaluation Results
The model was evaluated on a manually labeled test set of 828 EMNLP 2022 papers. The figures below are the evaluation results for classifying papers according to the NLP taxonomy, averaged over three different training runs. Since the class distribution is highly imbalanced, micro scores are reported.
- F1: 93.21
- Recall: 93.99
- Precision: 92.46
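For reference, micro-averaged precision, recall, and F1 over multi-label indicator matrices can be computed as follows (a toy sketch with made-up arrays, not the authors' evaluation code):

```python
import numpy as np
from sklearn.metrics import precision_recall_fscore_support

# Toy example: 3 papers x 4 taxonomy concepts, as binary indicator matrices.
y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 1, 0], [0, 1, 1, 0], [1, 0, 0, 1]])
p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average='micro')
print(f'P={p:.4f} R={r:.4f} F1={f1:.4f}')
```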
🔧 Technical Details
The classifier is a BERT-based sequence classification model, initialized from the allenai/specter2_base weights and fine-tuned as a multi-label classifier on the weakly labeled dataset of 178,521 papers described above. Inputs are paper titles and abstracts concatenated with the [SEP] token and truncated to the model's maximum input length (model.config.max_position_embeddings).
📄 License
BSD 3-Clause License
Citation Information
If you cite our work in academic papers or theses, please use the following BibTeX entry.
```bibtex
@inproceedings{schopf-etal-2023-exploring,
title = "Exploring the Landscape of Natural Language Processing Research",
author = "Schopf, Tim and
Arabi, Karim and
Matthes, Florian",
editor = "Mitkov, Ruslan and
Angelova, Galia",
booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.ranlp-1.111",
pages = "1034--1045",
abstract = "As an efficient approach to understand, generate, and process natural language texts, research in natural language processing (NLP) has exhibited a rapid spread and wide adoption in recent years. Given the increasing research work in this area, several NLP-related approaches have been surveyed in the research community. However, a comprehensive study that categorizes established topics, identifies trends, and outlines areas for future research remains absent. Contributing to closing this gap, we have systematically classified and analyzed research papers in the ACL Anthology. As a result, we present a structured overview of the research landscape, provide a taxonomy of fields of study in NLP, analyze recent developments in NLP, summarize our findings, and highlight directions for future work.",
}
```