🚀 NLP Taxonomy Classifier
This is a fine-tuned BERT-based language model for classifying NLP-related research papers according to the concepts included in the NLP taxonomy. It is a multi-label classifier that can predict concepts from all levels of the NLP taxonomy. If the model identifies a lower-level concept, it has also learned to predict that concept's hypernyms in the NLP taxonomy. The model was fine-tuned on a weakly labeled dataset of 178,521 scientific papers from the ACL Anthology, the arXiv cs.CL category, and Scopus. Before fine-tuning, the model was initialized with the weights of allenai/specter2_base.
📄 Paper: Exploring the Landscape of Natural Language Processing Research (RANLP 2023)
💻 GitHub: https://github.com/sebischair/Exploring-NLP-Research
💾 Data: https://huggingface.co/datasets/TimSchopf/nlp_taxonomy_data
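The weakly labeled training data linked above is hosted on the Hugging Face Hub and can be pulled with the datasets library. The following is a minimal loading sketch; the split names and column layout are not documented here, so inspect the returned object or the dataset card for the actual schema.

from datasets import load_dataset

# load the weakly labeled corpus from the Hub
dataset = load_dataset("TimSchopf/nlp_taxonomy_data")
print(dataset)                          # shows available splits and column names
first_split = list(dataset.keys())[0]
print(dataset[first_split][0])          # inspect one weakly labeled example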
🚀 Quick Start
Model Information
| Attribute | Details |
| --- | --- |
| Model type | Fine-tuned BERT-based language model for multi-label classification of NLP-related research papers |
| Training data | Weakly labeled dataset of 178,521 scientific papers from the ACL Anthology, the arXiv cs.CL category, and Scopus |
| Evaluation metrics | F1, Precision, Recall |
| Library | transformers |
| Task | Text classification |
| Tags | science, scholarly |
| Dataset | TimSchopf/nlp_taxonomy_data |
Model Usage
Load the model directly for prediction
from typing import List
import torch
from torch.utils.data import DataLoader
from transformers import BertForSequenceClassification, AutoTokenizer
# load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
model = BertForSequenceClassification.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
# prepare data
papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
{'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearmans correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.'}]
# concatenate title and abstract with [SEP] token
title_abs = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
def predict_nlp_concepts(model, tokenizer, texts: List[str], batch_size=8, device=None, shuffle_data=False):
    """
    helper function for predicting NLP concepts of scientific papers
    """
    # tokenize texts
    def tokenize_dataset(sentences, tokenizer):
        sentences_num = len(sentences)
        dataset = []
        for i in range(sentences_num):
            sentence = tokenizer(sentences[i], padding="max_length", truncation=True, return_tensors='pt', max_length=model.config.max_position_embeddings)
            # get input_ids, token_type_ids, and attention_mask
            input_ids = sentence['input_ids'][0]
            token_type_ids = sentence['token_type_ids'][0]
            attention_mask = sentence['attention_mask'][0]
            dataset.append((input_ids, token_type_ids, attention_mask))
        return dataset

    tokenized_data = tokenize_dataset(sentences=texts, tokenizer=tokenizer)

    # get the individual input formats for the model
    input_ids = torch.stack([x[0] for x in tokenized_data])
    token_type_ids = torch.stack([x[1] for x in tokenized_data])
    attention_mask_ids = torch.stack([x[2].to(torch.float) for x in tokenized_data])

    # convert input to DataLoader
    input_dataset = []
    for i in range(len(input_ids)):
        data = {}
        data['input_ids'] = input_ids[i]
        data['token_type_ids'] = token_type_ids[i]
        data['attention_mask'] = attention_mask_ids[i]
        input_dataset.append(data)
    dataloader = DataLoader(input_dataset, shuffle=shuffle_data, batch_size=batch_size)

    # predict data
    if not device:
        device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
    model.to(device)
    model.eval()
    y_pred = torch.tensor([]).to(device)
    for batch in dataloader:
        batch = {k: v.to(device) for k, v in batch.items()}
        input_ids_batch = batch['input_ids']
        token_type_ids_batch = batch['token_type_ids']
        mask_ids_batch = batch['attention_mask']
        with torch.no_grad():
            outputs = model(input_ids=input_ids_batch, attention_mask=mask_ids_batch, token_type_ids=token_type_ids_batch)
            logits = outputs.logits
            # multi-label prediction: sigmoid followed by a 0.5 threshold per class
            predictions = torch.round(torch.sigmoid(logits))
        y_pred = torch.cat([y_pred, predictions])

    # get prediction class names
    prediction_indices_list = []
    for prediction in y_pred:
        # indices of all classes predicted as positive (value 1 after rounding)
        prediction_indices_list.append((prediction == 1).nonzero(as_tuple=True)[0])
    prediction_class_names_list = []
    for prediction_indices in prediction_indices_list:
        prediction_class_names = []
        for prediction_idx in prediction_indices:
            prediction_class_names.append(model.config.id2label[int(prediction_idx)])
        prediction_class_names_list.append(prediction_class_names)
    return y_pred, prediction_class_names_list
# predict concepts of NLP papers
numerical_predictions, class_name_predictions = predict_nlp_concepts(model=model, tokenizer=tokenizer, texts=title_abs)
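predict_nlp_concepts returns a (num_papers, num_labels) tensor of 0/1 predictions together with the corresponding class names. A small follow-up for inspecting the output might look like the sketch below; the exact concepts printed depend on the model, and because the classifier also predicts hypernyms, each paper typically receives both specific concepts and their parent concepts.

# print the predicted taxonomy concepts per paper
for paper, labels in zip(papers, class_name_predictions):
    print(paper['title'])
    print('  predicted concepts:', labels)

# numerical_predictions is a (num_papers, num_labels) tensor of 0s and 1s
print(numerical_predictions.shape, '- total number of taxonomy labels:', len(model.config.id2label))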
Use a pipeline for prediction
from transformers import pipeline
pipe = pipeline("text-classification", model="TimSchopf/nlp_taxonomy_classifier")
# prepare data
papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
{'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearmans correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.'}]
# concatenate title and abstract with [SEP] token
title_abs = [d['title'] + pipe.tokenizer.sep_token + (d.get('abstract') or '') for d in papers]  # use the pipeline's own tokenizer for the [SEP] token
pipe(title_abs, return_all_scores=True)
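pipe(..., return_all_scores=True) returns one list of {'label': ..., 'score': ...} dicts per input (in newer transformers versions, top_k=None is the non-deprecated equivalent). Since this is a multi-label model, a natural post-processing step is to keep every label whose score clears a threshold, mirroring the 0.5 cut-off implied by torch.round(torch.sigmoid(logits)) in the snippet above. A minimal sketch, with the threshold and the explicit sigmoid activation as assumptions:

# request sigmoid-activated scores for all labels and filter by a threshold
scores = pipe(title_abs, return_all_scores=True, function_to_apply="sigmoid")

THRESHOLD = 0.5  # assumption: same cut-off as rounding the sigmoid output
for paper, label_scores in zip(papers, scores):
    predicted = [d['label'] for d in label_scores if d['score'] >= THRESHOLD]
    print(paper['title'])
    print('  predicted concepts:', predicted)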
📚 Documentation
NLP Taxonomy
A machine-readable version of the NLP taxonomy is available as an OWL file in our repository: https://github.com/sebischair/Exploring-NLP-Research/blob/main/NLP-Taxonomy.owl
For our work on NLP-KG, we extended this taxonomy into a large hierarchy of fields of study in NLP research and provide it as a machine-readable OWL file: https://github.com/NLP-Knowledge-Graph/NLP-KG-WebApp
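To work with the taxonomy itself, the OWL file can be parsed with a standard RDF library. The sketch below uses rdflib and assumes the file is serialized as RDF/XML and encodes the hierarchy via rdfs:subClassOf with rdfs:label annotations; verify these assumptions against the file. The local file name stands for a downloaded copy of NLP-Taxonomy.owl.

from rdflib import Graph
from rdflib.namespace import RDFS

# assumption: NLP-Taxonomy.owl has been downloaded from the repository linked above
g = Graph()
g.parse("NLP-Taxonomy.owl", format="xml")  # assumption: RDF/XML serialization

def label_of(node):
    # prefer the rdfs:label of a concept, fall back to its URI
    return next(g.objects(node, RDFS.label), node)

# print child -> parent (hypernym) pairs, assuming rdfs:subClassOf encodes the hierarchy
for child, _, parent in g.triples((None, RDFS.subClassOf, None)):
    print(f"{label_of(child)} -> {label_of(parent)}")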
Evaluation Results
The model was evaluated on a manually annotated test set of 828 EMNLP 2022 papers. The results below are averaged over three different training runs for classifying papers according to the NLP taxonomy. Since the class distribution is highly imbalanced, we report micro scores (a minimal sketch of how such micro-averaged scores can be computed follows the list below).
- F1: 93.21
- Recall: 93.99
- Precision: 92.46
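For reference, micro-averaged precision, recall, and F1 over multi-label predictions can be computed with scikit-learn as sketched below. The arrays are toy multi-hot matrices standing in for the manually annotated test labels and the thresholded model output; they are not the actual evaluation data.

import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# toy example: 3 papers x 4 taxonomy labels (multi-hot indicator matrices)
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])

# micro averaging pools all individual label decisions before scoring,
# which is appropriate for a highly imbalanced class distribution
print('Precision:', precision_score(y_true, y_pred, average='micro'))
print('Recall:   ', recall_score(y_true, y_pred, average='micro'))
print('F1:       ', f1_score(y_true, y_pred, average='micro'))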
📄 License
This project is licensed under the BSD 3-Clause License.
🔖 Citation Information
When citing our work in academic papers and theses, please use the following BibTeX entry:
@inproceedings{schopf-etal-2023-exploring,
title = "Exploring the Landscape of Natural Language Processing Research",
author = "Schopf, Tim and
Arabi, Karim and
Matthes, Florian",
editor = "Mitkov, Ruslan and
Angelova, Galia",
booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.ranlp-1.111",
pages = "1034--1045",
abstract = "As an efficient approach to understand, generate, and process natural language texts, research in natural language processing (NLP) has exhibited a rapid spread and wide adoption in recent years. Given the increasing research work in this area, several NLP-related approaches have been surveyed in the research community. However, a comprehensive study that categorizes established topics, identifies trends, and outlines areas for future research remains absent. Contributing to closing this gap, we have systematically classified and analyzed research papers in the ACL Anthology. As a result, we present a structured overview of the research landscape, provide a taxonomy of fields of study in NLP, analyze recent developments in NLP, summarize our findings, and highlight directions for future work.",
}








