🚀 SKEP-Roberta
SKEP-Roberta is a pre-trained model for sentiment analysis. It incorporates sentiment knowledge enhanced pre-training, which improves performance on sentiment analysis tasks.
🚀 Quick Start
Installing dependencies

You can install the required libraries with:

```bash
pip install transformers torch
```
Code example
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("Yaxin/roberta-large-ernie2-skep-en")
model = AutoModel.from_pretrained("Yaxin/roberta-large-ernie2-skep-en")
```
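Note that `AutoModel` returns per-token hidden states rather than a single sentence vector. A common way to obtain a fixed-size representation for a downstream sentiment classifier is masked mean pooling over the token vectors. A minimal sketch in plain Python follows; the `hidden_states` and `attention_mask` values are toy illustrations, not actual model outputs:

```python
def mean_pool(hidden_states, attention_mask):
    """Average the token vectors, ignoring positions where the mask is 0."""
    dim = len(hidden_states[0])
    totals = [0.0] * dim
    count = 0
    for vec, keep in zip(hidden_states, attention_mask):
        if keep:
            totals = [t + v for t, v in zip(totals, vec)]
            count += 1
    return [t / count for t in totals]

# Toy example: three "token" vectors, the last one is padding.
hidden_states = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
attention_mask = [1, 1, 0]
print(mean_pool(hidden_states, attention_mask))  # [2.0, 3.0]
```

In practice you would apply the same pooling to the model's `last_hidden_state` tensor and the tokenizer's `attention_mask`, typically with `torch` tensor operations instead of Python lists.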
Advanced usage
```python
import torch
from transformers import RobertaTokenizer, RobertaForMaskedLM

tokenizer = RobertaTokenizer.from_pretrained('Yaxin/roberta-large-ernie2-skep-en')
model = RobertaForMaskedLM.from_pretrained('Yaxin/roberta-large-ernie2-skep-en')
model.eval()

input_tx = "<s> He like play with student, so he became a <mask> after graduation </s>"
tokenized_text = tokenizer.tokenize(input_tx)
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([[0] * len(tokenized_text)])

with torch.no_grad():
    outputs = model(tokens_tensor, token_type_ids=segments_tensors)
    predictions = outputs[0]  # MLM logits, shape (1, seq_len, vocab_size)

# Most likely token at each position, skipping the <s> and </s> specials
predicted_index = [torch.argmax(predictions[0, i]).item() for i in range(len(tokenized_text))]
predicted_token = [tokenizer.convert_ids_to_tokens([predicted_index[i]])[0]
                   for i in range(1, len(tokenized_text) - 1)]
print('Predicted token is:', predicted_token)
```
✨ Key Features
SKEP (Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis) was proposed by Baidu in 2020. It introduces a sentiment knowledge enhanced pre-training method for sentiment analysis: by designing sentiment masking and three sentiment-aware pre-training objectives, it injects multiple types of sentiment knowledge into the pre-trained model.

For more details, see the paper: https://aclanthology.org/2020.acl-main.374.pdf
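To illustrate the core idea of sentiment masking (this is a toy sketch, not the authors' implementation): instead of masking tokens uniformly at random as in standard masked language modeling, SKEP preferentially masks sentiment words drawn from a lexicon. The lexicon and sentence below are illustrative assumptions; SKEP mines its lexicon automatically from unlabeled data.

```python
import random

# Toy sentiment lexicon (illustrative only).
SENTIMENT_WORDS = {"great", "terrible", "love", "hate", "good", "bad"}

def sentiment_mask(tokens, mask_token="<mask>", max_masks=2, seed=0):
    """Prefer sentiment-lexicon words as mask targets, up to max_masks."""
    rng = random.Random(seed)
    candidates = [i for i, t in enumerate(tokens) if t.lower() in SENTIMENT_WORDS]
    rng.shuffle(candidates)
    masked = list(tokens)
    for i in candidates[:max_masks]:
        masked[i] = mask_token
    return masked

tokens = ["the", "movie", "was", "great", "but", "the", "ending", "was", "bad"]
print(sentiment_mask(tokens))
```

The pre-training objectives then ask the model to recover these sentiment words (and related polarity/aspect information), which is what pushes sentiment knowledge into the encoder.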
📦 Model Information
| Property | Details |
|----------|---------|
| Model name | skep-roberta-large |
| Language | English |
| Architecture | Layers: 24, hidden size: 1024, attention heads: 24 |
This released PyTorch model was converted from the officially released PaddlePaddle SKEP model, and a series of experiments were run to verify the accuracy of the conversion.
- Official PaddlePaddle SKEP repositories:
  - https://github.com/PaddlePaddle/PaddleNLP/blob/develop/paddlenlp/transformers/skep
  - https://github.com/baidu/Senta
- PyTorch conversion repository: not yet released
📄 Citation

If you use this model, please cite the following paper:
```bibtex
@article{tian2020skep,
  title={SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis},
  author={Tian, Hao and Gao, Can and Xiao, Xinyan and Liu, Hao and He, Bolei and Wu, Hua and Wang, Haifeng and Wu, Feng},
  journal={arXiv preprint arXiv:2005.05635},
  year={2020}
}
```
Reference: https://github.com/nghuyong/ERNIE-Pytorch