🚀 Transformers for Sentiment Analysis
This model is built with the `transformers` library for text2text-generation, focusing on sentiment analysis tasks. It is fine-tuned from IDEA-CCNL/Randeng-T5-784M-MultiTask-Chinese on multiple Chinese and English sentiment analysis datasets.
📦 Installation
No specific installation steps are provided in the original document, so this section is skipped.
💻 Usage Examples
Basic Usage
The output format of the model is as follows:

```
target1 | opinion1 | aspect1 | polarity1 & target2 | opinion2 | aspect2 | polarity2 ...
```
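Because tuples are separated by ` & ` and fields by ` | `, the raw output string can be split into structured tuples with plain string operations. A minimal parsing sketch (the helper name `parse_quads` is illustrative, not part of the library):

```python
def parse_quads(output_str):
    """Split a model output like
    'target1 | opinion1 | aspect1 | polarity1 & target2 | ...'
    into a list of field tuples."""
    quads = []
    for quad in output_str.split(" & "):
        # Each tuple's fields are separated by ' | '
        fields = tuple(field.strip() for field in quad.split(" | "))
        quads.append(fields)
    return quads

example = "food | good | food#taste | pos & service | bad | service#general | neg"
print(parse_quads(example))
# [('food', 'good', 'food#taste', 'pos'), ('service', 'bad', 'service#general', 'neg')]
```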
You can use the `yuyijiong/quad_match_score` evaluation metric to evaluate the model:
```python
import evaluate

module = evaluate.load("yuyijiong/quad_match_score")
predictions = ["food | good | food#taste | pos"]
references = ["food | good | food#taste | pos & service | bad | service#general | neg"]
result = module.compute(predictions=predictions, references=references)
print(result)
```
Advanced Usage
The model supports the following sentiment analysis tasks:
- quadruples (target | opinion | aspect | polarity)
- pairs (target | opinion)
- triples (target | opinion | aspect)
- triples (target | opinion | polarity)
- triples (target | aspect | polarity)
- pairs (aspect | polarity)
- pairs (opinion | polarity)
- single (polarity)
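Each task is selected by prefixing the input text with the corresponding task description, as in the Q/A examples below. A small sketch for assembling such prompts (the helper name `build_prompt` and its exact wording are illustrative, modeled on the examples in this document):

```python
def build_prompt(task, text, condition=None):
    """Compose a prompt in the format the model expects:
    'Sentiment <task> extraction task (<condition>): [<text>]'"""
    cond = f" ({condition})" if condition else ""
    return f"Sentiment {task} extraction task{cond}: [{text}]"

prompt = build_prompt(
    "quadruples (target | opinion | aspect | polarity)",
    "The product is big and tastes good.",
    condition="Keep opinions short",
)
print(prompt)
# Sentiment quadruples (target | opinion | aspect | polarity) extraction task (Keep opinions short): [The product is big and tastes good.]
```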
You can add additional conditions to the prompt to control how answers are generated, for example:

Answer style control: you can specify whether extracted opinions should be a whole sentence or reduced to a few words.
- (Keep opinions short)
- (Opinions can be longer)

Aspect-specific sentiment analysis: you can restrict sentiment analysis to specified aspects.
- (Aspect options: product/logistics/merchant/platform)

The sentiment target can be `null`, indicating that it is not explicitly given in the text. Adding "fill in null" to the prompt lets the model automatically guess the `null` target.
Here is an example of using the model:

```python
import torch
from transformers import T5Tokenizer, AutoModelForSeq2SeqLM, GenerationConfig

tokenizer = T5Tokenizer.from_pretrained("yuyijiong/Randeng-T5-large-sentiment-analysis-Chinese")
model = AutoModelForSeq2SeqLM.from_pretrained("yuyijiong/Randeng-T5-large-sentiment-analysis-Chinese", device_map="auto")
generation_config = GenerationConfig.from_pretrained("yuyijiong/Randeng-T5-large-sentiment-analysis-Chinese")

text = 'Sentiment quadruples (target | opinion | aspect | polarity) extraction task (Opinions can be longer): [The product is big and tastes good, but some of them are bad or have rotten holes deliberately sealed with mud. This is not good.]'

# Tokenize and move the input to the same device the model was loaded on
input_ids = tokenizer(text, return_tensors="pt", padding=True)['input_ids'].to(model.device)

with torch.no_grad():
    output = model.generate(input_ids=input_ids, generation_config=generation_config)
output_str = tokenizer.batch_decode(output, skip_special_tokens=True)
print(output_str)
```
Here are some usage examples:
Q: Sentiment quadruples (target | opinion | aspect | polarity) extraction task (Opinions can be longer): [The product is big and tastes good, but some of them are bad or have rotten holes deliberately sealed with mud. This is not good.]
A: product | big | product#size | positive & taste | good | product#taste | positive & null (some) | have rotten holes deliberately sealed with mud. This is not good | product#freshness | negative
Q: Sentiment quadruples (target | opinion | aspect | polarity) extraction task (Opinions can be longer, fill in null): [The product is big and tastes good, but some of them are bad or have rotten holes deliberately sealed with mud. This is not good.]
A: product | big | product#size | positive & taste | good | product#taste | positive & null (peanuts) | have rotten holes deliberately sealed with mud. This is not good | product#freshness | negative
Q: Sentiment quadruples (target | opinion | aspect | polarity) extraction task (Keep opinions short): [The product is big and tastes good, but some of them are bad or have rotten holes deliberately sealed with mud. This is not good.]
A: product | big | product#size | positive & taste | good | product#taste | positive
Q: Sentiment triples (target | opinion | polarity) extraction task (Opinions can be longer, fill in null): [The product is big and tastes good, but some of them are bad or have rotten holes deliberately sealed with mud. This is not good.]
A: product | big | positive & taste | good | positive & null (peanuts) | have rotten holes deliberately sealed with mud. This is not good | negative
Q: Judge the sentiment polarity of the following review: [The product is big and tastes good, but some of them are bad or have rotten holes deliberately sealed with mud. This is not good.]
A: neutral
Q: Sentiment pairs (aspect | polarity) extraction task (Aspect options: price#cost-effectiveness/price#discount/price#level/food#appearance/food#portion/food#taste/food#recommendation): [The product is big and tastes good, but some of them are bad or have rotten holes deliberately sealed with mud. This is not good.]
A: food#portion | positive & food#taste | neutral
Q: Sentiment quadruples (target | opinion | aspect | polarity) extraction task: [The hot dogs are good , yes , but the reason to get over here is the fantastic pork croquette sandwich , perfect on its supermarket squishy bun .]
A: hot dogs | good | food#quality | pos & pork croquette sandwich | fantastic | food#quality | pos & bun | perfect | food#quality | pos
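For a quick sanity check of outputs like those above without loading the `evaluate` metric, you can compute exact-match F1 over tuples directly. This is a simplified stand-in, not the actual `yuyijiong/quad_match_score` metric, whose scoring may credit matches differently:

```python
def quad_f1(prediction, reference):
    """Exact-match F1 between two outputs in the model's
    'quad & quad & ...' string format."""
    pred = {q.strip() for q in prediction.split(" & ")}
    ref = {q.strip() for q in reference.split(" & ")}
    tp = len(pred & ref)  # tuples present in both
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(ref) if ref else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

p = "food | good | food#taste | pos"
r = "food | good | food#taste | pos & service | bad | service#general | neg"
print(quad_f1(p, r))  # 1 match: precision 1.0, recall 0.5 -> F1 ≈ 0.667
```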
📚 Documentation
Datasets
- Yaxin/SemEval2016Task5NLTK
Metrics
- yuyijiong/quad_match_score
📄 License
No license information is provided in the original document, so this section is skipped.