Emobloom-7b
Emobloom-7b is part of the EmoLLMs project, the first open-source large language model series for comprehensive affective analysis with instruction-following capability.
🚀 Quick Start
Emobloom-7b, part of the EmoLLMs project, is an open-source large language model finetuned from the bloomz-7b1-mt foundation model on the full AAID instruction tuning data. It can handle affective classification tasks (e.g., sentiment polarity or categorical emotions) and regression tasks (e.g., sentiment strength or emotion intensity).
✨ Features
Ethical Consideration
Recent studies have shown that LLMs may introduce potential biases, such as gender gaps. Incorrect predictions and over-generalization also indicate the potential risks of current LLMs. Many challenges therefore remain in applying this model to real-world affective analysis systems.
Models in EmoLLMs
The EmoLLMs series includes:
- Emollama-7b: finetuned from LLaMA2-7B.
- Emollama-chat-7b: finetuned from LLaMA2-chat-7B.
- Emollama-chat-13b: finetuned from LLaMA2-chat-13B.
- Emoopt-13b: finetuned from OPT-13B.
- Emobloom-7b: finetuned from Bloomz-7b1-mt.
- Emot5-large: finetuned from T5-large.
- Emobart-large: finetuned from bart-large.
All models are trained on the full AAID instruction tuning data.
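The decoder-only variants load the same way as Emobloom-7b (shown under Installation). The snippet below is a minimal sketch; it assumes the sibling checkpoints follow the same `lzw1008/<name>` naming as Emobloom-7b, which this card does not state.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed hub ID, by analogy with 'lzw1008/Emobloom-7b'.
name = 'lzw1008/Emollama-chat-7b'
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map='auto')

# Note: the encoder-decoder variants (Emot5-large, Emobart-large) are
# seq2seq models and would load with AutoModelForSeq2SeqLM instead.
```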
📦 Installation
You can use the Emobloom-7b model in your Python project with the Hugging Face Transformers library. Here is a simple example of how to load the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained('lzw1008/Emobloom-7b')
# device_map='auto' requires the accelerate package to be installed.
model = AutoModelForCausalLM.from_pretrained('lzw1008/Emobloom-7b', device_map='auto')
```
In this example, `AutoTokenizer` is used to load the tokenizer, and `AutoModelForCausalLM` is used to load the model. The `device_map='auto'` argument automatically places the model on a GPU if one is available.
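With the tokenizer and model loaded, inference follows the standard Transformers generate flow. The snippet below is a minimal sketch rather than the authors' prescribed pipeline; the prompt reuses the emotion-intensity example from the Usage Examples section, and `max_new_tokens=16` is an arbitrary choice.

```python
import torch

# Prompt follows the "Human: ... Assistant:" format shown in Usage Examples.
prompt = (
    "Human:\n"
    "Task: Assign a numerical value between 0 (least E) and 1 (most E) to "
    "represent the intensity of emotion E expressed in the text.\n"
    "Text: @CScheiwiller can't stop smiling 😆😆😆\n"
    "Emotion: joy\n"
    "Intensity Score:\n"
    "Assistant:\n"
)

inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=16)

# Decode only the newly generated tokens, skipping the echoed prompt.
completion = tokenizer.decode(
    output_ids[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True
)
print(completion)  # expected to look like '>>0.896'
```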
💻 Usage Examples
Basic Usage
The following are some prompt examples:
Emotion intensity
```
Human:
Task: Assign a numerical value between 0 (least E) and 1 (most E) to represent the intensity of emotion E expressed in the text.
Text: @CScheiwiller can't stop smiling 😆😆😆
Emotion: joy
Intensity Score:
Assistant:
>>0.896
```
Sentiment strength
```
Human:
Task: Evaluate the valence intensity of the writer's mental state based on the text, assigning it a real-valued score from 0 (most negative) to 1 (most positive).
Text: Happy Birthday shorty. Stay fine stay breezy stay wavy @daviistuart 😘
Intensity Score:
Assistant:
>>0.879
```
Sentiment classification
```
Human:
Task: Categorize the text into an ordinal class that best characterizes the writer's mental state, considering various degrees of positive and negative sentiment intensity. 3: very positive mental state can be inferred. 2: moderately positive mental state can be inferred. 1: slightly positive mental state can be inferred. 0: neutral or mixed mental state can be inferred. -1: slightly negative mental state can be inferred. -2: moderately negative mental state can be inferred. -3: very negative mental state can be inferred.
Text: Beyoncé resentment gets me in my feelings every time. 😩
Intensity Class:
Assistant:
>>-3: very negative emotional state can be inferred
```
Emotion classification
```
Human:
Task: Categorize the text's emotional tone as either 'neutral or no emotion' or identify the presence of one or more of the given emotions (anger, anticipation, disgust, fear, joy, love, optimism, pessimism, sadness, surprise, trust).
Text: Whatever you decide to do make sure it makes you #happy.
This text contains emotions:
Assistant:
>>joy, love, optimism
```
The task description can be adjusted according to the specific task.
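Because the model writes its answer after a `>>` marker, a small amount of post-processing recovers the usable value. The helper below is a hypothetical sketch (`parse_response` and its regex are illustrative, not part of the EmoLLMs release):

```python
import re

def parse_response(completion: str):
    """Extract the answer following the '>>' marker from a model completion.

    Hypothetical helper: returns a float for regression-style answers
    and the raw string for classification-style answers.
    """
    match = re.search(r'>>\s*(.+)', completion)
    if match is None:
        return None
    answer = match.group(1).strip()
    try:
        return float(answer)  # regression tasks, e.g. '0.896'
    except ValueError:
        return answer  # classification tasks, e.g. 'joy, love, optimism'

print(parse_response('>>0.896'))                # 0.896
print(parse_response('>>joy, love, optimism'))  # joy, love, optimism
```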
📄 License
The EmoLLMs series is licensed under the MIT License. For more details, please see the MIT license file.
📚 Documentation
Citation
If you use the EmoLLMs series in your work, please cite our paper:
```bibtex
@article{liu2024emollms,
  title={EmoLLMs: A Series of Emotional Large Language Models and Annotation Tools for Comprehensive Affective Analysis},
  author={Liu, Zhiwei and Yang, Kailai and Zhang, Tianlin and Xie, Qianqian and Yu, Zeping and Ananiadou, Sophia},
  journal={arXiv preprint arXiv:2401.08508},
  year={2024}
}
```