# FinBERT Fine-Tuned on Financial Sentiment (Financial PhraseBank + GitHub Dataset)
This is a fine-tuned version of FinBERT for financial sentiment classification, categorizing financial text as negative, neutral, or positive.
## Quick Start

You can use the model via Hugging Face Transformers:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "Driisa/finbert-finetuned-github"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

text = "The company's stock has seen significant growth this quarter."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)

with torch.no_grad():  # inference only, no gradients needed
    outputs = model(**inputs)

predicted_class = outputs.logits.argmax(dim=-1).item()
print(f"Predicted Sentiment: {['Negative', 'Neutral', 'Positive'][predicted_class]}")
```
## Features

- This model is a fine-tuned version of FinBERT (`ProsusAI/finbert`) trained for financial sentiment classification.
- It can classify financial text into three categories: Negative (0), Neutral (1), and Positive (2).
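The index-to-label mapping above determines how raw model outputs become a prediction: the classifier head emits three logits, and the argmax index selects the class. A minimal pure-Python sketch with made-up logits (the label order comes from the list above):

```python
import math

LABELS = ["Negative", "Neutral", "Positive"]  # index order from this model card

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [-1.2, 0.3, 2.1]  # hypothetical raw outputs for one sentence
probs = softmax(logits)
predicted = LABELS[max(range(len(logits)), key=logits.__getitem__)]
print(predicted)  # -> Positive, since index 2 has the largest logit
```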
## Installation

Install the Hugging Face `transformers` library and a backend such as PyTorch, e.g. `pip install transformers torch`.
## Usage Examples

### Basic Usage

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "Driisa/finbert-finetuned-github"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

labels = ["Negative", "Neutral", "Positive"]
text = "The company's stock has seen significant growth this quarter."
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True, max_length=128)

with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to probabilities to also report the model's confidence.
probs = torch.softmax(outputs.logits, dim=-1)[0]
predicted_class = int(probs.argmax())
print(f"Predicted Sentiment: {labels[predicted_class]} ({probs[predicted_class]:.2%})")
```
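For scoring several sentences at once, take the argmax over the last dimension of the batched logits. A minimal sketch, with hypothetical logits standing in for real model outputs so it runs without downloading the model:

```python
import torch

LABELS = ["Negative", "Neutral", "Positive"]

# Hypothetical logits for a batch of two sentences, shape (batch, num_labels).
logits = torch.tensor([[2.0, 0.1, -1.0],
                       [-0.5, 0.2, 1.7]])

probs = torch.softmax(logits, dim=-1)   # per-sentence class probabilities
pred_ids = logits.argmax(dim=-1)        # predicted class index per sentence
preds = [LABELS[i] for i in pred_ids.tolist()]
print(preds)  # ['Negative', 'Positive']
```

With the real model, `logits` would come from `model(**tokenizer(texts, return_tensors="pt", padding=True, truncation=True)).logits`.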
## Documentation

### Dataset Used

This model was trained on:

- Financial PhraseBank: a widely used financial sentiment dataset.
- GitHub-generated sentiment dataset: an additional dataset used to test the model.
### Training Parameters

| Parameter | Value |
|---|---|
| Model Architecture | FinBERT (based on BERT) |
| Batch Size | 8 |
| Learning Rate | 2e-5 |
| Epochs | 3 |
| Optimizer | AdamW |
| Evaluation Metrics | F1-Score, Accuracy |
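The card does not include the training code; as a rough sketch, the hyperparameters above plug into a standard PyTorch AdamW loop. A tiny linear model and random data stand in for FinBERT and the tokenized datasets so the snippet is self-contained:

```python
import torch

# Hyperparameters from the table above.
BATCH_SIZE = 8
LEARNING_RATE = 2e-5
EPOCHS = 3
NUM_LABELS = 3

# Stand-in model and data (hypothetical; replace with FinBERT and real batches).
model = torch.nn.Linear(4, NUM_LABELS)
optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE)
loss_fn = torch.nn.CrossEntropyLoss()

x = torch.randn(BATCH_SIZE, 4)
y = torch.randint(0, NUM_LABELS, (BATCH_SIZE,))

for epoch in range(EPOCHS):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```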
### Model Performance

| Dataset | Accuracy | F1 (Weighted) | Precision | Recall |
|---|---|---|---|---|
| Financial PhraseBank (Train) | 95.21% | 95.23% | 95.32% | 95.21% |
| GitHub Test Set | 64.42% | 64.34% | 70.52% | 64.42% |
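Metrics of this form can be computed with scikit-learn's weighted averaging. A toy example with made-up labels (not the actual evaluation data):

```python
from sklearn.metrics import accuracy_score, f1_score

# Illustrative labels: 0=Negative, 1=Neutral, 2=Positive.
y_true = [0, 1, 2, 2, 1, 0]
y_pred = [0, 1, 2, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="weighted")
print(f"Accuracy: {acc:.2%}, Weighted F1: {f1:.2%}")
```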
## Intended Use

This model is designed for:

- Financial analysts and investors assessing the sentiment of financial text, e.g. reports, news, and stock discussions.
- Financial institutions running NLP-based sentiment analysis for automated trading.
- AI researchers exploring financial NLP models.
## Limitations

- May not generalize well to datasets whose financial language differs substantially from the training data.
- May require further fine-tuning for specific financial domains (e.g. crypto, banking, startups).
## License

This model is licensed under the MIT License.