# GPT2-Horoscopes
A fine-tuned GPT2 model for generating horoscopes based on a specific category.

## Features

- Category-based Generation: generate horoscopes according to different categories such as general, career, love, wellness, and birthday.
- Easy to Use: works directly with the Hugging Face `pipeline` API.
## Installation

The model can be used directly with the Hugging Face `pipeline` API. Install the `transformers` library (for example via pip), then load the tokenizer and model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# AutoModelWithLMHead is deprecated; AutoModelForCausalLM is the current equivalent
tokenizer = AutoTokenizer.from_pretrained("shahp7575/gpt2-horoscopes")
model = AutoModelForCausalLM.from_pretrained("shahp7575/gpt2-horoscopes")
```
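Alternatively, for the `pipeline` route mentioned above, a minimal sketch using the standard `text-generation` pipeline (the prompt and sampling settings here are illustrative):

```python
from transformers import pipeline

# the text-generation pipeline bundles tokenizer and model loading into one call
generator = pipeline("text-generation", model="shahp7575/gpt2-horoscopes")
result = generator("<|category|> love <|horoscope|>", max_length=150, do_sample=True)
print(result[0]["generated_text"])
```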
## Usage Examples

### Basic Usage

The input text format is `<|category|> {category_type} <|horoscope|>`. Supported categories are general, career, love, wellness, and birthday.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shahp7575/gpt2-horoscopes")
model = AutoModelForCausalLM.from_pretrained("shahp7575/gpt2-horoscopes")

# build the prompt in the expected input format
prompt = "<|category|> career <|horoscope|>"
prompt_encoded = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0)

sample_outputs = model.generate(
    prompt_encoded,
    do_sample=True,
    top_k=40,
    max_length=300,
    top_p=0.95,
    temperature=0.95,
    num_return_sequences=1,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token; this avoids a warning
)

print(tokenizer.decode(sample_outputs[0], skip_special_tokens=True))
```
### Advanced Usage

For reference, you can also use this generation script.
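The script itself is not reproduced here, but a small wrapper along these lines captures the same generation logic (the function name and defaults are illustrative assumptions, not taken from the original script):

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("shahp7575/gpt2-horoscopes")
model = AutoModelForCausalLM.from_pretrained("shahp7575/gpt2-horoscopes")

def generate_horoscope(category: str, max_length: int = 300) -> str:
    """Generate one horoscope for a supported category
    (general, career, love, wellness, birthday)."""
    prompt = f"<|category|> {category} <|horoscope|>"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")
    output = model.generate(
        input_ids,
        do_sample=True,
        top_k=40,
        top_p=0.95,
        temperature=0.95,
        max_length=max_length,
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate_horoscope("wellness"))
```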
## Documentation

### Training Data

The dataset was scraped from Horoscopes.com across 5 categories, for a total of roughly 12k horoscopes. You can find the dataset on Kaggle.
### Training Procedure

The model starts from the pre-trained GPT-2 checkpoint and is fine-tuned on the horoscopes dataset covering 5 categories. Since the fine-tuned model is also meant to distinguish between category types, each training example is prefixed with its category using the special token `<|category|>`.
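For illustration only (an assumed formatting helper, not the author's preprocessing code), a raw horoscope would be laid out like this before tokenization:

```python
def format_example(category: str, horoscope: str) -> str:
    # place the category between the same special tokens used at inference time
    return f"<|category|> {category} <|horoscope|> {horoscope}"

print(format_example("career", "A new opportunity is heading your way."))
# -> <|category|> career <|horoscope|> A new opportunity is heading your way.
```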
Training Parameters:

| Parameter       | Value |
|-----------------|-------|
| EPOCHS          | 5     |
| LEARNING RATE   | 5e-4  |
| WARMUP STEPS    | 1e2   |
| EPSILON         | 1e-8  |
| SEQUENCE LENGTH | 300   |
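The training script is not included in the card, but these values map onto a standard GPT-2 fine-tuning setup roughly as follows (a sketch under that assumption; the total step count is a placeholder and the dataloader and training loop are omitted):

```python
import torch
from transformers import AutoModelForCausalLM, get_linear_schedule_with_warmup

model = AutoModelForCausalLM.from_pretrained("gpt2")

EPOCHS = 5
LEARNING_RATE = 5e-4
WARMUP_STEPS = 100       # 1e2
EPSILON = 1e-8           # Adam epsilon
SEQUENCE_LENGTH = 300    # maximum tokens per training example

optimizer = torch.optim.AdamW(model.parameters(), lr=LEARNING_RATE, eps=EPSILON)
# in real training this would be len(train_dataloader) * EPOCHS
total_steps = 1000
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=WARMUP_STEPS, num_training_steps=total_steps
)
```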
### Evaluation Results

Loss: 2.77 (if this is token-level cross-entropy, it corresponds to a perplexity of roughly e^2.77 ≈ 16).
### Limitations

This model is fine-tuned only on horoscopes by category. It does not, and does not attempt to, represent actual horoscopes; it was developed purely for educational and learning purposes.
## Technical Details

The model is a fine-tuned variant of the GPT-2 architecture. Because category information is embedded in the training data, it can generate horoscopes conditioned on the requested category type.