# HuggingArtists Model - Little Big

This is a HuggingArtists model trained on the lyrics of Little Big. It generates text in the artist's style, offering a unique way to explore the charm of Little Big's music.
## Quick Start

You can use this model directly with a pipeline for text generation:

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/little-big')
generator("I am", num_return_sequences=5)
```
Or load the tokenizer and model directly with the Transformers library (`AutoModelWithLMHead` is deprecated, so `AutoModelForCausalLM` is used instead):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingartists/little-big")
model = AutoModelForCausalLM.from_pretrained("huggingartists/little-big")
```
## Features

- Artist-Specific Generation: Generate text in the style of Little Big, based on their lyrics.
- Easy to Use: Works directly with common NLP libraries such as `transformers` and `datasets`.
## Installation

The model requires the `transformers` and `datasets` libraries, which can be installed with `pip`:

```shell
pip install transformers datasets
```
## Usage Examples

### Basic Usage

```python
from transformers import pipeline

generator = pipeline('text-generation', model='huggingartists/little-big')
generator("I am", num_return_sequences=5)
```
### Advanced Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# AutoModelWithLMHead is deprecated; AutoModelForCausalLM is the current API.
tokenizer = AutoTokenizer.from_pretrained("huggingartists/little-big")
model = AutoModelForCausalLM.from_pretrained("huggingartists/little-big")

input_text = "I am"
input_ids = tokenizer.encode(input_text, return_tensors='pt')
output = model.generate(
    input_ids,
    max_length=100,
    num_beams=5,
    no_repeat_ngram_size=2,
    early_stopping=True,
)
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
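Beam search favours high-probability continuations and can sound repetitive; for lyric-like variety, sampling often works better. A minimal sketch (the sampling parameter values are illustrative, not taken from the original card):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("huggingartists/little-big")
model = AutoModelForCausalLM.from_pretrained("huggingartists/little-big")

input_ids = tokenizer.encode("I am", return_tensors="pt")

# Nucleus (top-p) sampling: draw each token from the smallest set whose
# cumulative probability exceeds top_p, which keeps the output varied.
output = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.95,
    top_k=50,
    temperature=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Because sampling is stochastic, each call produces a different continuation of the prompt.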
## Documentation

### How does it work?

To understand how the model was developed, check the W&B report.
### Training data

The model was trained on lyrics from Little Big. The dataset is available here and can be loaded with:

```python
from datasets import load_dataset

dataset = load_dataset("huggingartists/little-big")
```

Explore the data, which is tracked with W&B artifacts at every step of the pipeline.
### Training procedure

The model is based on a pre-trained GPT-2 that was fine-tuned on Little Big's lyrics. Hyperparameters and metrics are recorded in the W&B training run for full transparency and reproducibility. At the end of training, the final model is logged and versioned.
## Technical Details

The model is fine-tuned from a pre-trained GPT-2 model using Little Big's lyrics as training data. Fine-tuning adjusts the model's parameters so that it better captures the language patterns and stylistic traits of the lyrics, producing more relevant and higher-quality generated text.
## License

No license information is provided.
## Limitations and bias

The model has the same limitations and biases as GPT-2. In addition, the lyrics in the training data further shape the text the model generates.
## About

Built by Aleksey Korshuk

For more details, visit the project repository.
