gpt-neo-1.3B-emailgen
This model is a fine-tuned version of EleutherAI/gpt-neo-1.3B on the postbot/multi-emails-100k dataset. It is designed for text generation, especially email generation, helping users quickly draft various types of emails.
Quick Start
You can use the widgets below to test the model's email generation capabilities:
- Email to Prof:
- Input: 'Good Morning Professor Beans,
Hope you are doing well. I just wanted to reach out and ask if differential calculus
will be on the exam'
- Newsletter:
- Input: 'Hey ,
Thank you for signing up for my weekly newsletter. Before we get started, you'll
have to confirm your email address.'
- Office Hours:
- Input: 'Hi ,
I hope this email finds you well. I wanted to reach out and ask about office hours'
- Festival:
- Input: 'Greetings ,
I hope you had a splendid evening at the Company sausage eating festival. I am
reaching out because'
- Event:
- Input: 'Good Morning Harold,
I was wondering when the next'
- URGENT:
- Input: 'URGENT - I need the TPS reports'
- Emails that Find You:
- Input: 'Hi Archibald,
I hope this email finds you extremely well.'
- Checking In:
- Input: 'Hello there.
I just wanted to reach out and check in to'
- Work Well:
- Input: 'Hello ,
I hope this email finds you well. I wanted to reach out and see if you've enjoyed
your time with us'
- Catch Up:
- Input: 'Hi ,
I hope this email finds you well. I wanted to reach out and see if we could catch
up'
- Grocery:
- Input: 'I'm and I just moved into the area and wanted to reach out and get
some details on where I could get groceries and'
Features
- Text Generation: Capable of generating various types of text, especially emails.
- Fine-Tuned: Based on the EleutherAI/gpt-neo-1.3B model, fine-tuned on email datasets (postbot/multi-emails-100k, aeslc) for better performance in email generation.
Installation
No installation steps are provided in the original document.
Usage Examples
No code examples are provided in the original document.
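As a minimal sketch (not from the original card), the model could be run with the transformers `text-generation` pipeline using the decoding parameters listed in the model information table below. The Hub id `postbot/gpt-neo-1.3B-emailgen` is an assumption inferred from the dataset namespace; the card does not state one.

```python
# Decoding parameters taken from the card's model-information table.
GENERATION_KWARGS = {
    "min_length": 32,
    "max_length": 128,
    "no_repeat_ngram_size": 2,
    "do_sample": True,
    "temperature": 0.4,
    "top_k": 30,
    "top_p": 0.9,
    "repetition_penalty": 3.5,
    "length_penalty": 0.9,
}

MODEL_ID = "postbot/gpt-neo-1.3B-emailgen"  # assumed Hub id; not stated in the card


def generate_email(prompt: str) -> str:
    """Continue an email prompt with the fine-tuned model.

    The transformers import and pipeline construction are kept inside the
    function because loading a 1.3B-parameter checkpoint downloads several
    GB of weights.
    """
    from transformers import pipeline  # heavy dependency, imported lazily

    generator = pipeline("text-generation", model=MODEL_ID)
    return generator(prompt, **GENERATION_KWARGS)[0]["generated_text"]


prompt = (
    "Good Morning Professor Beans,\n"
    "Hope you are doing well. I just wanted to reach out and ask if "
    "differential calculus will be on the exam"
)
# generate_email(prompt)  # uncomment to run; requires downloading the model
```

Since sampling is enabled (`do_sample=True`), outputs will vary between calls even for the same prompt.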
Documentation
Model description
This model is a fine-tuned version of EleutherAI/gpt-neo-1.3B on the postbot/multi-emails-100k dataset. It achieves a loss of 1.6930 on the evaluation set.
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 2
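The reported total_train_batch_size of 128 follows from the other values. As a sketch (not the actual training code, which the card omits), the hyperparameters and the batch-size arithmetic look like:

```python
# Not from the training script (which the card does not include): a plain
# restatement of the listed hyperparameters, plus the arithmetic relating
# the per-device batch size to the reported total_train_batch_size.
hyperparams = {
    "learning_rate": 1e-4,
    "train_batch_size": 4,               # per device
    "eval_batch_size": 4,
    "seed": 42,
    "gradient_accumulation_steps": 32,
    "lr_scheduler_type": "cosine",
    "lr_scheduler_warmup_ratio": 0.02,
    "num_epochs": 2,
}


def total_train_batch_size(per_device: int, accum_steps: int, n_devices: int = 1) -> int:
    """Effective batch size: per-device batch * accumulation steps * devices."""
    return per_device * accum_steps * n_devices


# 4 * 32 * 1 = 128, matching the reported total_train_batch_size of 128
# (consistent with gradient accumulation on a single device).
```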
Training results
| Training Loss | Epoch | Step | Validation Loss |
|---------------|-------|------|-----------------|
| 1.8669        | 1.0   | 789  | 1.7866          |
| 1.4049        | 2.0   | 1578 | 1.6930          |
Framework versions
- Transformers 4.22.2
- Pytorch 1.10.0+cu113
- Tokenizers 0.12.1
Detailed results can be found here
| Metric | Value |
|--------|-------|
| Avg. | 33.47 |
| AI2 Reasoning Challenge (25-Shot) | 29.95 |
| HellaSwag (10-Shot) | 47.95 |
| MMLU (5-Shot) | 24.11 |
| TruthfulQA (0-shot) | 42.55 |
| Winogrande (5-shot) | 56.27 |
| GSM8k (5-shot) | 0.00 |
Technical Details
No technical details are provided in the original document.
License
This model is licensed under the Apache-2.0 license.
Model Information Table
| Property | Details |
|----------|---------|
| Model Type | Fine-tuned version of EleutherAI/gpt-neo-1.3B |
| Training Data | postbot/multi-emails-100k, aeslc |
| Parameters | min_length: 32, max_length: 128, no_repeat_ngram_size: 2, do_sample: true, temperature: 0.4, top_k: 30, top_p: 0.9, repetition_penalty: 3.5, length_penalty: 0.9 |