Vortex-3B
Vortex-3B is a 2.78 billion parameter causal language model. It is derived from EleutherAI's Pythia-2.8b and fine-tuned on the Vortex-50k dataset for general-purpose text generation.
Quick Start
Vortex-3B is developed by OEvortex. The quickest way to generate text with it is through the Transformers pipeline API:
```python
from transformers import pipeline

# Load the text-generation pipeline with the Vortex-3B checkpoint from the Hub
pipe = pipeline("text-generation", model="OEvortex/vortex-3b")

# Sample a continuation of the prompt and print it
text = "Once upon a time"
generated_text = pipe(text, max_length=100, do_sample=True)[0]['generated_text']
print(generated_text)
```
Usage Examples
Basic Usage
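The pipeline snippet in the Quick Start section above covers basic generation. For more control over tokenization and decoding, the model can also be loaded directly with AutoModelForCausalLM and AutoTokenizer. The sketch below is illustrative: the decoding parameters (max_new_tokens, temperature, top_p) are example values, not settings prescribed by the model card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the tokenizer and the ~2.78B-parameter weights from the Hub
tokenizer = AutoTokenizer.from_pretrained("OEvortex/vortex-3b")
model = AutoModelForCausalLM.from_pretrained("OEvortex/vortex-3b")

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a continuation; these decoding parameters are illustrative, not from the model card
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```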
Documentation
Model Information
| Property | Details |
|----------|---------|
| Model Type | Causal Language Model |
| Training Data | OEvortex/Vortex-50k |
| Pipeline Tag | Text Generation |
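The same metadata can be checked programmatically without downloading the weights; a minimal sketch, assuming the configuration is hosted alongside the checkpoint (since the model derives from Pythia-2.8b, a GPT-NeoX architecture is expected):

```python
from transformers import AutoConfig

# Fetch only the model configuration, not the 2.78B-parameter weights
config = AutoConfig.from_pretrained("OEvortex/vortex-3b")

# Expected values given the Pythia-2.8b lineage (assumption, not stated in the table above)
print(config.model_type)      # e.g. "gpt_neox"
print(config.architectures)   # e.g. ["GPTNeoXForCausalLM"]
```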
Model Index
- Name: vortex-3b
- Results:
  - Task: Text Generation
    - Dataset: AI2 Reasoning Challenge (25-Shot)
      - Type: ai2_arc
      - Config: ARC-Challenge
      - Split: test
      - Args: num_few_shot = 25
    - Metrics:
      - Type: acc_norm
      - Value: 31.91
      - Name: normalized accuracy
    - Source: Open LLM Leaderboard
  - Task: Text Generation
    - Dataset: HellaSwag (10-Shot)
      - Type: hellaswag
      - Split: validation
      - Args: num_few_shot = 10
    - Metrics:
      - Type: acc_norm
      - Value: 56.89
      - Name: normalized accuracy
    - Source: Open LLM Leaderboard
  - Task: Text Generation
    - Dataset: MMLU (5-Shot)
      - Type: cais/mmlu
      - Config: all
      - Split: test
      - Args: num_few_shot = 5
    - Metrics:
      - Type: acc
      - Value: 27.32
      - Name: accuracy
    - Source: Open LLM Leaderboard
  - Task: Text Generation
    - Dataset: TruthfulQA (0-shot)
      - Type: truthful_qa
      - Config: multiple_choice
      - Split: validation
      - Args: num_few_shot = 0
    - Metrics:
      - Type: mc2
      - Value: 37.39
      - Name: mc2
    - Source: Open LLM Leaderboard
  - Task: Text Generation
    - Dataset: Winogrande (5-shot)
      - Type: winogrande
      - Config: winogrande_xl
      - Split: validation
      - Args: num_few_shot = 5
    - Metrics:
      - Type: acc
      - Value: 60.14
      - Name: accuracy
    - Source: Open LLM Leaderboard
  - Task: Text Generation
    - Dataset: GSM8k (5-shot)
      - Type: gsm8k
      - Config: main
      - Split: test
      - Args: num_few_shot = 5
    - Metrics:
      - Type: acc
      - Value: 0.91
      - Name: accuracy
    - Source: Open LLM Leaderboard
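All of the entries above come from the Open LLM Leaderboard, which scores submissions with EleutherAI's lm-evaluation-harness. The sketch below shows one way to reproduce a single entry locally; it assumes lm-eval (>= 0.4) is installed via pip install lm-eval, and the exact harness version and settings pinned by the leaderboard may differ, so scores can deviate slightly.

```python
import lm_eval

# Score Vortex-3B on ARC-Challenge with 25-shot prompting, mirroring the leaderboard setting.
# Assumption: lm-eval-harness >= 0.4; the leaderboard may pin a different version/config.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=OEvortex/vortex-3b",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```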
Evaluation Results
Open LLM Leaderboard Evaluation Results
Detailed results can be found here
| Metric | vortex-3b | vortex-3b-v2 | dolly-v2-3b | pythia-2.8b-deduped |
|--------|-----------|--------------|-------------|---------------------|
| Avg. | 35.76 | 37.46 | 25.26 | 36.72 |
| AI2 Reasoning Challenge (25-Shot) | 31.91 | 39.68 | 22.83 | 36.26 |
| HellaSwag (10-Shot) | 56.89 | 65.04 | 26.55 | 60.66 |
| MMLU (5-Shot) | 27.32 | 25.09 | 24.7 | 26.78 |
| TruthfulQA (0-shot) | 37.39 | 33.80 | 0 | 35.56 |
| Winogrande (5-shot) | 60.14 | 59.12 | 59.43 | 60.22 |
| GSM8k (5-shot) | 0.91 | 2.05 | 1.86 | 0.83 |
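As a quick sanity check, the Avg. row is simply the unweighted mean of the six benchmark scores; for the vortex-3b column:

```python
# Unweighted mean of the six benchmark scores reported for vortex-3b above
scores = [31.91, 56.89, 27.32, 37.39, 60.14, 0.91]
print(round(sum(scores) / len(scores), 2))  # -> 35.76, matching the Avg. row
```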
License
This model is released under the HelpingAI License.