EVA Qwen2.5 14B V0.2
A specialized model focused on role-playing/story writing, based on Qwen2.5-14B with full-parameter fine-tuning, incorporating synthetic and natural data.
Downloads: 287
Release Date: 11/6/2024
Model Overview
This model specializes in role-playing and story-writing tasks; full-parameter fine-tuning improves its coherence, instruction following, and long-context understanding. It uses the ChatML prompt format and is well suited to creative writing and interactive role-playing scenarios.
Model Features
Optimized Role-Playing Capability
Fine-tuned specifically for role-playing scenarios, capable of generating coherent dialogues that align with character settings.
Long-Context Understanding
Supports a context window of 10,240 tokens (the training sequence length), suitable for developing complex storylines.
Creative Writing Enhancement
Incorporates multiple writing datasets to improve diversity and creativity in story creation.
Improved Instruction-Following
Significantly enhanced instruction comprehension and execution compared to v0.1.
Model Capabilities
Role-playing dialogue generation
Story creation
Creative writing
Long-form text generation
Instruction following
Use Cases
Creative Writing
- Novel Writing: generates coherent storylines and character dialogue, producing long-form text with consistent character personalities and story development.
- Script Writing: generates script segments from prompts while keeping scenes and characters consistent.
Interactive Entertainment
- Role-Playing Games: provides dynamic dialogue as game NPCs, generating responses that align with character settings.
- Virtual Character Chat: engages in role-playing dialogue with users, maintaining character consistency and developing storylines.
EVA Qwen2.5-14B v0.2
An RP/storywriting specialist model: a full-parameter finetune of Qwen2.5-14B on a mixture of synthetic and natural data.
Quick Start
This is a role-playing and story-writing specialist model: a full-parameter fine-tune of Qwen2.5-14B on a mixture of synthetic and natural data. It uses the Celeste 70B 0.1 data mixture, greatly expanding it to improve the versatility, creativity, and "flavor" of the resulting model.
Features
- Version 0.2 Improvements: Now using the refined dataset from 32B 0.2, with major improvements in coherence, instruction following, and long-context comprehension over 14B v0.1.
- Prompt format: ChatML.
Installation
No installation steps are provided in the original document.
Usage Examples
No code examples are provided in the original document.
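Since the card itself ships no example, the following is a minimal, hedged sketch of loading the model with Hugging Face transformers and prompting it in ChatML format. The repository id, character name, and prompts are assumptions for illustration; min_p sampling requires a reasonably recent transformers release.

# Minimal usage sketch (not from the original card): load the model and prompt
# it in ChatML format via the tokenizer's chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2"  # assumed repository id; adjust if different

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" requires the accelerate package
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# ChatML turns; Qwen2.5 tokenizers ship a chat template that emits the
# <|im_start|>/<|im_end|> markers this model expects.
messages = [
    {"role": "system", "content": "You are Mira, a sardonic starship engineer. Stay in character."},
    {"role": "user", "content": "The reactor is making that noise again. What do we do?"},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Sampling roughly follows the card's recommended values; Top-A has no standard
# transformers equivalent and is omitted here.
output = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.8,
    min_p=0.05,
    repetition_penalty=1.03,
)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))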
Documentation
Recommended Sampler Values
- Temperature: 0.8
- Min-P: 0.05
- Top-A: 0.3
- Repetition Penalty: 1.03
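Top-A is not a standard transformers sampling argument, but several community inference backends expose it (along with Min-P) through an OpenAI-compatible API. Below is a hedged sketch of passing the recommended values to such a server; the endpoint URL, registered model name, and the exact names of the non-standard parameters vary by backend and are assumptions here. It also shows what a raw ChatML prompt looks like.

# Hedged sketch: send the recommended sampler values to a locally hosted,
# OpenAI-compatible completion endpoint. The URL, model name, and the
# non-standard parameters (min_p, top_a) are backend-dependent assumptions.
import requests

payload = {
    "model": "EVA-Qwen2.5-14B-v0.2",  # name as registered on the local server (assumed)
    "prompt": (
        "<|im_start|>system\nYou are a storyteller.<|im_end|>\n"
        "<|im_start|>user\nContinue the tale of the lighthouse keeper.<|im_end|>\n"
        "<|im_start|>assistant\n"
    ),
    "max_tokens": 400,
    "temperature": 0.8,
    "min_p": 0.05,
    "top_a": 0.3,
    "repetition_penalty": 1.03,
}
response = requests.post("http://localhost:5000/v1/completions", json=payload, timeout=120)
print(response.json()["choices"][0]["text"])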
Recommended SillyTavern Presets (via CalamitousFelicitousness)
Training Data
- Celeste 70B 0.1 data mixture minus Opus Instruct subset. See that model's card for details.
- Kalomaze's Opus_Instruct_25k dataset, filtered for refusals.
- A subset (1k rows) of ChatGPT-4o-WritingPrompts by Gryphe
- A subset (2k rows) of Sonnet3.5-Charcards-Roleplay by Gryphe
- Synthstruct and SynthRP datasets by Epiculous
- A subset from Dolphin-2.9.3, including a filtered version of not_samantha and a small subset of systemchat.
Training Time and Hardware
- 3 hours on 8xH100 SXM, provided by FeatherlessAI
Model Creators
The model was created by Kearm, Auri, and Cahvay.
Special Thanks
- To Cahvay: For his work on investigating and reprocessing the corrupted dataset, removing the single biggest source of data poisoning.
- To FeatherlessAI: For generously providing the 8xH100 SXM node used to train this model.
- To Gryphe, Lemmy, Kalomaze, Nopm, Epiculous, and CognitiveComputations for the data.
- And to Allura-org for support, feedback, beta-testing, and doing quality control of EVA models.
Technical Details
See axolotl config
axolotl version: 0.4.1
base_model: Qwen/Qwen2.5-14B
load_in_8bit: false
load_in_4bit: false
strict: false
plugins:
- axolotl.integrations.liger.LigerPlugin
liger_rope: true
liger_rms_norm: true
liger_swiglu: true
liger_fused_linear_cross_entropy: true
datasets:
- path: datasets/Celeste_Filtered_utf8fix.jsonl
type: sharegpt
- path: datasets/deduped_not_samantha_norefusals.jsonl
type: sharegpt
- path: datasets/deduped_SynthRP-Gens_processed_ShareGPT_converted_cleaned.jsonl
type: sharegpt
- path: datasets/deduped_Synthstruct-Gens_processed_sharegpt_converted_cleaned.jsonl
type: sharegpt
- path: datasets/Gryphe-4o-WP-filtered-sharegpt_utf8fix.jsonl
type: sharegpt
- path: datasets/opus-instruct-22k-no_refusals-filtered_utf8fix.jsonl
type: sharegpt
- path: datasets/Sonnet3-5-charcard-names-filtered-sharegpt_utf8fix.jsonl
type: sharegpt
- path: datasets/SystemChat_subset_filtered_sharegpt_utf8fix.jsonl
type: sharegpt
chat_template: chatml
shuffle_merged_datasets: true
val_set_size: 0.005
output_dir: ./EVA-Qwen2.5-14B-SFFT-v0.2
sequence_len: 10240
sample_packing: true
eval_sample_packing: false
pad_to_sequence_len: true
# adapter: qlora
# lora_model_dir:
# lora_r: 32
# lora_alpha: 16
# lora_dropout: 0.05
# lora_target_linear: true
# peft_use_dora: true
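# Selective full-parameter finetune: only the parameter names listed below are
# unfrozen and trained; everything else in the model stays frozen. The layer
# selection is consistent with a Spectrum-style scan of the base model (an
# inference from the "SFFT" naming, not stated explicitly in the card).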
unfrozen_parameters:
- ^lm_head.weight$
- ^model.embed_tokens.weight$
# mlp.down_proj layers
- model.layers.1.mlp.down_proj
- model.layers.35.mlp.down_proj
- model.layers.38.mlp.down_proj
- model.layers.37.mlp.down_proj
- model.layers.36.mlp.down_proj
- model.layers.15.mlp.down_proj
- model.layers.11.mlp.down_proj
- model.layers.12.mlp.down_proj
- model.layers.34.mlp.down_proj
- model.layers.44.mlp.down_proj
- model.layers.45.mlp.down_proj
- model.layers.9.mlp.down_proj
- model.layers.41.mlp.down_proj
- model.layers.33.mlp.down_proj
- model.layers.43.mlp.down_proj
- model.layers.40.mlp.down_proj
- model.layers.13.mlp.down_proj
- model.layers.8.mlp.down_proj
- model.layers.39.mlp.down_proj
- model.layers.10.mlp.down_proj
- model.layers.14.mlp.down_proj
- model.layers.16.mlp.down_proj
- model.layers.31.mlp.down_proj
- model.layers.32.mlp.down_proj
# mlp.gate_proj layers
- model.layers.1.mlp.gate_proj
- model.layers.44.mlp.gate_proj
- model.layers.46.mlp.gate_proj
- model.layers.45.mlp.gate_proj
- model.layers.43.mlp.gate_proj
- model.layers.47.mlp.gate_proj
- model.layers.42.mlp.gate_proj
- model.layers.32.mlp.gate_proj
- model.layers.27.mlp.gate_proj
- model.layers.33.mlp.gate_proj
- model.layers.28.mlp.gate_proj
- model.layers.39.mlp.gate_proj
- model.layers.41.mlp.gate_proj
- model.layers.40.mlp.gate_proj
- model.layers.30.mlp.gate_proj
- model.layers.29.mlp.gate_proj
- model.layers.31.mlp.gate_proj
- model.layers.37.mlp.gate_proj
- model.layers.26.mlp.gate_proj
- model.layers.10.mlp.gate_proj
- model.layers.38.mlp.gate_proj
- model.layers.36.mlp.gate_proj
- model.layers.12.mlp.gate_proj
- model.layers.13.mlp.gate_proj
# mlp.up_proj layers
- model.layers.1.mlp.up_proj
- model.layers.13.mlp.up_proj
- model.layers.11.mlp.up_proj
- model.layers.14.mlp.up_proj
- model.layers.15.mlp.up_proj
- model.layers.12.mlp.up_proj
- model.layers.8.mlp.up_proj
- model.layers.16.mlp.up_proj
- model.layers.9.mlp.up_proj
- model.layers.19.mlp.up_proj
- model.layers.10.mlp.up_proj
- model.layers.7.mlp.up_proj
- model.layers.17.mlp.up_proj
- model.layers.20.mlp.up_proj
- model.layers.21.mlp.up_proj
- model.layers.18.mlp.up_proj
- model.layers.37.mlp.up_proj
- model.layers.38.mlp.up_proj
- model.layers.39.mlp.up_proj
- model.layers.42.mlp.up_proj
- model.layers.41.mlp.up_proj
- model.layers.27.mlp.up_proj
- model.layers.28.mlp.up_proj
- model.layers.36.mlp.up_proj
# self_attn.k_proj layers
- model.layers.47.self_attn.k_proj
- model.layers.39.self_attn.k_proj
- model.layers.41.self_attn.k_proj
- model.layers.37.self_attn.k_proj
- model.layers.35.self_attn.k_proj
- model.layers.44.self_attn.k_proj
- model.layers.38.self_attn.k_proj
- model.layers.14.self_attn.k_proj
- model.layers.7.self_attn.k_proj
- model.layers.12.self_attn.k_proj
- model.layers.11.self_attn.k_proj
- model.layers.32.self_attn.k_proj
- model.layers.10.self_attn.k_proj
- model.layers.8.self_attn.k_proj
- model.layers.6.self_attn.k_proj
- model.layers.9.self_attn.k_proj
- model.layers.45.self_attn.k_proj
- model.layers.42.self_attn.k_proj
- model.layers.40.self_attn.k_proj
- model.layers.5.self_attn.k_proj
- model.layers.0.self_attn.k_proj
- model.layers.33.self_attn.k_proj
- model.layers.34.self_attn.k_proj
- model.layers.13.self_attn.k_proj
# self_attn.o_proj layers
- model.layers.12.self_attn.o_proj
- model.layers.5.self_attn.o_proj
- model.layers.14.self_attn.o_proj
- model.layers.16.self_attn.o_proj
- model.layers.20.self_attn.o_proj
- model.layers.13.self_attn.o_proj
- model.layers.11.self_attn.o_proj
- model.layers.4.self_attn.o_proj
- model.layers.6.self_attn.o_proj
- model.layers.19.self_attn.o_proj
- model.layers.7.self_attn.o_proj
- model.layers.18.self_attn.o_proj
- model.layers.8.self_attn.o_proj
- model.layers.38.self_attn.o_proj
- model.layers.15.self_attn.o_proj
- model.layers.17.self_attn.o_proj
- model.layers.9.self_attn.o_proj
- model.layers.10.self_attn.o_proj
- model.layers.21.self_attn.o_proj
- model.layers.28.self_attn.o_proj
- model.layers.32.self_attn.o_proj
- model.layers.35.self_attn.o_proj
- model.layers.39.self_attn.o_proj
- model.layers.3.self_attn.o_proj
# self_attn.q_proj layers
- model.layers.1.self_attn.q_proj
- model.layers.2.self_attn.q_proj
- model.layers.3.self_attn.q_proj
- model.layers.44.self_attn.q_proj
- model.layers.29.self_attn.q_proj
- model.layers.45.self_attn.q_proj
- model.layers.43.self_attn.q_proj
- model.layers.32.self_attn.q_proj
- model.layers.38.self_attn.q_proj
- model.layers.19.self_attn.q_proj
- model.layers.42.self_attn.q_proj
- model.layers.34.self_attn.q_proj
- model.layers.36.self_attn.q_proj
- model.layers.40.self_attn.q_proj
- model.layers.26.self_attn.q_proj
- model.layers.20.self_attn.q_proj
- model.layers.28.self_attn.q_proj
- model.layers.39.self_attn.q_proj
- model.layers.41.self_attn.q_proj
- model.layers.33.self_attn.q_proj
- model.layers.35.self_attn.q_proj
- model.layers.25.self_attn.q_proj
- model.layers.30.self_attn.q_proj
- model.layers.27.self_attn.q_proj
# self_attn.v_proj layers
- model.layers.0.self_attn.v_proj
- model.layers.7.self_attn.v_proj
- model.layers.39.self_attn.v_proj
- model.layers.31.self_attn.v_proj
- model.layers.15.self_attn.v_proj
- model.layers.10.self_attn.v_proj
- model.layers.41.self_attn.v_proj
- model.layers.32.self_attn.v_proj
- model.layers.6.self_attn.v_proj
- model.layers.33.self_attn.v_proj
- model.layers.42.self_attn.v_proj
- model.layers.29.self_attn.v_proj
- model.layers.9.self_attn.v_proj
- model.layers.14.self_attn.v_proj
- model.layers.35.self_attn.v_proj
- model.layers.38.self_attn.v_proj
- model.layers.13.self_attn.v_proj
- model.layers.30.self_attn.v_proj
- model.layers.34.self_attn.v_proj
- model.layers.5.self_attn.v_proj
- model.layers.28.self_attn.v_proj
- model.layers.37.self_attn.v_proj
- model.layers.27.self_attn.v_proj
- model.layers.11.self_attn.v_proj
wandb_project: EVA-Qwen2.5-14B-SFFT-v0.2
wandb_entity:
wandb_watch:
wandb_name: Unit-02
wandb_log_model:
gradient_accumulation_steps: 8
micro_batch_size: 2
num_epochs: 3
optimizer: paged_ademamix_8bit
lr_scheduler: cosine
learning_rate: 0.00005
max_grad_norm: 3
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: "unsloth"
# gradient_checkpointing_kwargs:
# use_reentrant: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 20
evals_per_epoch: 4
saves_per_epoch: 4
save_safetensors: true
hub_model_id:
hub_strategy:
debug:
deepspeed: deepspeed_configs/zero3_bf16.json
weight_decay: 0.1
# fsdp:
# - full_shard
# - auto_wrap
# fsdp_config:
# fsdp_limit_all_gathers: true
# fsdp_sync_module_states: false
# fsdp_offload_params: true
# fsdp_cpu_ram_efficient_loading: true
# fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
# fsdp_transformer_layer_cls_to_wrap: Q
License
The model is licensed under the Apache-2.0 license.