L3 Deluxe Scrambled Eggs On Toast 8B GGUF
A role-playing model built by merging 36 models across 23 merge steps, balancing creativity and intelligence.
Downloads: 250
Release date: 7/21/2024
Model Overview
This model targets role-playing tasks by merging multiple Llama-3 variant models, aiming to balance creativity and intelligence. A gradient method distributes the model weights: the core weight region enhances intelligence, while the remainder enhances creativity.
Model Features
Multi-model fusion
Integrates the strengths of 36 different models, refined through 23 merging steps.
Intelligence-creativity balance
Allocates weights via a gradient method: the core region enhances intelligence, while peripheral weights enhance creativity.
Role-playing optimization
Designed specifically for role-playing scenarios; the SillyTavern presets are recommended.
Model Capabilities
Role-playing dialogue generation
Creative text writing
Instruction following
Long context processing (2048 tokens recommended)
Use Cases
Entertainment
Interactive role-playing
Conduct immersive dialogue interactions with AI characters.
Can generate natural responses that match the character settings.
Creative writing
Story generation
Assist creators in fantasy literature creation.
Provide imaginative narrative content.
QuantFactory/L3-Deluxe-Scrambled-Eggs-On-Toast-8B-GGUF
This is a quantized version of Casual-Autopsy/L3-Deluxe-Scrambled-Eggs-On-Toast-8B created using llama.cpp, aiming to provide a more efficient and practical model for text generation.
Features
- Role-play Capability: L3-Deluxe-Scrambled-Eggs-On-Toast-8B is a role-play model merge combining 36 models in 23 merging steps. It uses gradients to balance creativity and intelligence.
- Inspiration from Multiple Models: Inspired by models such as grimjim/kunoichi-lemon-royale-v3-32K-7B, invisietch/EtherealRainbow-v0.3-8B, and PJMixers/LLaMa-3-CursedStock-v2.0-8B.
- Customizable Settings: Offers recommended sampler settings for different goals, including more creativity and, potentially, more adherence.
Documentation
Instruct Format
Llama 3
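For reference, the Llama 3 instruct format wraps each turn in header tokens. Below is a minimal sketch of the template in Python; the token strings follow the standard Llama 3 chat format, while the prompt text itself is purely illustrative:

```python
def llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn prompt in the Llama 3 instruct format."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n" + system + "<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n" + user + "<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

# Illustrative role-play prompt (character and text are made up)
prompt = llama3_prompt("You are Seraphina, a knight of the realm.",
                       "Who goes there?")
```

SillyTavern's Llama 3 context/instruct presets emit this same structure, so you normally do not need to build it by hand.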
Settings/Presets
Instruct/Context
It is recommended to use Virt-io's SillyTavern Presets.
Sampler Settings
Here are the current recommended settings for more creativity:
- Top K: 60
- Min P: 0.035
- Rep Pen: 1.05
- Rep Pen Range: 2048
- Pres Pen: 0.15
- Smoothing Factor: 0.25
- Dyna Temp:
  - Min Temp: 0.75
  - Max Temp: 1.5
  - Expo: 0.85
No known presets for more adherence. Please recommend some if you can!
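To illustrate how the Dynamic Temperature entry behaves: entropy-based DynaTemp (as in llama.cpp-style samplers) maps normalized token entropy onto the [Min Temp, Max Temp] range, shaped by the exponent. A hedged sketch under that assumption; the exact scaling inside your backend may differ:

```python
# Recommended settings from above, collected for reference
SETTINGS = {
    "top_k": 60, "min_p": 0.035,
    "rep_pen": 1.05, "rep_pen_range": 2048,
    "pres_pen": 0.15, "smoothing_factor": 0.25,
}

def dynatemp(norm_entropy: float, min_t: float = 0.75,
             max_t: float = 1.5, expo: float = 0.85) -> float:
    """Map normalized entropy in [0, 1] to a sampling temperature.

    Low-entropy (confident) steps sample near min_t; high-entropy
    steps approach max_t, with expo bending the curve.
    """
    return min_t + (max_t - min_t) * (norm_entropy ** expo)
```

In practice this means confident tokens are sampled conservatively while uncertain ones get more creative freedom, which suits role-play output.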
Quants
- Weighted quants: By mradermacher
- Static quants: By mradermacher
Technical Details
Secret Sauce
Models Used
L3-Scrambled-Eggs-On-Toast-8B is a merge of the following models using LazyMergekit:
- Sao10K/L3-8B-Stheno-v3.2
- ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
- Nitral-AI/Hathor_Stable-v0.2-L3-8B
- NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- openlynn/Llama-3-Soliloquy-8B-v2
- NousResearch/Meta-Llama-3-8B-Instruct
- turboderp/llama3-turbcat-instruct-8b
- VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
- TIGER-Lab/MAmmoTH2-8B-Plus
- jondurbin/bagel-8b-v1.0
- abacusai/Llama-3-Smaug-8B
- failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
- AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
- lodrick-the-lafted/Limon-8B
- vicgalle/Configurable-Llama-3-8B-v0.3
- Undi95/Llama3-Unholy-8B-OAS
- Undi95/Unholy-8B-DPO-OAS
- WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
- migtissera/Tess-2.0-Llama-3-8B
- defog/llama-3-sqlcoder-8b
- HPAI-BSC/Llama3-Aloe-8B-Alpha
- maldv/llama-3-fantasy-writer-8b
- lodrick-the-lafted/Olethros-8B
- Magpie-Align/Llama-3-8B-ShareGPT-112K
- Magpie-Align/Llama-3-8B-WildChat
- Magpie-Align/Llama-3-8B-Tulu-330K
- Magpie-Align/Llama-3-8B-OpenHermes-243K
- Magpie-Align/Llama-3-8B-WizardLM-196K
- Magpie-Align/Llama-3-8B-Ultrachat-200K
- refuelai/Llama-3-Refueled
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- migtissera/Llama-3-8B-Synthia-v3.5
- chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
- chujiezheng/LLaMA3-iterative-DPO-final-ExPO
- chargoddard/prometheus-2-llama-3-8b
YAML Configs Used
The following YAML configs were used to make this model:
Eggs-and-Bread-RP-pt.1
```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
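The weight lists above follow a simple rotating gradient: each non-base model peaks at 0.33 in a different slice of the layer range while contributing 0.0825 elsewhere (the pt.2 configs rotate the peak in the opposite direction). A small sketch reproducing the pattern:

```python
def gradient_weights(n_models: int, peak: float = 0.33,
                     rest: float = 0.0825) -> list[list[float]]:
    """One weight list per model; model i peaks at position i."""
    return [[peak if j == i else rest for j in range(n_models)]
            for i in range(n_models)]

rows = gradient_weights(5)
```

Each list sums to 0.66, so no single layer region is dominated by one donor model.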
Eggs-and-Bread-RP-pt.2
```yaml
models:
  - model: Sao10K/L3-8B-Stheno-v3.2
  - model: ChaoticNeutrals/Poppy_Porpoise-1.0-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: openlynn/Llama-3-Soliloquy-8B-v2
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-RP
```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-RP-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```
Eggs-and-Bread-IQ-pt.1
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-IQ-pt.2
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B-Instruct
  - model: turboderp/llama3-turbcat-instruct-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: VAGOsolutions/Llama-3-SauerkrautLM-8b-Instruct
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: TIGER-Lab/MAmmoTH2-8B-Plus
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: jondurbin/bagel-8b-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: abacusai/Llama-3-Smaug-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-IQ
```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-IQ-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```
Eggs-and-Bread-Uncen-pt.1
```yaml
models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-Uncen-pt.2
```yaml
models:
  - model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
  - model: AwanLLM/Awanllm-Llama-3-8B-Cumulus-v1.0
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: lodrick-the-lafted/Limon-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: vicgalle/Configurable-Llama-3-8B-v0.3
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Undi95/Llama3-Unholy-8B-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Undi95/Unholy-8B-DPO-OAS
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: failspy/Meta-Llama-3-8B-Instruct-abliterated-v3
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-Uncen
```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Uncen-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```
Scrambled-Eggs-On-Toast-1
```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-RP
  - model: Casual-Autopsy/Eggs-and-Bread-Uncen
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-RP
parameters:
  t:
    - value: [0.1, 0.15, 0.2, 0.4, 0.6, 0.4, 0.2, 0.15, 0.1]
dtype: bfloat16
```
L3-Scrambled-Eggs-On-Toast-8B
```yaml
models:
  - model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
  - model: Casual-Autopsy/Eggs-and-Bread-IQ
merge_method: slerp
base_model: Casual-Autopsy/Scrambled-Eggs-On-Toast-1
parameters:
  t:
    - value: [0.7, 0.5, 0.3, 0.25, 0.2, 0.25, 0.3, 0.5, 0.7]
dtype: bfloat16
```
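The assembly steps above use spherical linear interpolation (slerp) between two checkpoints, with the `t` curves biasing different layer depths toward one parent or the other. A minimal stdlib sketch of slerp on plain vectors, for intuition only; mergekit applies the same idea per weight tensor:

```python
import math

def slerp(v0, v1, t):
    """Spherically interpolate between vectors v0 and v1 at fraction t."""
    dot = sum(a * b for a, b in zip(v0, v1))
    n0 = math.sqrt(sum(a * a for a in v0))
    n1 = math.sqrt(sum(b * b for b in v1))
    # Clamp for numerical safety before acos
    omega = math.acos(max(-1.0, min(1.0, dot / (n0 * n1))))
    if omega < 1e-8:  # nearly parallel: fall back to linear interpolation
        return [(1 - t) * a + t * b for a, b in zip(v0, v1)]
    s = math.sin(omega)
    return [(math.sin((1 - t) * omega) / s) * a +
            (math.sin(t * omega) / s) * b
            for a, b in zip(v0, v1)]
```

At t = 0 the result is the base model's tensor, at t = 1 the other parent's; intermediate t values rotate along the arc between them rather than averaging them linearly.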
Eggs-and-Bread-Misc1-pt.1
```yaml
models:
  - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  - model: migtissera/Tess-2.0-Llama-3-8B
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-Misc1-pt.2
```yaml
models:
  - model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
  - model: migtissera/Tess-2.0-Llama-3-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: defog/llama-3-sqlcoder-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: maldv/llama-3-fantasy-writer-8b
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: lodrick-the-lafted/Olethros-8B
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-Misc1
```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-Misc1-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```
Eggs-and-Bread-FFT-pt.1
```yaml
models:
  - model: Magpie-Align/Llama-3-8B-ShareGPT-112K
  - model: Magpie-Align/Llama-3-8B-WildChat
    parameters:
      density: 0.5
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Tulu-330K
    parameters:
      density: 0.5
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-OpenHermes-243K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Ultrachat-200K
    parameters:
      density: 0.5
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
merge_method: dare_ties
base_model: Magpie-Align/Llama-3-8B-ShareGPT-112K
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-FFT-pt.2
```yaml
models:
  - model: Magpie-Align/Llama-3-8B-ShareGPT-112K
  - model: Magpie-Align/Llama-3-8B-WildChat
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.0825, 0.33]
  - model: Magpie-Align/Llama-3-8B-Tulu-330K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.0825, 0.33, 0.0825]
  - model: Magpie-Align/Llama-3-8B-OpenHermes-243K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.0825, 0.33, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-WizardLM-196K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.0825, 0.33, 0.0825, 0.0825, 0.0825]
  - model: Magpie-Align/Llama-3-8B-Ultrachat-200K
    parameters:
      gamma: 0.01
      density: 0.9
      weight: [0.33, 0.0825, 0.0825, 0.0825, 0.0825]
merge_method: breadcrumbs_ties
base_model: Magpie-Align/Llama-3-8B-ShareGPT-112K
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
Eggs-and-Bread-FFT
```yaml
models:
  - model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.1
  - model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.2
merge_method: slerp
base_model: Casual-Autopsy/Eggs-and-Bread-FFT-pt.1
parameters:
  t:
    - filter: self_attn
      value: [0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5]
    - filter: mlp
      value: [0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5, 0.7, 0.3, 0.5, 0.3, 0.7, 0.5]
    - value: 0.5
dtype: bfloat16
```
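Any of the configs above can, in principle, be reproduced by saving the YAML to a file and running mergekit's `mergekit-yaml` command-line tool. A hedged sketch that only builds the invocation (the file name, output path, and flags here are illustrative; check mergekit's documentation for the options supported by your version):

```python
import shlex

def mergekit_command(config_path: str, out_dir: str) -> str:
    """Build the mergekit-yaml invocation for a saved merge config."""
    # --copy-tokenizer carries the base model's tokenizer into the output
    return shlex.join(["mergekit-yaml", config_path, out_dir,
                       "--copy-tokenizer"])

# Hypothetical file names for the first RP config
cmd = mergekit_command("eggs-and-bread-rp-pt1.yaml", "./merged")
```

The intermediate `pt.1`/`pt.2` merges must be produced (and referenced by local path or uploaded repo name) before the slerp configs that consume them can run.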
License
This model is under the llama3 license.