🚀 ZeroXClem-Llama-3.1-8B-SpecialTitanFusion
A powerful fusion of Titan-level models, designed for enhanced roleplay, creativity, and intelligence.

🚀 Quick Start
ZeroXClem-Llama-3.1-8B-SpecialTitanFusion is a carefully crafted merge of several high-performance Llama-3.1 8B models (see the Overview below for details). You can get started in any of the following ways:
🔥 Ollama (Quick Inference)
You can run the model with Ollama for quick, direct testing:

```bash
ollama run hf.co/ZeroXClem/Llama-3.1-8B-SpecialTitanFusion
```
🤗 Hugging Face Transformers (Python)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch

model_name = "ZeroXClem/Llama-3.1-8B-SpecialTitanFusion"

# Load the tokenizer and model in bfloat16, spreading layers across available devices.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Build a text-generation pipeline around the already-loaded model and tokenizer.
text_generator = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

prompt = "Describe the significance of AI ethics in modern technology."

outputs = text_generator(
    prompt,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

print(outputs[0]["generated_text"])
```
✨ Features
- 🔹 Highly dynamic writing – Perfect for storytelling, world-building, and creative applications.
- 🔹 Refined roleplay abilities – Enhanced persona handling, deep emotional responses, and immersive dialogue generation.
- 🔹 Better structured recall – Improved consistency across large-context conversations.
- 🔹 Balanced & non-restrictive responses – Adaptable across different use cases.
📦 Installation
This model is hosted on Hugging Face, so there is nothing to install beyond the Python libraries used in the Quick Start section; a typical setup is sketched below.
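A minimal environment setup, assuming a recent Python installation (`accelerate` is what enables `device_map="auto"`):

```bash
pip install torch transformers accelerate
```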
📚 Documentation
📌 Overview
ZeroXClem-Llama-3.1-8B-SpecialTitanFusion is a meticulously crafted model merge leveraging state-of-the-art transformer architectures. Using [mergekit](https://github.com/arcee-ai/mergekit), we combined multiple high-performance Llama-3.1 models to enhance context retention, creativity, and nuanced text generation.
This model is based on kromeurus/L3.1-Siithamo-v0.4-8B, with carefully selected models merged using the model_stock method.
🛠 Merge Details
🔄 Merge Method: model_stock
This model was merged using the model_stock method, ensuring a balanced and optimized blend of all contributing architectures.
📑 Models Merged
The following models contributed to this fusion:

- [kromeurus/L3.1-Siithamo-v0.4-8B](https://huggingface.co/kromeurus/L3.1-Siithamo-v0.4-8B) (base)
- [bunnycore/Llama-3.1-8B-TitanFusion-Test](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Test)
- [vicgalle/Roleplay-Hermes-3-Llama-3.1-8B](https://huggingface.co/vicgalle/Roleplay-Hermes-3-Llama-3.1-8B)
- [vicgalle/Humanish-Roleplay-Llama-3.1-8B](https://huggingface.co/vicgalle/Humanish-Roleplay-Llama-3.1-8B)
- [bunnycore/Llama-3.1-8B-TitanFusion-Mix](https://huggingface.co/bunnycore/Llama-3.1-8B-TitanFusion-Mix)
⚙ Configuration
```yaml
name: ZeroXClem-Llama-3.1-8B-SpecialTitanFusion
base_model: kromeurus/L3.1-Siithamo-v0.4-8B
dtype: bfloat16
merge_method: model_stock
models:
  - model: bunnycore/Llama-3.1-8B-TitanFusion-Test
  - model: vicgalle/Roleplay-Hermes-3-Llama-3.1-8B
  - model: vicgalle/Humanish-Roleplay-Llama-3.1-8B
  - model: bunnycore/Llama-3.1-8B-TitanFusion-Mix
tokenizer_source: kromeurus/L3.1-Siithamo-v0.4-8B
```
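To reproduce the merge, this configuration can be saved to a file (the name `config.yaml` below is just an example) and passed to mergekit's command-line tool; a minimal sketch, assuming mergekit is installed:

```bash
pip install mergekit
mergekit-yaml config.yaml ./ZeroXClem-Llama-3.1-8B-SpecialTitanFusion --cuda
```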
🔧 Recommended Usage
📜 Prompting Style
For best results, use system prompts similar to Llama-3.1 Instruct.
Example system message:

```
Think step by step with logical reasoning and intellectual sense before you provide any response.
```
For enhanced creativity in roleplay, try a system prompt like the following (applied programmatically in the sketch after this block):

```
### Instruction:
You are an advanced roleplaying assistant. Maintain deep character consistency and immersive storytelling.
```
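A minimal sketch of feeding a system prompt through the Llama-3.1 Instruct chat template, reusing the `tokenizer` and `text_generator` objects from the Quick Start section (the user message is only an illustration):

```python
# Compose a conversation with the recommended roleplay system message.
messages = [
    {
        "role": "system",
        "content": "You are an advanced roleplaying assistant. "
                   "Maintain deep character consistency and immersive storytelling.",
    },
    {"role": "user", "content": "Introduce yourself as the keeper of a floating library."},
]

# Render the messages into the model's native Llama-3.1 Instruct format.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # append the assistant header so the model starts replying
)

outputs = text_generator(prompt, max_new_tokens=200, do_sample=True, temperature=0.7)
print(outputs[0]["generated_text"])
```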
🏗 Model Settings
For optimal output quality, use the following sampler settings (a transformers mapping is sketched after the list):

- Temperature: 1.2
- Min P: 0.1
- Repeat Penalty: 1.05
- Repeat Penalty Tokens: 256
- Smooth Sampling: 0.18
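A rough mapping of these settings onto `transformers` generation arguments. The repeat-penalty window (256 tokens) and smooth/quadratic sampling are llama.cpp- and SillyTavern-style samplers with no direct `transformers` equivalent, so they are omitted here:

```python
outputs = text_generator(
    prompt,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.2,          # Temperature: 1.2
    min_p=0.1,                # Min P: 0.1 (requires a recent transformers release)
    repetition_penalty=1.05,  # Repeat Penalty: 1.05, applied over the full context
)
print(outputs[0]["generated_text"])
```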
🔥 Disclaimer
- 🔹 Use responsibly!
This model follows Meta’s Llama-3.1 Community License Agreement. It is an uncensored model, meaning that alignment should be implemented based on individual use cases.
- 🔹 You are responsible for the content you generate.
Please ensure compliance with ethical AI guidelines when deploying this model in production environments.
💬 Feedback & Contributions
If you have suggestions or improvements, feel free to open a discussion on Hugging Face! Let's continue improving the Llama-3.1 merging meta-game! 🚀
📊 Evaluation Results
Detailed results can be found on the Open LLM Leaderboard.

| Property | Details |
| --- | --- |
| Model Type | ZeroXClem-Llama-3.1-8B-SpecialTitanFusion |
| Training Data | Not provided |

| Metric | Value |
| --- | --- |
| Avg. | 29.23 |
| IFEval (0-Shot) | 74.02 |
| BBH (3-Shot) | 34.82 |
| MATH Lvl 5 (4-Shot) | 23.34 |
| GPQA (0-shot) | 6.60 |
| MuSR (0-shot) | 7.49 |
| MMLU-PRO (5-shot) | 29.12 |
📄 License
This model is released under the Apache-2.0 license. As a merge of Llama-3.1 derivatives, use of the weights is also subject to Meta's Llama-3.1 Community License Agreement, as noted in the Disclaimer above.