Qwen3-4B Roleplay LoRA
Where Characters Come Alive in Conversation
This LoRA fine-tuned model based on Qwen3-4B is designed for character-based conversations and roleplay scenarios. It helps bring digital companions to life with natural and engaging dialogue.
Quick Start
Hugging Face Transformers
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("chun121/qwen3-4b-roleplay-lora")
# Load the fine-tuned model in half precision and map it onto the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    "chun121/qwen3-4b-roleplay-lora",
    torch_dtype=torch.float16,
    device_map="auto"
)
character_prompt = """
Character: Elara, an elven mage with centuries of knowledge but little patience for novices
Setting: The Grand Library of Mystral
Context: A young apprentice has asked for help with a difficult spell
User: Excuse me, I'm having trouble with the fire conjuration spell. Could you help me?
Elara:
"""
inputs = tokenizer(character_prompt, return_tensors="pt").to(model.device)
# Sample with moderate temperature for creative but coherent dialogue
outputs = model.generate(
    **inputs,
    max_new_tokens=200,
    temperature=0.7,
    top_p=0.9,
    do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
Using GGUF Models
If you're using the GGUF exports with llama.cpp:
./llama-cli -m chun121-qwen3-4b-roleplay-lora.Q4_K_M.gguf -p "Character: Elara, an elven mage..." -n 200
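To mirror the sampling settings from the Transformers example above, recent llama.cpp builds also accept sampling and context-size flags; exact flag names can vary between versions, so treat this as a sketch and check --help for your build:

```bash
./llama-cli -m chun121-qwen3-4b-roleplay-lora.Q4_K_M.gguf \
  -p "Character: Elara, an elven mage..." \
  -n 200 -c 512 --temp 0.7 --top-p 0.9
```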
Features
- Maintain consistent character personas.
- Generate authentic dialogue that reflects character traits.
- Create immersive narrative responses.
- Remember context throughout conversations.
Installation
No separate installation package is required: the model loads directly from the Hugging Face Hub using the libraries shown in the Quick Start examples above.
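For reference, a typical environment for the Transformers example can be set up as follows; this package list is a reasonable assumption rather than an official requirements file:

```bash
pip install transformers accelerate torch
# optional: only needed for 4-bit bitsandbytes loading
pip install bitsandbytes
```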
Documentation
Technical Specifications
| Property | Details |
|---|---|
| Model Type | LoRA fine-tuned |
| Base Model | Qwen3-4B |
| Architecture | Transformer-based LLM with LoRA adaptation |
| Parameter Count | 4 billion (base) + LoRA parameters |
| Quantization Options | 4-bit (bnb), GGUF formats (Q8_0, F16, Q4_K_M) |
| Training Framework | Unsloth & TRL |
| Context Length | 512 tokens |
| Developer | Chun |
| License | Apache 2.0 |
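The 4-bit (bnb) option listed above can be used through Transformers' bitsandbytes integration. Below is a minimal loading sketch; the NF4 settings are a common default and an assumption, not the exact quantization configuration used for this release:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

# 4-bit NF4 quantization via bitsandbytes (assumed settings, not the release config)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained("chun121/qwen3-4b-roleplay-lora")
model = AutoModelForCausalLM.from_pretrained(
    "chun121/qwen3-4b-roleplay-lora",
    quantization_config=bnb_config,
    device_map="auto",
)
```

This trades a small amount of output quality for a much lower VRAM footprint, which helps on consumer GPUs.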
Training Methodology
This LoRA was trained on a free Google Colab T4 GPU, using 4-bit quantization to make the most of the limited resources (a configuration sketch follows the list below):
- Dataset: PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split
- LoRA Configuration:
- Rank: 16
- Alpha: 32
- Target Modules: Optimized for character dialogue generation
- Training Hyperparameters:
- Batch Size: 8
- Gradient Accumulation Steps: 4
- Learning Rate: 1e-4 with cosine scheduler
- Max Steps: 200
- Precision: FP16/BF16 (auto-detected)
- Packing: Enabled for efficient training
- QLoRA: 4-bit quantization via bitsandbytes
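For reference, here is a configuration sketch in the style of a standard Unsloth + TRL QLoRA run. This is not the exact training script: the target_modules list, dataset split, and text field name are assumptions, and depending on your TRL version the SFT options may need to be passed via SFTConfig rather than directly to SFTTrainer:

```python
from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

max_seq_length = 512  # matches the 512-token context length above

# Load the Qwen3-4B base model in 4-bit (QLoRA)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen3-4B",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters: rank 16, alpha 32; the card does not list the exact
# target modules, so the projection layers below are an assumption
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset(
    "PJMixers-Dev/Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split",
    split="train",  # assumed split name
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumed field name
    max_seq_length=max_seq_length,
    packing=True,               # pack short samples for efficient training
    args=TrainingArguments(
        per_device_train_batch_size=8,
        gradient_accumulation_steps=4,
        learning_rate=1e-4,
        lr_scheduler_type="cosine",
        max_steps=200,
        fp16=True,              # or bf16=True where supported
        output_dir="outputs",
    ),
)
trainer.train()
```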
Dataset Deep Dive
The Gryphe-Aesir-RPG-Charcards-Opus-Mixed-split dataset is a rich collection of character interactions featuring:
- Diverse character archetypes across different genres.
- Multi-turn conversations that maintain character consistency.
- Varied emotional contexts and scenarios.
- Rich descriptive language and character-driven responses.
This carefully curated dataset helps the model understand the nuances of character voices, maintaining consistent personalities while generating engaging responses.
Technical Details
The model is a LoRA adapter trained on top of the Qwen3-4B base model using 4-bit QLoRA quantization and the character-card dataset described above, which sharpens its performance in character-based conversations and roleplay scenarios.
Usage Tips
This model works best when:
- Providing character context: Include a brief description of the character's personality, background, and current situation.
- Setting the scene: Give context about the environment and circumstances.
- Using chat format: Structure inputs as a conversation between User/Human and Character (see the sketch after this list).
- Tuning temperature: Values between 0.7 and 0.8 offer a good balance of creativity and coherence.
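Since Qwen3's tokenizer ships a chat template, the character card can also be packed into a system message. Whether this LoRA was trained on that template or on the plain-text card format shown in the Quick Start is not documented, so treat the following as a sketch and compare outputs from both formats:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chun121/qwen3-4b-roleplay-lora")

# Character card and scene in the system message, user turns after it
messages = [
    {"role": "system", "content": (
        "Character: Elara, an elven mage with centuries of knowledge "
        "but little patience for novices.\n"
        "Setting: The Grand Library of Mystral."
    )},
    {"role": "user", "content": "Excuse me, I'm having trouble with the "
                                "fire conjuration spell. Could you help me?"},
]

prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)  # feed this string to model.generate() as in the Quick Start
```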
Important Notes
- Limited to a 512-token context window (see the prompt-trimming sketch after this list).
- May occasionally "forget" character traits in very long conversations.
- Training dataset focuses primarily on fantasy/RPG contexts.
- As a LoRA fine-tune, inherits limitations of the base Qwen3-4B model.
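One way to stay inside the 512-token window is to pin the character card and drop the oldest turns; a minimal sketch follows (the budget split and helper function are illustrative, not part of the model card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("chun121/qwen3-4b-roleplay-lora")

MAX_CONTEXT = 512
RESPONSE_BUDGET = 200  # tokens reserved for the model's reply

def build_prompt(character_card: str, turns: list[str], speaker: str = "Elara") -> str:
    """Keep the character card, dropping the oldest turns until the prompt fits."""
    budget = MAX_CONTEXT - RESPONSE_BUDGET
    kept = list(turns)
    while kept:
        prompt = character_card + "\n" + "\n".join(kept) + f"\n{speaker}:"
        if len(tokenizer(prompt)["input_ids"]) <= budget:
            return prompt
        kept.pop(0)  # trim the oldest turn first
    return character_card + f"\n{speaker}:"
```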
License
This model is licensed under the Apache 2.0 license.
Acknowledgements
Special thanks to:
- The Qwen team for their incredible base model.
- PJMixers-Dev for the high-quality dataset.
- The Unsloth team for making efficient fine-tuning accessible.
- The HuggingFace community for their continued support.
Feedback & Contact
I'd love to hear how this model works for your projects! Feel free to:
- Open an issue on the HuggingFace repo.
- Connect with me on HuggingFace @chun121.
- Share examples of characters you've created with this model.
May your characters speak with voices that feel truly alive!
Created with ❤️ by Chun