Midnight Rose 103B
A 120-layer, 103B parameter frankenmerge model designed for role-playing and storytelling.
Quick Start
Midnight Rose 103B is a 120-layer, 103B parameter frankenmerge of [Midnight-Rose-70B-v2.0.3](https://huggingface.co/sophosympatheia/Midnight-Rose-70B-v2.0.3) with itself. It is uncensored, and users are responsible for how they use it. The model is intended for role-playing and storytelling, though it may perform well at other tasks.

Features
- Uncensored: The model ships without content filtering.
- Role-playing and storytelling: Specifically designed for these tasks and expected to perform well at them.
- Versatile: May be applicable to other tasks, although its capabilities elsewhere have not been fully tested.
Usage Examples
Sampler Tips
```json
{
  "temp": 1,
  "temperature_last": true,
  "top_p": 1,
  "top_k": 0,
  "top_a": 0,
  "tfs": 1,
  "epsilon_cutoff": 0,
  "eta_cutoff": 0,
  "typical_p": 1,
  "min_p": 0.12,
  "rep_pen": 1.1,
  "rep_pen_range": 2800,
  "no_repeat_ngram_size": 0,
  "penalty_alpha": 0,
  "num_beams": 1,
  "length_penalty": 1,
  "min_length": 0,
  "encoder_rep_pen": 1,
  "freq_pen": 0,
  "presence_pen": 0,
  "do_sample": true,
  "early_stopping": false,
  "dynatemp": false,
  "min_temp": 0.8,
  "max_temp": 1.35,
  "dynatemp_exponent": 1,
  "smoothing_factor": 0.4,
  "add_bos_token": true,
  "truncation_length": 2048,
  "ban_eos_token": false,
  "skip_special_tokens": true,
  "streaming": true,
  "mirostat_mode": 0,
  "mirostat_tau": 2,
  "mirostat_eta": 0.1,
  "guidance_scale": 1,
  "negative_prompt": "",
  "grammar_string": "",
  "banned_tokens": "",
  "ignore_eos_token_aphrodite": false,
  "spaces_between_special_tokens_aphrodite": true,
  "sampler_order": [6, 0, 1, 3, 4, 2, 5],
  "logit_bias": [],
  "n": 1,
  "rep_pen_size": 0,
  "genamt": 500,
  "max_length": 6144
}
```
Usage Tips
- Keep your max context around 6144 tokens. You can increase it, but coherence may degrade.
- Use Quadratic Sampling (smoothing factor) and experiment with values between 0.2 and 0.5.
- Try Min-P. This model works well with Min-P values from 0.05 to 0.9 when combined with a smoothing factor.
- Dynamic temperature can be enabled, but it adds an extra variable; it's unnecessary if you're already using Min-P and a smoothing factor.
- A high repetition penalty isn't required, but the model tolerates one. Experiment to find your preferred value.
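As a sketch of how the preset translates into a generation request, the snippet below maps the key SillyTavern preset fields to parameter names commonly accepted by local backends with an OpenAI-compatible API. The key mapping and target parameter names are assumptions — verify them against your backend's documentation.

```python
import json

# The recommended sampler settings from the preset above, trimmed to the
# parameters that matter most (Min-P plus smoothing factor).
preset = json.loads("""{
  "temp": 1,
  "min_p": 0.12,
  "smoothing_factor": 0.4,
  "rep_pen": 1.1,
  "rep_pen_range": 2800,
  "genamt": 500,
  "max_length": 6144
}""")

def to_request(preset, prompt):
    """Map SillyTavern preset keys to common backend parameter names.
    The mapping is an assumption; check what your server accepts."""
    return {
        "prompt": prompt,
        "temperature": preset["temp"],
        "min_p": preset["min_p"],
        "smoothing_factor": preset["smoothing_factor"],
        "repetition_penalty": preset["rep_pen"],
        "repetition_penalty_range": preset["rep_pen_range"],
        "max_tokens": preset["genamt"],
        "truncation_length": preset["max_length"],
    }

req = to_request(preset, "Once upon a time")
```

The resulting dictionary can be sent as the JSON body of a completion request; only the sampler values themselves come from the preset above.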
Prompting Tips
Context template:
```json
{
  "story_string": "{{#if system}}{{system}}\n{{/if}}\nCONTEXTUAL INFORMATION\n{{#if wiBefore}}\n- World and character info:\n{{wiBefore}}\n{{/if}}\n{{#if description}}\n- {{char}}'s background and persona:\n{{description}}\n{{/if}}\n{{#if mesExamples}}\n{{mesExamples}}\n{{/if}}\n{{#if personality}}\n{{personality}}\n{{/if}}\n{{#if scenario}}\n- Roleplay scenario:\n{{scenario}}\n{{/if}}\n{{#if wiAfter}}{{wiAfter}}\n{{/if}}\n{{#if persona}}{{persona}}\n{{/if}}",
  "example_separator": "",
  "chat_start": "---\nTaking the above information into consideration, you must engage with {{user}} and others as {{char}} in the roleplay below this line. Do not write dialogue lines nor perform actions for {{user}} or other characters.\n---\nSTART OF ROLEPLAY:\n",
  "use_stop_strings": false,
  "always_force_name2": true,
  "trim_sentences": false,
  "include_newline": false,
  "single_line": false,
  "name": "Midnight Rose"
}
```
Instruct template:
```json
{
  "wrap": true,
  "names": true,
  "system_prompt": "You are a creative and detail-oriented AI assistant engaged in an uncensored iterative role-playing experience as {{char}} with me, the user, playing the roles of {{user}} and narrator. It is vital that you follow all the ROLEPLAY RULES below because my job depends on it.\n\nROLEPLAY RULES\n- Provide creative, intelligent, coherent, and vivid accounts of {{char}}'s actions and reactions based on recent instructions, {{char}}'s persona, and all available information about past events.\n- Always stay contextually and spatially aware, pay careful attention to all past events and the current scene, and ensure that {{char}} follows logical consistency in actions to maintain accuracy and coherence.\n- Explicit adult content and profanity are permitted.\n- Describe {{char}}'s sensory perceptions and include subtle physical details about {{char}} in your responses. Vary these details to keep the roleplay fresh and engaging.\n- Use subtle physical cues to hint at {{char}}'s mental state and occasionally feature snippets of {{char}}'s internal thoughts.\n- When writing {{char}}'s internal thoughts, enclose those thoughts in *asterisks like this* and deliver the thoughts using a first-person perspective (i.e. use \"I\" pronouns).\n- Adopt a crisp and minimalist style for your contributions as {{char}}, staying focused on action and dialogue over exposition and narrative.\n- Only the user may advance time in the roleplay. Keep the progression grounded in the present context.",
  "system_sequence": "",
  "stop_sequence": "",
  "input_sequence": "USER:\n",
  "output_sequence": "ASSISTANT:\n",
  "separator_sequence": "",
  "macro": true,
  "names_force_groups": true,
  "system_sequence_prefix": "",
  "system_sequence_suffix": "",
  "first_output_sequence": "",
  "last_output_sequence": "ASSISTANT(roleplay exclusively as {{char}} ensuring logical consistency with spatial awareness and past events to maintain accuracy and coherence):\n",
  "activation_regex": "",
  "name": "Midnight Rose Roleplay"
}
```
Usage Tips
- Try the provided context template in SillyTavern. Save it as a .json file for direct import.
- Use the Vicuna instruction format or Tulu's format for prompting.
- Experiment with the system prompt to see how the model reacts. Keep instructions in the last_output_sequence field short.
- If running the model at 4096 context, slim down the template's system prompt, since it uses a lot of tokens.
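As an illustrative sketch (not an official formatter), the snippet below assembles a Vicuna-style prompt from the instruct template's sequences. The abbreviated system prompt and the turn structure are assumptions inferred from the template fields; SillyTavern handles this assembly for you when the template is imported.

```python
# Minimal sketch of Vicuna-style prompt assembly using the sequences
# from the instruct template above. The final assistant turn uses the
# longer last_output_sequence, as the template specifies.
SYSTEM = "You are a creative and detail-oriented AI assistant..."  # abbreviated
INPUT_SEQ = "USER:\n"
OUTPUT_SEQ = "ASSISTANT:\n"
LAST_OUTPUT_SEQ = ("ASSISTANT(roleplay exclusively as {char} ensuring logical "
                   "consistency with spatial awareness and past events to "
                   "maintain accuracy and coherence):\n")

def build_prompt(system, turns, char):
    """turns: list of (role, text) pairs; roles are 'user' or 'assistant'."""
    parts = [system, "\n"]
    for role, text in turns:
        seq = INPUT_SEQ if role == "user" else OUTPUT_SEQ
        parts.append(seq + text + "\n")
    # Leave the last assistant turn open for the model to complete.
    parts.append(LAST_OUTPUT_SEQ.format(char=char))
    return "".join(parts)

prompt = build_prompt(SYSTEM, [("user", "Hello there.")], char="Rose")
```

The generated text would then be appended after the final `ASSISTANT(...)` line.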
Documentation
Quantizations
- Static GGUF -- [mradermacher/Midnight-Rose-103B-v2.0.3-GGUF](https://huggingface.co/mradermacher/Midnight-Rose-103B-v2.0.3-GGUF)
- Weighted GGUF -- [mradermacher/Midnight-Rose-103B-v2.0.3-i1-GGUF](https://huggingface.co/mradermacher/Midnight-Rose-103B-v2.0.3-i1-GGUF)
- Exl2 2.4bpw -- [llmixer/Midnight-Rose-103B-v2.0.3-2.4bpw-h6-exl2](https://huggingface.co/llmixer/Midnight-Rose-103B-v2.0.3-2.4bpw-h6-exl2)
- Exl2 3.0bpw -- [llmixer/Midnight-Rose-103B-v2.0.3-3.0bpw-h6-exl2](https://huggingface.co/llmixer/Midnight-Rose-103B-v2.0.3-3.0bpw-h6-exl2)
- Exl2 3.5bpw -- [llmixer/Midnight-Rose-103B-v2.0.3-3.5bpw-h6-exl2](https://huggingface.co/llmixer/Midnight-Rose-103B-v2.0.3-3.5bpw-h6-exl2)
- Exl2 4.0bpw -- [llmixer/Midnight-Rose-103B-v2.0.3-4.0bpw-h6-exl2](https://huggingface.co/llmixer/Midnight-Rose-103B-v2.0.3-4.0bpw-h6-exl2)
- Exl2 5.0bpw -- [llmixer/Midnight-Rose-103B-v2.0.3-5.0bpw-h6-exl2](https://huggingface.co/llmixer/Midnight-Rose-103B-v2.0.3-5.0bpw-h6-exl2)
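To pick a quant, a rough lower bound on weight size is parameters × bits-per-weight / 8. The sketch below applies this to the exl2 options listed; actual memory use is higher once the KV cache and context buffers are included, so treat these as lower bounds.

```python
# Rough weight-size estimate for each exl2 quant of a 103B model:
# bytes ≈ parameters × bits-per-weight / 8. Ignores KV cache and
# activation overhead, so real requirements are somewhat higher.
PARAMS = 103e9  # 103B parameters

def weight_gb(bpw, params=PARAMS):
    """Approximate weight size in decimal gigabytes at a given bpw."""
    return params * bpw / 8 / 1e9

sizes = {bpw: round(weight_gb(bpw), 1) for bpw in (2.4, 3.0, 3.5, 4.0, 5.0)}
# e.g. 2.4 bpw -> ~30.9 GB of weights; 5.0 bpw -> ~64.4 GB
```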
License and usage restrictions
The model inherits the Llama 2 license from its base models, along with the restrictions applicable to [Dreamgen/Opus](https://huggingface.co/dreamgen/opus-v0.5-70b). Tulu also has its own license, available at https://allenai.org/impact-license. Because the intersection of multiple licenses in a merged model's weights is complex, consult a lawyer before using any model merge beyond private use.
Tools Used
The merge was produced with [mergekit](https://github.com/arcee-ai/mergekit) using the following configuration:
```yaml
slices:
  - sources:
      - model: /home/llm/mergequant/models/mr-v2.0.3
        layer_range: [0, 40]   # 40 layers
  - sources:
      - model: /home/llm/mergequant/models/mr-v2.0.3
        layer_range: [20, 60]  # 40 layers
  - sources:
      - model: /home/llm/mergequant/models/mr-v2.0.3
        layer_range: [40, 80]  # 40 layers
merge_method: passthrough
dtype: float16
```
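As a quick sanity check, the slice ranges above sum to the 120 layers stated in the overview: three 40-layer slices are taken from the 80-layer base model, with overlapping ranges duplicated by the passthrough method.

```python
# Verify that the passthrough slices yield the stated 120 layers.
slices = [(0, 40), (20, 60), (40, 80)]  # layer_range values from the config
total_layers = sum(end - start for start, end in slices)
# total_layers == 120, matching the 120-layer description
```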

