# Loyal-Toppy-Bruins-Maid-7B-DARE
This repository hosts FP16 files for Loyal-Toppy-Bruins-Maid-7B, a 7B model designed for engaging role-playing (RP) with strong character card adherence and high intelligence.

## Quick Start
This README provides detailed information about the Loyal-Toppy-Bruins-Maid-7B model, including its description, merging details, and prompt templates.
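To get started, the FP16 weights load with the standard transformers API. A minimal sketch; the `model_id` below is a placeholder for this repository's actual path, and `device_map="auto"` assumes accelerate is installed:

```python
# Minimal sketch: load the FP16 weights with Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Loyal-Toppy-Bruins-Maid-7B-DARE"  # placeholder: use this repo's actual id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # the repository ships FP16 weights
    device_map="auto",          # requires accelerate; remove for CPU-only loading
)
```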
## Features
- Engaging RP: Delivers engaging role-playing interactions while strictly adhering to character cards.
- Strong Foundation: Built upon high-performing models like [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), which has shown excellent performance in the LMSYS Chatbot Arena, even outperforming GPT-3.5-Turbo-1106.
- Diverse Data Sources: Incorporates data from several models trained on different datasets, such as PIPPA, rpbuild, and LimaRP.
- Advanced Merging Method: Merged using the DARE ties method, with weights summing to 1.2 and high densities (0.5-0.6).
## Documentation
### Description
This repository hosts FP16 files for Loyal-Toppy-Bruins-Maid-7B, a 7B model aimed at having engaging RP with solid character card adherence and being a smart cookie at the same time.
Its foundation is [Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha), notable for its performance in the LMSYS Chatbot Arena, even surpassing GPT-3.5-Turbo-1106. The model incorporates [rwitz/go-bruins-v2](https://huggingface.co/rwitz/go-bruins-v2), a [Q-bert/MetaMath-Cybertron-Starling](https://huggingface.co/Q-bert/MetaMath-Cybertron-Starling) derivative tuned on Alpaca-format RP data.
The other foundational model is [chargoddard/loyal-piano-m7](https://huggingface.co/chargoddard/loyal-piano-m7), chosen for its strong RP performance and Alpaca-format training, with a diverse dataset including PIPPA, rpbuild, and LimaRP.
[Undi95/Toppy-M-7B](https://huggingface.co/Undi95/Toppy-M-7B), known for its creativity, brings in useful RP data from various sources. It ranks first among 7B models on OpenRouter for a good reason.
[NeverSleep/Noromaid-7b-v0.1.1](https://huggingface.co/NeverSleep/Noromaid-7b-v0.1.1), a Mistral finetune built on RP data not present in the other models, was added both for that unique dataset and for being a well-regarded RP model.
The models were merged using the DARE ties method, targeting a total absolute weight of 1.2 with high densities (0.5-0.6), as discussed in the MergeKit GitHub repo.
Currently, this model ranks at the top of my personal RP unit-test benchmark and scored a very solid 20 on [lilblam's LLM Logic Test](https://docs.google.com/spreadsheets/d/1NgHDxbVWJFolq8bLvLkuPWKC7i_R6I6W/edit#gid=1278290632). My first impressions of it for RPing are very good but, admittedly, this model came out of the oven today, so I haven't played with it too much.
### The sauce
```yaml
models: # Top-Loyal-Bruins-Maid-DARE-7B_v2
  - model: mistralai/Mistral-7B-v0.1
    # no parameters necessary for base model
  - model: rwitz/go-bruins-v2 # MetamathCybertronStarling base
    parameters:
      weight: 0.5
      density: 0.6
  - model: chargoddard/loyal-piano-m7 # Pull in some PIPPA/LimaRP/Orca/rpguild
    parameters:
      weight: 0.5
      density: 0.6
  - model: Undi95/Toppy-M-7B
    parameters:
      weight: 0.1
      density: 0.5
  - model: NeverSleep/Noromaid-7b-v0.1.1
    parameters:
      weight: 0.1
      density: 0.5
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
  normalize: false
  int8_mask: true
dtype: bfloat16
```
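To reproduce the merge, a config like the one above can be executed with MergeKit. A minimal sketch, assuming a recent mergekit install whose Python entry points (`MergeConfiguration`, `run_merge`, `MergeOptions`) follow mergekit's README; the file paths are illustrative, and the `mergekit-yaml` CLI is an equivalent one-liner:

```python
# Minimal sketch: execute the YAML recipe above with mergekit's Python API.
# Assumes a recent mergekit release; `mergekit-yaml config.yml ./out` does the same.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("loyal-toppy-bruins-maid.yml", encoding="utf-8") as fp:  # illustrative path
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    "./Loyal-Toppy-Bruins-Maid-7B-DARE",  # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```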
### Prompt template: Custom format, or Alpaca
#### Custom format:
I found the best SillyTavern results from using the Noromaid template.
SillyTavern config files: Context, Instruct.
Otherwise, I tried to ensure that all of the underlying merged models were Alpaca-favored.
#### Alpaca:
```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
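To make the template concrete, here is a minimal sketch of filling it in and generating a reply, reusing the `model` and `tokenizer` from the Quick Start example; the instruction text and sampling settings are illustrative, not tuned recommendations:

```python
# Fill the Alpaca template and generate; assumes `model` and `tokenizer`
# were loaded as in the Quick Start sketch above.
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Response:\n"
)

prompt = ALPACA_TEMPLATE.format(
    instruction="Greet the user in character as a grumpy tavern keeper."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.8,  # arbitrary starting point, not a tuned value
)
# Decode only the newly generated tokens.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```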
## License
This project is licensed under the CC-BY-NC-4.0 license.