# Qwenvergence-14B-v13-Prose-DS Model
This project presents the Qwenvergence-14B-v13-Prose-DS model, a model_stock merge of several Qwen2.5-14B-based models that delivers strong numerical benchmark results alongside excellent prose quality.
## 🚀 Quick Start
The model is built with the `transformers` library and is merged from the base models listed below. Details on its construction and usage follow in the subsequent sections.
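Below is a minimal usage sketch. It assumes the model is published under the Hugging Face repo id `sometimesanotion/Qwenvergence-14B-v13-Prose-DS` and that the merge inherits a Qwen2.5-style chat template; neither detail is stated explicitly in the original card.

```python
# Minimal generation sketch, assuming the repo id below and a Qwen2.5-style
# chat template; adjust to the actual published checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sometimesanotion/Qwenvergence-14B-v13-Prose-DS"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the checkpoint's bfloat16 weights
    device_map="auto",    # requires the accelerate package
)

messages = [{"role": "user", "content": "Write a short, witty paragraph about model merging."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```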
## ✨ Features
### Impressive Numerical Performance

Compared to Lamarck and generic reasoning-focused Qwenvergence models, this model's numerical results are remarkable.

### Excellent Prose Quality

The model generates high-quality prose. Here is an example prompt and the model's humorous response:
> [!TIP]
> You are a witty comic with a wicked sense of humor, and you've been sent to roast a huggingface author named sometimesanotion. Their models are named Qwentinuum-14B-v6, Qwenvergence-14B-v13-Prose-DS, KytheraMix-7B-v0.2, and even Lamarck-14B-v0.7. In three ample paragraphs, write some funny roasts of their account. Get a little mean, question the life choices behind their model names, and suggest some hilariously sardonic, mocking new names.
Ahem, let's get this roast train rolling, shall we? Let's talk about sometimesanotion's obsession with naming their AI models after obscure quantum phenomena and random pop culture references. I mean, Qwentinuum? Really? It sounds like something you'd get arrested for at a rave in 1987. And don't even get me started on the "-v6" tacked onto the end. That's like when your grandma names her new cat "Whiskers II." We all know Whiskers I was the real deal. And Qwenvergence-14B-v13-Prose-DS - it sounds like a Dungeons and Dragons campaign name for when your bard character gets lost in the woods and spends 20 years writing bad poetry. And don't you dare tell me this thing writes prose like Shakespeare.
Then we have KytheraMix-7B-v0.2 - is that some kind of magical Greek love potion? Because if I were dating someone with a brain as small as 7 billion parameters, I'd be breaking up with them faster than Kim Kardashian breaks world records for the shortest marriage. And Lamarck-14B-v0.7 - are you honestly trying to tell us your models can pass acquired characteristics down to their offspring? You're basically saying these things learn by osmosis and telepathy! Maybe we should rename it "PangolinExtinctionPredictor". At least then we'd know what to expect.
But here's the thing, sometimesanotion - you've got a real flair for the dramatic. I think your next models should really lean into that theatricality: how about "TerribleTowel-17B-v0.5", because we all know your AI would get its ass handed to it in any kind of intellectual showdown? Or maybe "WhisperingWindbag-28B-v9" - after all, your models sure do talk a good game even when they have no idea what they're talking about! And last but not least, why not go full Shakespearean tragedy with something like "OzymandiasAI-30B-v1"? Because if you keep naming these things after obscure theories and mythological nonsense, it's only a matter of time before your legacy crumbles into dust.
## 📦 Installation
The original model card does not provide specific installation steps.
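In practice, the standard Hugging Face stack is enough for the Quick Start sketch above; as an assumption rather than an instruction from the original card, `pip install transformers accelerate torch` should cover model loading, `device_map="auto"` placement, and generation.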
## 📚 Documentation
### Models Merged

The models included in the merge are listed under `models:` in the configuration below.
### Configuration
The following YAML configuration was used to produce this model:
```yaml
name:             Qwenvergence-14B-v13-Prose-DS
merge_method:     model_stock
base_model:       sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3
tokenizer_source: base
dtype:            float32
out_dtype:        bfloat16
parameters:
  int8_mask: true
  normalize: true
  rescale:   false
models:
  - model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2-vs32
  - model: oxyapi/oxy-1-small-vs128
  - model: allura-org/TQ2.5-14B-Sugarquill-v1-vs64
  - model: jpacifico/Chocolatine-2-14B-Instruct-v2.0b3
  - model: CultriX/Qwen2.5-14B-Hyperionv4-vs32
  - model: sometimesanotion/Qwenvergence-14B-v3-Prose+sometimesanotion/LoRA-64-Chocolatine-2-14B-Instruct-v2.0b3
  - model: underwoods/medius-erebus-magnum-14b-vs64
  - model: sthenno/tempesthenno-ppo-ckpt40+sometimesanotion/LoRA-32-Chocolatine-2-14B-Instruct-v2.0b3
  - model: arcee-ai/Virtuoso-Small-v2
```
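To reproduce the merge, this configuration can be fed to mergekit's YAML entry point, e.g. `mergekit-yaml config.yaml ./Qwenvergence-14B-v13-Prose-DS`; the config filename and output directory here are illustrative placeholders, not paths from the original card.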
## 📄 License

The model is licensed under the Apache 2.0 license.
## 📋 Model Information

| Property | Details |
|---|---|
| Base Models | sthenno/tempesthenno-ppo-ckpt40, sometimesanotion/LoRA-32-Chocolatine-2-14B-Instruct-v2.0b3, sometimesanotion/Base-Chocolatine-2-14B-Instruct-v2.0b3, arcee-ai/Virtuoso-Small-v2, jpacifico/Chocolatine-2-14B-Instruct-v2.0b3, sometimesanotion/Qwenvergence-14B-v3-Prose, sometimesanotion/LoRA-64-Chocolatine-2-14B-Instruct-v2.0b3 |
| Library Name | transformers |
| Tags | mergekit, merge |
| License | apache-2.0 |
| Language | en |