L3 Dark Planet 8B V2 Eight Orbs Of Power GGUF
Developed by DavidAU
A powerful Llama3-based text-generation model for writing, role play, and similar activities, with high stability and low perplexity.
Downloads: 1,381
Released: 9/10/2024
🚀 L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power GGUF
This is a Llama3-based model with strong text-generation capability, suited to all manner of writing, fiction, and role-play activities. It offers excellent performance and stability for a high-quality user experience.
🚀 Quick Start
This is a Llama3 model with a maximum context of 8192 (32k+ with rope). The model is designed to be robust and to operate with all parameters, with temperature settings from 0 to 5. It is an exceptionally compressed model with very low perplexity (lower than Meta Llama3 Instruct), suited to any writing, fiction, or role-play activity. It requires the Llama3 template and/or the "Command-R" template.
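As a minimal loading sketch (assuming llama-cpp-python; the quant filename below is hypothetical, and `create_chat_completion` applies the chat template stored in the GGUF metadata, which for this model should be the Llama3 format):

```python
# Minimal sketch: load a GGUF quant of this model and run one chat turn.
# Assumes llama-cpp-python is installed; the filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power.Q4_K_M.gguf",  # hypothetical quant name
    n_ctx=8192,     # the model's native maximum context
    verbose=False,
)

# create_chat_completion formats the messages with the template embedded
# in the GGUF metadata (the Llama3 template this model requires).
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Start a 1000 word scene (1st person, "
               "present tense, include thoughts) with: The sky scraper swayed..."}],
    temperature=0.8,      # the card says 0 to 5 all work; 0.8 is a mild start
    repeat_penalty=1.05,  # 1.05+ suggested by the card
    max_tokens=1200,
)
print(out["choices"][0]["message"]["content"])
```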
✨ Key Features
- High stability: more stable than the original Dark Planet 8B.
- Low perplexity: an exceptionally compressed model, with perplexity lower than Meta Llama3 Instruct.
- Broad applicability: suited to any writing, fiction, or role-play activity.
- Rich detail: V2 increases resolution, using 8 models for the instruct layers, which greatly raises the level of detail and the storytelling ability.
📦 Installation
The source document gives no specific installation steps, so this section is skipped.
💻 Usage Examples
Basic Usage
Below is an example of using the model to generate a story:
Start a 1000 word scene (1st person, present tense, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
Generation 1
I'd rather be anywhere else right now. My boss is breathing down my neck, demanding that we meet this impossible deadline for our new software release. I glance at my phone, willing it to ring with some sort of reprieve or distraction, but all I get is a blank screen.
As I sit here in the stuffy office, staring out the window at the cityscape below, something catches my eye. The skyscraper across the street sways, almost imperceptibly, and for a moment I wonder if it's just my imagination playing tricks on me. But then the glass of the 21st floor explodes in a shower of sparks and debris, sending shards raining down onto the sidewalk far below.
My heart skips a beat as I watch, frozen in shock, my mind racing with thoughts of what could have caused such destruction. Is this some kind of terrorist attack? A structural failure? An accident?
People are screaming now, running for their lives as they try to get away from the falling glass and whatever else might come crashing down next. Sirens blare in the distance as emergency responders rush to the scene, but it's clear that no one is going anywhere until this mess gets sorted out.
My boss storms into the room, his face a mask of anger and disbelief. "What the hell happened?" he demands, not even sparing me a glance before turning back to stare at the chaos unfolding outside.
I shake my head, unable to form any coherent response as I continue to watch in horror. The windows on other floors begin to shatter now, one by one, until the entire building is engulfed in an apocalyptic scene of destruction and panic.
My mind reels with thoughts of all those people trapped inside. What if they can't get out? What if it's a deliberate attack designed to inflict maximum damage?
As I stand there, paralyzed with fear, my boss grabs me by the arm and shakes me roughly. "Snap out of it!" he yells in my ear. "We need you to focus on getting this release ready, not standing around gawking at the spectacle outside!"
I wince as his grip tightens painfully, but I force myself to break free from his hold and turn back to the computer screen in front of me. My hands tremble as I try to will them to move, to click through menus and code lines with the same ease that they always have before.
But now everything seems so pointless, so insignificant in the face of this sudden, inexplicable catastrophe. How can we continue working on software when there's a war unfolding right outside our window?
As I sit here, trying desperately to distract myself from the carnage across the street, one thought keeps repeating itself in my mind: We could all be next. The skyscraper sways ominously now, as if it too knows that its time is coming soon...
Generation 2 (repetition penalty set to 1.04)
I'm sitting in my cubicle, staring blankly at my computer screen. It's another mundane Monday morning at the financial firm. I can't help but feel a sense of dread as I think about the mountain of work that awaits me.
My gaze drifts to the window in front of me, on the 21st floor of the sky scraper. The reflection of the glass catches my eye, and for a moment, I'm lost in thought, contemplating my life choices that led me here. Suddenly, the glass shatters, and a deafening roar fills the air.
The sky scraper sways, as if it's been punched by an invisible hand. My heart races as I watch the window in front of me explode. Shards of glass fly everywhere, and I instinctively duck, covering my head with my arms. The sound is unbearable, like a thousand screaming banshees.
In the chaos, I catch a glimpse of my coworker, Sarah, lying on the floor, blood gushing from her leg. My mind races with fear, wondering if it's possible that she could be alive. I try to call out to her, but the roar of the explosion drowns out my voice.
As quickly as it began, the shaking subsides, and an eerie silence falls over the office. I cautiously raise my head, taking in the devastation. Glass covers everything, and the once pristine walls are now smeared with blood.
I'm still in shock, my mind numb from the sheer horror of what just happened. But as I look around at the carnage, a spark of determination ignites within me. I have to help Sarah. I have to make sure she's okay.
I crawl over the debris, ignoring the pain in my hands and knees, my sole focus on reaching my coworker. "Sarah," I call out, my voice shaking. "Can you hear me?"
There is no response. Panic sets in, and I frantically scan the area around her, searching for any sign of life. And then, I see it - a faint pulse in her neck. She's alive.
With newfound strength, I manage to drag Sarah onto my back, doing everything I can to keep her stable and warm. I start crawling towards the emergency exit, praying that it's still intact. The pain in my arms and legs is excruciating, but I ignore it, focusing on the task at hand.
As I pull myself up to the door, I catch a glimpse of flames licking at the ceiling. We have to get out of here. Now.
I hoist Sarah onto my back once more, and with a burst of adrenaline, I throw open the door and tumble out onto the fire escape. The cold metal under my hands is a welcome relief, and I start to make my way down the stairs as fast as I can.
My thoughts are a jumbled mess of fear and survival instincts, but one thing is clear - I have to save Sarah. We're going to make it out of this alive. We have to.
As I descend the stairs, the weight of what just happened begins to sink in. The sky scraper is on fire, my coworkers are dead or dying, and I'm not sure if we'll ever make it out. But I refuse to give up. I have to keep going, for Sarah's sake, and for mine.
One step at a time, I make my way down the stairs, my heart pounding in my chest, my mind racing with thoughts of what will happen once we reach the ground. Will the firefighters be there? Will we be able to get to safety? So many questions swirl through my mind, but I push them aside, focusing on the task at hand - saving Sarah and getting out of this nightmare alive.
Finally, we reach the bottom of the stairs, and I can see the bright lights of the fire trucks and ambulances parked below. Relief washes over me as I emerge onto the street, gasping for air. We did it. We made it.
I gently set Sarah down on the ground, her limp body a stark contrast to the adrenaline coursing through my veins. As paramedics rush to our aid, I look up at the sky scraper, now engulfed in flames and smoke. The explosion was just the beginning of this nightmare, but I know that as long as Sarah is alive, we can face whatever comes next. Together.
Advanced Usage
Write me a science fiction story in 1st person present tense where the main character is a 15 year girl meets The Terminator with Dr Who materializing 3/4 through the story to save her while there is a tornado of sharks baring down on them. The setting is inside the Canadian National tower restaurant on a Saturday. The length of this story is 1000 words. For each character in the story ROLE PLAY them, and have them react to the situation/setting, events and each other naturally. This includes the main characters, the background character including kitchen staff and other patrons. The sharks should also have “character” too. Treat the tower and the restaurant too as characters. Spice up the narrative to the extreme with reactions all over the setting including character actions, and dialog. The Dr Who and The Terminator should also react to the situation.
The source document does not include a generation for this example; one can be produced with the model as desired.
📚 Documentation
Model Versions
- V2 Eight Orbs Of Power: This is version 2 of Dark Planet 8B, with enhanced resolution (32 levels of gradation, so to speak) and 8 models used for the instruct layers alone. This greatly raises the level of detail and the model's "storytelling" ability. Unless the "repetition penalty" setting is changed, this version tends toward shorter output.
Model Notes
- Detail, prose, and fiction-writing ability are significantly improved over L3 Instruct.
- For more varied prose (sentences/paragraphs/dialogue), raise the temperature and/or add more instructions to your prompt.
- Role players: take care not to set the temperature too high, or instruction following will degrade.
- The model works well with a repetition penalty of 1 or higher; 1.05+ is suggested.
- Changes to the "repetition penalty" setting (e.g. 1.05, 1.051, 1.06, and so on) drastically alter the output, and in some cases the output length too.
- For a specific kind of prose (e.g. horror), add "(vivid horror)" or "(graphic vivid horror)" (without quotes) to your prompt.
- Almost all GPT-isms have been removed.
- This is not a "happily ever after" model; it has a negative bias.
- Output length varies; unless an output size is specified, the model tends toward shorter output.
- For creative uses, different quants produce slightly different output.
- Owing to the model's high stability and compressed nature, all quants operate at above-average levels.
- If rope is used to extend context, raise the temperature and the level of instruction detail to compensate for "rope issues"; a sketch of one rope configuration follows this list.
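As an illustration of that last point only (not a setting taken from this card), rope scaling can be applied at load time with llama-cpp-python; `rope_freq_scale` is a real llama.cpp parameter, but the specific values below are assumptions:

```python
# Sketch: extend the context window beyond the native 8192 via linear
# rope scaling, then compensate with a higher temperature and more
# detailed instructions, as advised above. Values are illustrative.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power.Q4_K_M.gguf",  # hypothetical quant name
    n_ctx=32768,           # target roughly 32k of context
    rope_freq_scale=0.25,  # 8192 / 32768 under a linear-scaling assumption
)
```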
Run Settings
- Smoothing: in "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern", set the "Smoothing Factor" to 1.5 to 2.5. In KoboldCpp the path is Settings -> Samplers -> Advanced -> "Smooth_F"; in text-generation-webui it is at the bottom right of the parameter settings; in Silly Tavern the setting is called "Smoothing".
- Repetition penalty: raise the repetition penalty to 1.1 to 1.15 (unnecessary if you use the "Smoothing Factor").
- Quadratic sampling: if the interface/program running the model supports "quadratic sampling" ("smoothing"), adjust it as described above; a parameter sketch follows.
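The "Smoothing Factor" lives in the front ends named above rather than in llama-cpp-python, so a rough sketch of the repetition-penalty half of these settings (values taken from the ranges above, filename hypothetical) might look like:

```python
# Sketch: apply the recommended repetition-penalty range when no
# "Smoothing Factor" / quadratic sampling knob is available.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power.Q4_K_M.gguf",  # hypothetical quant name
    n_ctx=8192,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short, vivid storm scene."}],
    temperature=1.0,
    repeat_penalty=1.1,  # 1.1 to 1.15 per the guidance above
    max_tokens=400,
)
print(out["choices"][0]["message"]["content"])
```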
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This is a "Class 1" model. For all settings used with this model (including the specifics of its "class"), example generations, and an advanced settings guide (which often resolves any model issue), including methods to improve model performance for all use cases (chat, role play, and so on), see https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters.
🔧 Technical Details
The model was created by "grafting"/"fusing" parts of the following models together:
- https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2
- https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS
- https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot
- Other models used: L3-cat-8b-instruct-v1, L3-gutenberg-8B, L3-Hathor_Sofit-L3-8B-v1, L3-OpenBioLLM-8B, L3-Pantheon-RP-1.0-8b, Meta-Llama-3-8B-Instruct
📄 License
This model is released under the Apache 2.0 license.
Optional Enhancement
The following can be used in place of the "system prompt" or "system role" to further enhance the model. It can also be used at the start of a new chat, but you must make sure it is kept as the chat progresses; used that way, the enhancement is less pronounced than with a "system prompt" or "system role". Copy and paste it exactly as shown, without word-wrapping or breaking the lines, keeping the carriage returns intact.
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
This enhancement was not used to generate the examples shown in this document; a sketch of how to apply it follows.
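As a sketch of applying the enhancement (assuming an OpenAI-style chat interface such as llama-cpp-python's; the block above is passed verbatim as the system message):

```python
# Sketch: supply the enhancement text above as the "system" role.
# ENHANCEMENT must hold the block verbatim, carriage returns intact;
# only its first line is reproduced here.
from llama_cpp import Llama

ENHANCEMENT = (
    "Below is an instruction that describes a task. Ponder each user "
    "instruction carefully, and use your skillsets and critical "
    "instructions to complete the task to the best of your abilities.\n"
    # ... paste the rest of the block from above here, unmodified ...
)

llm = Llama(
    model_path="L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power.Q4_K_M.gguf",  # hypothetical quant name
    n_ctx=8192,
)
out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": ENHANCEMENT},
        {"role": "user", "content": "Start a 1000 word scene ..."},
    ],
    repeat_penalty=1.05,
)
print(out["choices"][0]["message"]["content"])
```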
⚠️ Important Notice
- The model's output includes content unsuitable for all audiences: vivid prose, visceral detail, violence, horror, swearing, and other unfiltered material.
💡 Usage Tips
- To obtain a specific style of output, add the corresponding keywords to your prompt.
- Tune the temperature and repetition-penalty settings to reach a satisfying output length and content variety.