L3 Dark Planet 8B V2 Eight Orbs Of Power GGUF
Developed by DavidAU
A powerful Llama3-based text-generation model for writing, role-play, and similar activities, offering high stability and low perplexity.
Downloads: 1,381
Released: 9/10/2024
🚀 L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power GGUF
This is a Llama3-based model with strong text-generation capability, suited to all kinds of writing, fiction, and role-play. It delivers excellent performance and stability for a high-quality user experience.
🚀 Quick Start
This is a Llama3 model with a maximum context length of 8192 (32k+ when using rope). The model is designed to be robust and operates with all parameters, with temperature settings from 0 to 5. It is an exceptionally compressed model with very low perplexity (lower than Meta Llama3 Instruct), suited to any writing, fiction, or role-play activity. It requires the Llama3 template and/or the "Command-R" template.
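As a minimal sketch of the above, assuming llama-cpp-python as the runtime (any GGUF-capable backend such as KoboldCpp works equally well); the quant filename is hypothetical:

```python
# Minimal sketch, assuming llama-cpp-python as the runtime.
# The quant filename is hypothetical -- substitute the file you actually downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="L3-Dark-Planet-8B-V2-Eight-Orbs-Of-Power.Q4_K_M.gguf",  # hypothetical
    n_ctx=8192,             # native max context (32k+ only with rope scaling)
    chat_format="llama-3",  # the card requires the Llama3 (or Command-R) template
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Start a 1000 word scene ..."}],
    temperature=0.9,   # any value from 0 to 5 is usable, per the card
    max_tokens=1200,
)
print(out["choices"][0]["message"]["content"])
```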
✨ Key Features
- High stability: more stable than the original Dark Planet 8B.
- Low perplexity: an exceptionally compressed model with perplexity below Meta Llama3 Instruct.
- Broad applicability: suited to any writing, fiction, or role-play activity.
- Rich detail: V2 enhances the resolution, using 8 models for the instruct layers, greatly raising the level of detail and storytelling ability.
📦 Installation
The source documentation gives no specific installation steps, so this section is skipped.
💻 Usage Examples
Basic Usage
Below is an example of using the model to generate a story:
Start a 1000 word scene (1st person, present tense, include thoughts) with: The sky scraper swayed, as she watched the window in front of her on the 21 floor explode...
Generation 1
I'd rather be anywhere else right now. My boss is breathing down my neck, demanding that we meet this impossible deadline for our new software release. I glance at my phone, willing it to ring with some sort of reprieve or distraction, but all I get is a blank screen.
As I sit here in the stuffy office, staring out the window at the cityscape below, something catches my eye. The skyscraper across the street sways, almost imperceptibly, and for a moment I wonder if it's just my imagination playing tricks on me. But then the glass of the 21st floor explodes in a shower of sparks and debris, sending shards raining down onto the sidewalk far below.
My heart skips a beat as I watch, frozen in shock, my mind racing with thoughts of what could have caused such destruction. Is this some kind of terrorist attack? A structural failure? An accident?
People are screaming now, running for their lives as they try to get away from the falling glass and whatever else might come crashing down next. Sirens blare in the distance as emergency responders rush to the scene, but it's clear that no one is going anywhere until this mess gets sorted out.
My boss storms into the room, his face a mask of anger and disbelief. "What the hell happened?" he demands, not even sparing me a glance before turning back to stare at the chaos unfolding outside.
I shake my head, unable to form any coherent response as I continue to watch in horror. The windows on other floors begin to shatter now, one by one, until the entire building is engulfed in an apocalyptic scene of destruction and panic.
My mind reels with thoughts of all those people trapped inside. What if they can't get out? What if it's a deliberate attack designed to inflict maximum damage?
As I stand there, paralyzed with fear, my boss grabs me by the arm and shakes me roughly. "Snap out of it!" he yells in my ear. "We need you to focus on getting this release ready, not standing around gawking at the spectacle outside!"
I wince as his grip tightens painfully, but I force myself to break free from his hold and turn back to the computer screen in front of me. My hands tremble as I try to will them to move, to click through menus and code lines with the same ease that they always have before.
But now everything seems so pointless, so insignificant in the face of this sudden, inexplicable catastrophe. How can we continue working on software when there's a war unfolding right outside our window?
As I sit here, trying desperately to distract myself from the carnage across the street, one thought keeps repeating itself in my mind: We could all be next. The skyscraper sways ominously now, as if it too knows that its time is coming soon...
Generation 2 (repetition penalty set to 1.04)
I'm sitting in my cubicle, staring blankly at my computer screen. It's another mundane Monday morning at the financial firm. I can't help but feel a sense of dread as I think about the mountain of work that awaits me.
My gaze drifts to the window in front of me, on the 21st floor of the sky scraper. The reflection of the glass catches my eye, and for a moment, I'm lost in thought, contemplating my life choices that led me here. Suddenly, the glass shatters, and a deafening roar fills the air.
The sky scraper sways, as if it's been punched by an invisible hand. My heart races as I watch the window in front of me explode. Shards of glass fly everywhere, and I instinctively duck, covering my head with my arms. The sound is unbearable, like a thousand screaming banshees.
In the chaos, I catch a glimpse of my coworker, Sarah, lying on the floor, blood gushing from her leg. My mind races with fear, wondering if it's possible that she could be alive. I try to call out to her, but the roar of the explosion drowns out my voice.
As quickly as it began, the shaking subsides, and an eerie silence falls over the office. I cautiously raise my head, taking in the devastation. Glass covers everything, and the once pristine walls are now smeared with blood.
I'm still in shock, my mind numb from the sheer horror of what just happened. But as I look around at the carnage, a spark of determination ignites within me. I have to help Sarah. I have to make sure she's okay.
I crawl over the debris, ignoring the pain in my hands and knees, my sole focus on reaching my coworker. "Sarah," I call out, my voice shaking. "Can you hear me?"
There is no response. Panic sets in, and I frantically scan the area around her, searching for any sign of life. And then, I see it - a faint pulse in her neck. She's alive.
With newfound strength, I manage to drag Sarah onto my back, doing everything I can to keep her stable and warm. I start crawling towards the emergency exit, praying that it's still intact. The pain in my arms and legs is excruciating, but I ignore it, focusing on the task at hand.
As I pull myself up to the door, I catch a glimpse of flames licking at the ceiling. We have to get out of here. Now.
I hoist Sarah onto my back once more, and with a burst of adrenaline, I throw open the door and tumble out onto the fire escape. The cold metal under my hands is a welcome relief, and I start to make my way down the stairs as fast as I can.
My thoughts are a jumbled mess of fear and survival instincts, but one thing is clear - I have to save Sarah. We're going to make it out of this alive. We have to.
As I descend the stairs, the weight of what just happened begins to sink in. The sky scraper is on fire, my coworkers are dead or dying, and I'm not sure if we'll ever make it out. But I refuse to give up. I have to keep going, for Sarah's sake, and for mine.
One step at a time, I make my way down the stairs, my heart pounding in my chest, my mind racing with thoughts of what will happen once we reach the ground. Will the firefighters be there? Will we be able to get to safety? So many questions swirl through my mind, but I push them aside, focusing on the task at hand - saving Sarah and getting out of this nightmare alive.
Finally, we reach the bottom of the stairs, and I can see the bright lights of the fire trucks and ambulances parked below. Relief washes over me as I emerge onto the street, gasping for air. We did it. We made it.
I gently set Sarah down on the ground, her limp body a stark contrast to the adrenaline coursing through my veins. As paramedics rush to our aid, I look up at the sky scraper, now engulfed in flames and smoke. The explosion was just the beginning of this nightmare, but I know that as long as Sarah is alive, we can face whatever comes next. Together.
Advanced Usage
Write me a science fiction story in 1st person present tense where the main character is a 15 year girl meets The Terminator with Dr Who materializing 3/4 through the story to save her while there is a tornado of sharks baring down on them. The setting is inside the Canadian National tower restaurant on a Saturday. The length of this story is 1000 words. For each character in the story ROLE PLAY them, and have them react to the situation/setting, events and each other naturally. This includes the main characters, the background character including kitchen staff and other patrons. The sharks should also have “character” too. Treat the tower and the restaurant too as characters. Spice up the narrative to the extreme with reactions all over the setting including character actions, and dialog. The Dr Who and The Terminator should also react to the situation.
The source documentation does not include a generation for this prompt; one can be produced with the model.
📚 Documentation
Model Version Notes
- V2 - Eight Orbs Of Power: this is version 2 of Dark Planet 8B, with enhanced resolution (arguably 32 levels of gradients); the instruct layers alone use 8 models. This greatly raises the level of detail as well as the model's "storytelling ability". Unless the "repetition penalty" setting is changed, this version tends to produce shorter output.
Model Notes
- Detail, prose, and fiction-writing ability are markedly improved compared to L3 Instruct.
- For more varied prose (sentences/paragraphs/dialog), raise the temperature and/or add more instructions to the prompt.
- Role-players: take care not to set the temperature too high, or instruction following will suffer.
- The model works well with a repetition penalty of 1 or higher; 1.05+ is suggested.
- Changes to the "repetition penalty" setting (e.g. 1.05, 1.051, 1.06, and so on) drastically alter the output, and in some cases the output length.
- For a specific type of prose (e.g. horror), add "(vivid horror)" or "(graphic vivid horror)" (without the quotes) to the prompt.
- Almost all GPT-isms have been removed.
- This is not a "happily ever after" model; it has a negative bias.
- Output length varies; unless an output size is specified, the model tends to produce shorter output.
- For creative uses, different quants will produce slightly different output.
- Because of the model's high stability and compressed nature, all quants operate at above-average levels.
- If you extend the context with rope, raise the temperature and the level of instruction detail to compensate for "rope issues".
Runtime Settings
- Smoothing: in "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern", set the "Smoothing Factor" to 1.5 to 2.5. In "KoboldCpp" the path is Settings -> Samplers -> Advanced -> "Smooth_F"; in "text-generation-webui" it is at the bottom right of the parameter settings; in "Silly Tavern" the setting is called "Smoothing".
- Repetition penalty: raise the repetition penalty to 1.1 to 1.15 (not needed if you use the "Smoothing Factor"); see the sketch after this list.
- Secondary sampling: if the interface/program running the AI model supports "secondary sampling" ("smoothing"), adjust it as described.
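A hedged illustration of the repetition-penalty route above, reusing the `llm` object from the Quick Start sketch (the "Smoothing Factor" lives in the KoboldCpp / text-generation-webui / Silly Tavern UI and has no direct keyword here):

```python
# Sketch of the repetition-penalty settings, reusing `llm` from the Quick Start sketch.
# Smoothing Factor is a UI-side sampler in KoboldCpp / text-generation-webui /
# Silly Tavern; when it is not available, raise repeat_penalty instead, as above.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Start a 1000 word scene ..."}],
    temperature=1.2,     # raise for more varied prose, per the notes above
    repeat_penalty=1.1,  # 1.1-1.15 recommended when Smoothing Factor is not used
    max_tokens=1200,
)
```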
Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers
This is a "Class 1" model. For all settings used with this model (including specifics for its "class"), example generations, and an advanced settings guide (which often resolves any model issues), including methods to improve model performance for all use cases (chat, role-play, and so on), see [https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters].
🔧 Technical Details
This model was created by "grafting"/"fusing" parts of the following models together:
- [https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2]
- [https://huggingface.co/NeverSleep/Llama-3-Lumimaid-8B-v0.1-OAS]
- [https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.V-Blackroot]
- Other models used: L3-cat-8b-instruct-v1, L3-gutenberg-8B, L3-Hathor_Sofit-L3-8B-v1, L3-OpenBioLLM-8B, L3-Pantheon-RP-1.0-8b, Meta-Llama-3-8B-Instruct
📄 License
The model is released under the Apache 2.0 license.
Optional Model Enhancement Settings
The following can replace the "system prompt" or "system role" to further enhance the model. It can also be used at the start of a new chat, but you must make sure it is retained throughout the chat; in that case the enhancement is less pronounced than when using the "system prompt" or "system role". Copy and paste it exactly as shown below, without word-wrapping or breaking lines, keeping the carriage returns intact.
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)
[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)
Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
This enhancement setting was not used to generate the example outputs shown in this card.
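A hedged sketch of wiring the enhancement block into the "system role", again assuming the llama-cpp-python setup from the Quick Start sketch:

```python
# Sketch: the enhancement text as the system role (assumed llama-cpp-python setup).
# ENHANCEMENT must contain the full block above, pasted verbatim with its
# line breaks intact; it is abbreviated here for space.
ENHANCEMENT = """Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.
..."""  # paste the complete skillsets/critical-instructions block here

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": ENHANCEMENT},
        {"role": "user", "content": "Start a 1000 word scene ..."},
    ],
)
print(out["choices"][0]["message"]["content"])
```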
⚠️ Important Notes
- The model's output includes content not suitable for all audiences: unfiltered material such as vivid prose, visceral detail, violence, horror, and swearing.
💡 Usage Tips
- To get output in a specific style, add the corresponding keywords to the prompt.
- Adjust the temperature and repetition penalty sensibly to get satisfactory output length and content variety.