🚀 PULI GPT 3SX - GGML
This repository provides GPT-NeoX GGML format model files for NYTK's PULI GPT 3SX, enabling text generation tasks.
🚀 Quick Start
These GGML files are not compatible with llama.cpp, text-generation-webui or llama-cpp-python; refer to the compatibility section below for suitable tools.
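A minimal loading sketch using the ctransformers backend is shown below; the repository ID and file name are assumptions, so substitute the repo and quantisation file you actually downloaded:

```python
from ctransformers import AutoModelForCausalLM

# Hypothetical repo ID and file name -- adjust to what you actually use.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/PULI-GPT-3SX-GGML",               # assumed repository
    model_file="puli-gpt-3sx.ggmlv1.q4_0.bin",  # one of the provided files listed below
    model_type="gpt_neox",                      # PULI GPT 3SX is a GPT-NeoX model
)
print(llm("Elmesélek egy történetet a nyelvtechnológiáról.", max_new_tokens=100))
```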
✨ Features
- Multiple Repository Options: Offers GPTQ models for GPU inference with various quantisation parameters, multiple-bit GGML models for CPU+GPU inference, and the original unquantised fp16 model in pytorch format for GPU inference and further conversions.
- Diverse Compatibility: Can be used with multiple tools such as KoboldCpp, LM Studio, LoLLMs-WebUI, ctransformers, rustformers' llm, and the example `gpt-neox` binary provided with ggml.
📦 Installation
No specific installation steps are provided in the original README.
💻 Usage Examples
Basic Usage
```python
from transformers import GPTNeoXForCausalLM, AutoTokenizer

# Load the original (unquantised) model and tokenizer from the Hugging Face Hub.
model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPT-3SX")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")

# Hungarian prompt: "I'll tell a story about language technology."
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."

# Tokenize the prompt and sample a continuation of up to 100 tokens.
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
gen_tokens = model.generate(
    input_ids,
    do_sample=True,
    temperature=0.9,
    max_length=100,
)
gen_text = tokenizer.batch_decode(gen_tokens)[0]
print(gen_text)
```
Advanced Usage
```python
from transformers import pipeline, GPTNeoXForCausalLM, AutoTokenizer

model = GPTNeoXForCausalLM.from_pretrained("NYTK/PULI-GPT-3SX")
tokenizer = AutoTokenizer.from_pretrained("NYTK/PULI-GPT-3SX")
prompt = "Elmesélek egy történetet a nyelvtechnológiáról."

# Wrap model and tokenizer in a text-generation pipeline.
generator = pipeline(task="text-generation", model=model, tokenizer=tokenizer)
print(generator(prompt)[0]["generated_text"])
```
📚 Documentation
Repositories available
- GPTQ models for GPU inference, with multiple quantisation parameter options.
- 2, 3, 4, 5, 6 and 8-bit GGML models for CPU+GPU inference
- NYTK's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions
Prompt template
```
{prompt}
```
Compatibility
These files are not compatible with llama.cpp, text-generation-webui or llama-cpp-python. They can be used with:
- KoboldCpp, a powerful inference engine based on llama.cpp with full GPU acceleration and good UI.
- LM Studio, a fully featured local GUI for GGML inference on Windows and macOS.
- LoLLMs-WebUI, a web UI which supports nearly every backend out there. Use the ctransformers backend for this model.
- ctransformers: for use in Python code, including LangChain support.
- rustformers' llm
- The example `gpt-neox` binary provided with ggml
As other options become available, the README will be updated. (Let the author know in the Community tab if something is missing!)
Tutorial for using LoLLMs-WebUI
Provided files
Name | Quant method | Bits | Size | Max RAM required | Use case
---|---|---|---|---|---
puli-gpt-3sx.ggmlv1.q4_0.bin | q4_0 | 4 | 3.86 GB | 6.36 GB | 4-bit.
puli-gpt-3sx.ggmlv1.q4_1.bin | q4_1 | 4 | 4.29 GB | 6.79 GB | 4-bit. Higher accuracy than q4_0 but not as high as q5_0; quicker inference than the q5 models.
puli-gpt-3sx.ggmlv1.q5_0.bin | q5_0 | 5 | 4.72 GB | 7.22 GB | 5-bit. Higher accuracy, higher resource usage and slower inference.
puli-gpt-3sx.ggmlv1.q5_1.bin | q5_1 | 5 | 5.15 GB | 7.65 GB | 5-bit. Even higher accuracy and resource usage, and slower inference.
puli-gpt-3sx.ggmlv1.q8_0.bin | q8_0 | 8 | 7.29 GB | 9.79 GB | 8-bit. Almost indistinguishable from float16. High resource use and slow. Not recommended for most users.
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
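As a sketch of such offloading with ctransformers, assuming a CUDA-enabled build and that the backend supports GPU offload for this architecture (the layer count is illustrative):

```python
from ctransformers import AutoModelForCausalLM

# Offload 24 transformer layers to the GPU (illustrative value); the rest
# stay on the CPU, trading system RAM for VRAM.
llm = AutoModelForCausalLM.from_pretrained(
    "TheBloke/PULI-GPT-3SX-GGML",               # assumed repository, as above
    model_file="puli-gpt-3sx.ggmlv1.q5_0.bin",
    model_type="gpt_neox",
    gpu_layers=24,
)
```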
Discord
For further support, and discussions on these models and AI in general, join TheBloke AI's Discord server.
Thanks, and how to contribute
Thanks to the chirper.ai team! Many people have asked about contributing. The author enjoys providing models and helping people, and would like to spend more time on it and expand into new projects like fine-tuning/training. If you're able and willing to contribute, it will be gratefully received and will help the author keep providing more models and start new AI projects. Donors will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
- Patreon: https://patreon.com/TheBlokeAI
- Ko-Fi: https://ko-fi.com/TheBlokeAI
Special thanks to: Luke from CarbonQuill, Aemon Algiz. Patreon special mentions: Ajan Kanaga, David Ziegler, Raymond Fosdick, SuperWojo, Sam, webtim, Steven Wood, knownsqashed, Tony Hughes, Junyu Yang, J, Olakabola, Dan Guido, Stephen Murray, John Villwock, vamX, William Sang, Sean Connelly, LangChain4j, Olusegun Samson, Fen Risland, Derek Yates, Karl Bernard, transmissions 11, Trenton Dambrowitz, Pieter, Preetika Verma, Swaroop Kallakuri, Andrey, Slarti, Jonathan Leane, Michael Levine, Kalila, Joseph William Delisle, Rishabh Srivastava, Deo Leter, Luke Pendergrass, Spencer Kim, Geoffrey Montalvo, Thomas Belote, Jeffrey Morgan, Mandus, ya boyyy, Matthew Berman, Magnesian, Ai Maven, senxiiz, Alps Aficionado, Luke @flexchar, Raven Klaugh, Imad Khwaja, Gabriel Puliatti, Johann-Peter Hartmann, usrbinkat, Spiking Neurons AB, Artur Olbinski, chris gileta, danny, Willem Michiel, WelcomeToTheClub, Deep Realms, alfie_i, Dave, Leonard Tan, NimbleBox.ai, Randy H, Daniel P. Andersen, Pyrater, Will Dee, Elle, Space Cruiser, Gabriel Tamborski, Asp the Wyvern, Illia Dulskyi, Nikolai Manek, Sid, Brandon Frisco, Nathan LeClaire, Edmond Seymore, Enrico Ros, Pedro Madruga, Eugene Pentland, John Detwiler, Mano Prime, Stanislav Ovsiannikov, Alex, Vitor Caleffi, K, biorpg, Michael Davis, Lone Striker, Pierre Kircher, theTransient, Fred von Graf, Sebastain Graf, Vadim, Iucharbius, Clay Pascal, Chadd, Mesiah Bishop, terasurfer, Rainer Wilmers, Alexandros Triantafyllidis, Stefan Sabev, Talal Aujan, Cory Kujawski, Viktor Bowallius, subjectnull, ReadyPlayerEmma, zynix
Thank you to all the generous patrons and donors!
Original model card: NYTK's PULI GPT 3SX
PULI GPT-3SX (6.7 billion parameters)
For further details, see our demo site.
- Hungarian GPT-NeoX model (6.7 billion parameters)
- Trained with EleutherAI's GPT-NeoX [github](https://github.com/EleutherAI/gpt-neox)
- Dataset: 36.3 billion words
- Checkpoint: 150 000 steps
Limitations
- `max_seq_length` = 2048
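When using the transformers examples above, inputs can be kept within this context window at tokenization time; a minimal sketch (the prompt/output split below is illustrative):

```python
# Truncate the prompt, then cap new tokens so prompt + output
# stays within the 2048-token context window.
inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=1948)
gen_tokens = model.generate(inputs.input_ids, max_new_tokens=100)
```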
Citation
If you use this model, please cite the following paper:
```bibtex
@inproceedings{yang-puli,
    title = {Jönnek a nagyok! BERT-Large, GPT-2 és GPT-3 nyelvmodellek magyar nyelvre},
    booktitle = {XIX. Magyar Számítógépes Nyelvészeti Konferencia (MSZNY 2023)},
    year = {2023},
    publisher = {Szegedi Tudományegyetem, Informatikai Intézet},
    address = {Szeged, Hungary},
    author = {Yang, Zijian Győző and Dodé, Réka and Ferenczi, Gergő and Héja, Enikő and Jelencsik-Mátyus, Kinga and Kőrös, Ádám and Laki, László János and Ligeti-Nagy, Noémi and Vadász, Noémi and Váradi, Tamás},
    pages = {247--262}
}
```
📄 License
The model is licensed under cc-by-nc-4.0.

