# High-precision compression
## Virtuoso Lite GGUF

A quantized version of Virtuoso-Lite, produced with llama.cpp to improve running efficiency across different hardware (see the sketch after this entry).

- License: Other
- Type: Large Language Model
- Author: bartowski
- 373 · 4
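Since this entry is a GGUF file produced with llama.cpp, one common way to run it locally is through the llama-cpp-python bindings. The following is a minimal sketch, assuming a hypothetical local filename (`virtuoso-lite-Q4_K_M.gguf`) and illustrative generation settings; it is not the model author's documented setup.

```python
# Minimal sketch: loading and prompting a GGUF quantization with llama-cpp-python.
# The model path and generation parameters below are assumptions for illustration.
from llama_cpp import Llama

llm = Llama(
    model_path="./virtuoso-lite-Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

output = llm(
    "Explain what GGUF quantization does in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```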
## D AU Mistral 7B Instruct V0.2 Bagel DarkSapling DPO 7B V2.0 Imat Plus GGUF

A merged model based on Mistral-7B-Instruct that combines DarkSapling's role-playing and story-generation capabilities with features of Bagel, and uses Imatrix Plus quantization to improve quality.

- License: MIT
- Type: Large Language Model
- Author: DavidAU
- 122 · 1