
Bielik 1.5B V3.0 Instruct FP8 Dynamic

Developed by SpeakLeash
This is an FP8 dynamic-quantized version of the Bielik-1.5B-v3.0-Instruct model, adapted for the vLLM and SGLang inference frameworks. It uses AutoFP8 quantization to reduce each parameter from 16 bits to 8 bits, significantly lowering disk space and GPU VRAM requirements.
Release date: 5/4/2025

Model Overview

This model is a version of Bielik-1.5B-v3.0-Instruct with FP8 quantization applied to weights and activations, primarily intended for Polish text-generation tasks.
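As a rough illustration of what "dynamic" means here (the activation scale is computed per tensor at inference time rather than from a calibration pass), below is a minimal NumPy sketch of per-tensor FP8 E4M3 scaling. The E4M3 maximum of 448 is the standard value for that format, but the omission of FP8 rounding is a simplifying assumption, so this is a conceptual sketch rather than the actual kernel behavior:

```python
import numpy as np

E4M3_MAX = 448.0  # largest finite magnitude representable in FP8 E4M3

def dynamic_fp8_quant(x: np.ndarray):
    # Per-tensor dynamic scaling: the scale is derived from the tensor's
    # current max magnitude at runtime (no calibration set needed).
    # Real FP8 kernels also round to the nearest representable value;
    # this sketch models only the scaling/clamping step.
    scale = np.abs(x).max() / E4M3_MAX
    q = np.clip(x / scale, -E4M3_MAX, E4M3_MAX)
    return q, scale

def dequant(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale

x = np.array([0.1, -2.5, 3.7, 448.0])
q, scale = dynamic_fp8_quant(x)
print(scale)            # 1.0 for this input (max magnitude equals 448)
print(dequant(q, scale))
```

Because the scale adapts to each tensor's actual range, no offline calibration dataset is required, which is what distinguishes dynamic from static FP8 quantization.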

Model Features

FP8 dynamic quantization
Uses AutoFP8 quantization to reduce each parameter from 16 bits to 8 bits, cutting disk space and GPU VRAM requirements by roughly 50%.
Efficient inference
Compatible with vLLM >= 0.5.0 and the SGLang inference framework for efficient serving.
Polish language optimization
Specifically optimized for Polish text-generation tasks.
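Since the card targets vLLM >= 0.5.0, a minimal offline-inference sketch follows. The Hugging Face repo id, the prompt, and the sampling settings are assumptions, and running it requires a CUDA GPU with a vLLM build that supports FP8 checkpoints:

```python
# Minimal vLLM offline-inference sketch (requires a CUDA GPU; the repo id
# "speakleash/Bielik-1.5B-v3.0-Instruct-FP8-Dynamic" is an assumption).
from vllm import LLM, SamplingParams

llm = LLM(model="speakleash/Bielik-1.5B-v3.0-Instruct-FP8-Dynamic")
params = SamplingParams(temperature=0.7, max_tokens=256)

# Polish prompt: "Write a short poem about the Vistula."
prompt = "Napisz krótki wiersz o Wiśle."
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)
```

vLLM reads the quantization configuration from the checkpoint itself, so no extra quantization flag should be needed when loading an FP8 model saved in this format.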

Model Capabilities

Polish text generation
Instruction following

Use Cases

Intelligent assistant
Polish Q&A system
Used to build Polish-language intelligent question-answering assistants.