# Phi-3 Mini-4K-Instruct ONNX model for in-browser inference
Run Phi-3-mini-4K entirely in the browser! This repository provides an optimized web version of the ONNX Phi-3-mini-4k-instruct model for accelerated in-browser inference with ONNX Runtime Web.
## 🚀 Quick Start

Running Phi-3-mini-4K entirely in the browser is now possible! Check out this demo.
## ✨ Features
- Phi-3-Mini-4K-Instruct is a 3.8B-parameter, lightweight, state-of-the-art open model. It is trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
- When assessed against benchmarks testing common sense, language understanding, math, code, long context, and logical reasoning, Phi-3-Mini-4K-Instruct showcases robust, state-of-the-art performance among models with fewer than 13 billion parameters.
## 📦 Installation
No traditional installation is required: the model runs directly in the browser with ONNX Runtime Web. See the Usage Examples section below for how to load and run it.
## 💻 Usage Examples

### Basic Usage
ONNX Runtime Web is a JavaScript library that lets web developers deploy machine learning models directly in web browsers, offering multiple backends that leverage hardware acceleration. The WebGPU backend is recommended for running Phi-3-mini efficiently.

Here is an end-to-end example of running this optimized Phi-3-mini-4K model on the web, with ONNX Runtime harnessing WebGPU.
Supported devices and browsers with WebGPU: Chrome 113+ and Edge 113+ on Mac, Windows, and ChromeOS, and Chrome 121+ on Android. Please visit here to track WebGPU support across browsers.
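As a minimal sketch of the setup described above (the model URL, the fallback to the wasm backend, and the helper name are illustrative assumptions, not part of this repository):

```javascript
// Sketch: choose an ONNX Runtime Web execution provider, preferring WebGPU
// when the browser exposes it and falling back to wasm otherwise (assumed
// fallback, not mandated by this repository).
function pickExecutionProviders(hasWebGPU) {
  return hasWebGPU ? ['webgpu'] : ['wasm'];
}

// Browser-only usage (requires the onnxruntime-web package; the model URL is
// hypothetical and should point at the .onnx file served alongside its
// external weight data):
//
//   import * as ort from 'onnxruntime-web/webgpu';
//
//   const providers = pickExecutionProviders('gpu' in navigator);
//   const session = await ort.InferenceSession.create(
//     'models/phi3-mini-4k-instruct-web.onnx',
//     { executionProviders: providers },
//   );
```

Keeping the provider selection in a plain function makes it easy to test outside the browser, while the session creation itself only runs where `navigator.gpu` exists.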
### Advanced Usage
To optimize a fine-tuned Phi-3-mini-4k model to run with ONNX Runtime Web, please follow this Olive example. Olive is an easy-to-use model optimization tool that generates optimized ONNX models to run efficiently with ONNX Runtime across platforms.
## 📚 Documentation

### Performance Metrics
Performance varies across GPUs: the more powerful the GPU, the faster the generation. On an NVIDIA GeForce RTX 4090, the model reaches roughly 42 tokens/second.
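If you want to report a comparable number on your own hardware, throughput is just tokens generated divided by elapsed wall-clock seconds. A tiny illustrative helper (an assumption, not repository code):

```javascript
// Decode throughput in tokens/second from a token count and elapsed time in
// milliseconds.
function tokensPerSecond(tokenCount, elapsedMs) {
  return tokenCount / (elapsedMs / 1000);
}

// 84 tokens decoded in 2000 ms is 42 tokens/second.
```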
### Additional Details
To obtain other optimized Phi-3-mini-4k ONNX models for server platforms, Windows, Linux, and Mac desktops, and mobile, please visit the Phi-3-mini-4k-instruct onnx model. The web version differs from the other versions as follows:
- The model is fp16 with int4 block quantization for weights.
- The 'logits' output is fp32.
- The model uses Multi-Head Attention (MHA) instead of Grouped-Query Attention (GQA).
- The .onnx and external data files need to stay below 2 GB to be cacheable in Chromium.
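The last point can be checked before shipping model files. A minimal sketch of that check (the constant and function names are illustrative, not from this repository):

```javascript
// Chromium caches entries only below roughly 2 GB, per the note above; the
// exact threshold here (2 GiB) is an assumption for illustration.
const CHROMIUM_CACHE_LIMIT_BYTES = 2 * 1024 ** 3;

// Returns true when a file of the given size can be cached by Chromium.
function isCacheable(fileSizeBytes) {
  return fileSizeBytes < CHROMIUM_CACHE_LIMIT_BYTES;
}
```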
## 🔧 Technical Details
The key technical details are listed under Additional Details above: fp16 weights with int4 block quantization, an fp32 'logits' output, MHA attention, and the 2 GB file-size limit for Chromium caching.
## 📄 License
This model is licensed under the MIT license.
## Model Description
| Property | Details |
|----------|---------|
| Developed by | Microsoft |
| Model Type | ONNX |
| Inference Language(s) (NLP) | JavaScript |
| License | MIT |
| Model Description | This is the web version of the Phi-3 Mini-4K-Instruct model for ONNX Runtime inference. |
## Model Card Contact
guschmue, qining