# OpenHands LM v0.1
OpenHands LM v0.1 is a new open coding model. It is reasonably sized, performs well on software engineering tasks, and can be deployed locally, reducing reliance on external services.
## License
This project is licensed under the MIT license.
## Datasets
## Base Model
## Quick Start
### 1. Download the model from Hugging Face
The model is available on Hugging Face and can be downloaded directly from there.
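As a minimal sketch, the weights can be pulled with the `huggingface_hub` package. The repository ID below is an assumption; check the actual listing on Hugging Face before running it.

```python
# Sketch: download the model weights with huggingface_hub.
# The repo_id below is assumed; verify it against the Hugging Face listing.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="all-hands/openhands-lm-32b-v0.1",  # assumed repository ID
    local_dir="./openhands-lm-32b-v0.1",
)
print(f"Model downloaded to {local_dir}")
```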
### 2. Create an OpenAI-compatible endpoint with a model serving framework
For optimal performance, it is recommended to serve this model with a GPU using SGLang or vLLM.
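For illustration, the sketch below launches vLLM's OpenAI-compatible server from Python. The model path, served model name, port, and context length are assumed values and should be adjusted to your setup and GPU memory.

```python
# Sketch: start vLLM's OpenAI-compatible server for the downloaded model.
# The path, served name, port, and context length are assumptions; tune them for your GPU.
import subprocess

subprocess.run([
    "python", "-m", "vllm.entrypoints.openai.api_server",
    "--model", "./openhands-lm-32b-v0.1",       # path from the download step
    "--served-model-name", "openhands-lm-32b-v0.1",
    "--port", "8000",
    "--max-model-len", "32768",                  # raise toward 128K if memory allows
])  # runs in the foreground until the server is stopped
```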
### 3. Point your OpenHands agent to the new model
Download OpenHands and follow the instructions for using an OpenAI-compatible endpoint.
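Before wiring up the agent, you can confirm the endpoint responds with a quick OpenAI-client call. The base URL and model name below match the assumed serving setup above.

```python
# Sketch: smoke-test the local OpenAI-compatible endpoint before pointing OpenHands at it.
# The base_url and model name match the assumed vLLM setup above.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="openhands-lm-32b-v0.1",
    messages=[{"role": "user", "content": "Write a Python function that reverses a string."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```

If the call returns a completion, the same base URL and model name can be used when configuring OpenHands for an OpenAI-compatible endpoint.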
## Features
- Open and Locally Runnable: The model is open and available on Hugging Face, so you can download and run it locally.
- Reasonable Size: With 32B parameters, it can be run locally on hardware such as a single 3090 GPU (see the rough memory estimate after this list).
- Strong Performance: Achieves a 37.2% resolve rate on SWE-Bench Verified and performs comparably to models with 20x more parameters.
- Specialized Fine-Tuning: Built on Qwen Coder 2.5 Instruct 32B, using training data generated by OpenHands on diverse open-source repositories and an RL-based framework from SWE-Gym.
- Large Token Context Window: Features a 128K token context window, suitable for large codebases and long-horizon software engineering tasks.
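As a rough back-of-the-envelope check (not an official sizing guide), the sketch below estimates the weight memory footprint at different quantization levels; it ignores the KV cache and runtime overhead.

```python
# Rough estimate of weight memory for a 32B-parameter model at different precisions.
# This ignores KV cache and runtime overhead, so treat it as a lower bound.
PARAMS = 32e9

for name, bytes_per_param in [("FP16", 2.0), ("INT8", 1.0), ("INT4 (AWQ)", 0.5)]:
    gib = PARAMS * bytes_per_param / 1024**3
    print(f"{name:>11}: ~{gib:.0f} GiB of weights")

# At INT4 the weights come to roughly 15 GiB, which is why a single 24 GiB
# GPU such as an RTX 3090 can host the model locally.
```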
## Documentation
### What is OpenHands LM?
OpenHands LM is built on the foundation of Qwen Coder 2.5 Instruct 32B, leveraging its powerful base capabilities for coding tasks. What sets OpenHands LM apart is our specialized fine-tuning process:
- We used training data generated by OpenHands itself on a diverse set of open-source repositories.
- Specifically, we use an RL-based framework outlined in SWE-Gym, where we set up a training environment, generate training data using an existing agent, and then fine-tune the model on examples that were resolved successfully (a simplified sketch of this loop follows the list below).
- It features a 128K token context window, ideal for handling large codebases and long-horizon software engineering tasks.
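To make the data-generation loop concrete, here is an illustrative, pseudocode-level sketch of the filter-and-fine-tune idea described above. Every name here (`build_training_set`, `run_agent`, `is_resolved`) is a hypothetical placeholder, not part of any released API.

```python
# Illustrative sketch of the SWE-Gym-style loop described above.
# All names are hypothetical placeholders, not a released API.
from typing import Callable, List


def build_training_set(
    tasks: List[dict],
    run_agent: Callable[[dict], dict],    # rolls the existing agent out on one task
    is_resolved: Callable[[dict], bool],  # checks the task's tests against the agent's patch
) -> List[dict]:
    """Keep only trajectories where the issue was successfully resolved."""
    examples = []
    for task in tasks:
        trajectory = run_agent(task)
        if is_resolved(trajectory):
            examples.append(trajectory)   # resolved runs become fine-tuning examples
    return examples
```

The filtered trajectories are then used as supervised fine-tuning data for the base coder model.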
### Performance: Punching Above Its Weight
We evaluated OpenHands LM using our latest iterative evaluation protocol on the SWE-Bench Verified benchmark.
The results are impressive:
- 37.2% verified resolve rate on SWE-Bench Verified.
- Performance comparable to models with 20x more parameters, including Deepseek V3 0324 (38.8%) with 671B parameters.
Here's how OpenHands LM compares to other leading open-source models:

*(Plot: SWE-Bench Verified resolve rate versus model size for leading open-source models.)*

As the plot demonstrates, our 32B parameter model achieves performance approaching that of much larger models. While the largest models (671B parameters) achieve slightly higher scores, our 32B parameter model performs remarkably well, opening up local deployment possibilities that are out of reach for those larger models.
### The Road Ahead: Our Development Plans
This initial release marks just the beginning of our journey. We will continue enhancing OpenHands LM based on community feedback and ongoing research initiatives.
In particular, note that the model is still a research preview: (1) it is best suited for solving GitHub issues and may perform less well on more varied software engineering tasks; (2) it may sometimes generate repetitive steps; and (3) it is somewhat sensitive to quantization and may not run at full performance at lower quantization levels. Our next releases will focus on addressing these limitations.
We're also developing more compact versions of the model (including a 7B parameter variant) to support users with limited computational resources. These smaller models will preserve OpenHands LM's core strengths while dramatically reducing hardware requirements.
We encourage you to experiment with OpenHands LM, share your experiences, and participate in its evolution.
### Join Our Community
We invite you to be part of the OpenHands LM journey:
By contributing your experiences and feedback, you'll help shape the future of this open-source initiative. Together, we can create better tools for tomorrow's software development landscape.
We can't wait to see what you'll create with OpenHands LM!
## Technical Details
AWQ quantization: done by stelterlab in INT4 GEMM with AutoAWQ by casper-hansen (https://github.com/casper-hansen/AutoAWQ/).
Original weights by All Hands AI (built on Qwen Coder 2.5 Instruct 32B).
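For reference, AWQ quantization with AutoAWQ typically follows the pattern below. The repository ID and the exact quantization settings used for this release are assumptions; only the general API usage is standard AutoAWQ.

```python
# Sketch: INT4 GEMM quantization with AutoAWQ, following the library's usual pattern.
# The repo ID and quant_config values are assumptions, not the exact settings used here.
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "all-hands/openhands-lm-32b-v0.1"   # assumed original repository
quant_path = "openhands-lm-32b-v0.1-awq"
quant_config = {"zero_point": True, "q_group_size": 128, "w_bit": 4, "version": "GEMM"}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)

model.quantize(tokenizer, quant_config=quant_config)  # calibrates and quantizes the weights
model.save_quantized(quant_path)                      # writes the INT4 GEMM checkpoint
tokenizer.save_pretrained(quant_path)
```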
## Introducing OpenHands LM v0.1
Autonomous agents for software development are already contributing to a wide range of development tasks.
But up to this point, strong coding agents have relied on proprietary models, which means that even if you use an open-source agent like OpenHands, you are still reliant on API calls to an external service.
Today, we are excited to introduce OpenHands LM, a new open coding model that meets various software development needs.