
Nora Long

Developed by declare-lab
A vision-language-action model trained on the Open X-Embodiment dataset that generates robot actions from natural-language instructions and camera images
Downloads: 673
Release Time: 4/29/2025

Model Overview

Nora Long is an open-source vision-language-action (VLA) model fine-tuned from Qwen2.5-VL-3B and designed for robotic manipulation tasks. It is pre-trained with a 5-step action horizon and demonstrates strong performance in the LIBERO simulation environment.
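The snippet below is a minimal sketch of a single inference step with such a model: one camera frame plus a language instruction in, one low-level robot action out. The NoraPolicy wrapper, its predict_action method, the 7-DoF action layout, and the "declare-lab/nora-long" checkpoint ID are illustrative assumptions, not the official declare-lab API.

```python
# Hedged sketch of one VLA inference step; NoraPolicy is a stub
# standing in for the real model, not the official API.
from dataclasses import dataclass

import numpy as np


@dataclass
class NoraPolicy:
    """Maps (camera image, instruction) to a 7-DoF action:
    (dx, dy, dz, droll, dpitch, dyaw, gripper) -- an assumed layout."""
    checkpoint: str

    def predict_action(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # A real policy would encode the image with the VLM backbone,
        # condition on the tokenized instruction, and decode action
        # tokens; a zero action keeps this sketch runnable.
        return np.zeros(7, dtype=np.float32)


policy = NoraPolicy(checkpoint="declare-lab/nora-long")  # assumed model ID
frame = np.zeros((224, 224, 3), dtype=np.uint8)          # placeholder camera frame
action = policy.predict_action(frame, "pick up the red block")
print(action.shape)  # (7,): one end-effector delta plus a gripper command
```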

Model Features

Long-horizon Action Prediction
Pre-trained with a 5-step action horizon, making it suitable for tasks that require longer-term planning (see the receding-horizon sketch after this list)
Multimodal Input
Processes language instructions and visual observations jointly, enabling more precise action control
Open-source and Fine-tunable
Provides complete training code and model checkpoints, enabling users to fine-tune the model on their own data
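To make the 5-step action horizon concrete, here is a hedged sketch of receding-horizon control: the policy predicts a chunk of 5 actions per forward pass, the robot executes the chunk open-loop, and the policy replans from the next observation. The predict_chunk and step_env functions and the 7-dimensional action are illustrative assumptions, not code from the Nora repository.

```python
# Sketch of receding-horizon execution with 5-step action chunks.
import numpy as np

HORIZON = 5     # actions predicted per inference call (Nora Long's horizon)
ACTION_DIM = 7  # e.g. 6-DoF end-effector delta + gripper (assumed)


def predict_chunk(image: np.ndarray, instruction: str) -> np.ndarray:
    """Stand-in for the model: returns HORIZON future actions at once."""
    return np.zeros((HORIZON, ACTION_DIM), dtype=np.float32)


def step_env(action: np.ndarray) -> np.ndarray:
    """Stand-in for the robot or simulator: applies one action and
    returns the next camera frame."""
    return np.zeros((224, 224, 3), dtype=np.uint8)


frame = np.zeros((224, 224, 3), dtype=np.uint8)
for _ in range(20):                # 20 replanning rounds = 100 low-level steps
    chunk = predict_chunk(frame, "put the bowl in the sink")
    for action in chunk:           # execute the whole chunk open-loop...
        frame = step_env(action)   # ...then replan from the new observation
```

Predicting several actions per call amortizes the cost of each VLM forward pass and tends to yield smoother, more temporally consistent motions than single-step prediction.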

Model Capabilities

Vision-Language Understanding
Robot Action Prediction
Multimodal Task Execution
Long-horizon Action Planning

Use Cases

Robot Control
Robotic Arm Manipulation
Controls a robotic arm to perform grasping, placing, and similar operations based on natural-language instructions and visual input
Validated in WidowX robot tasks and the LIBERO simulation environment
Automated Assembly
Completes complex assembly tasks guided by visual and language inputs