
SAC Hopper-v3

Developed by sb3
This is a reinforcement learning model based on the SAC algorithm, designed to control the hopping motion of a simulated robot in the Hopper-v3 environment.
Downloads: 44
Release time: 6/2/2022

Model Overview

The model is trained with the Soft Actor-Critic (SAC) algorithm to solve the continuous-control task posed by the Hopper-v3 environment.

Model Features

Based on SAC Algorithm
Uses the Soft Actor-Critic algorithm, which is well suited to reinforcement learning problems with continuous action spaces.
Stable Training
Implemented via stable-baselines3, providing a reliable training process.
High Performance
Achieves an average reward of 2266.78 in the Hopper-v3 environment.
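What distinguishes SAC from plain actor-critic methods is its entropy-regularized critic target: the next-state value is the minimum of two Q-estimates minus a temperature-weighted log-probability of the sampled action. A minimal numeric sketch of that target computation, in plain Python with made-up illustrative values (the function name and numbers are not taken from the model or from stable-baselines3):

```python
import math

def sac_critic_target(reward, gamma, q1_next, q2_next, log_prob_next, alpha, done):
    """Entropy-regularized TD target used by SAC's critics:
    y = r + gamma * (1 - done) * (min(Q1', Q2') - alpha * log pi(a'|s'))."""
    # Twin-Q minimum counters overestimation; the -alpha*log_prob term is the entropy bonus.
    soft_value = min(q1_next, q2_next) - alpha * log_prob_next
    return reward + gamma * (1.0 - done) * soft_value

# Illustrative numbers only (not from the trained Hopper-v3 model):
y = sac_critic_target(reward=1.0, gamma=0.99, q1_next=10.0, q2_next=9.5,
                      log_prob_next=-1.2, alpha=0.2, done=0.0)
# y == 10.6426
```

The temperature alpha trades off reward maximization against policy entropy, which is what keeps exploration alive in continuous action spaces like Hopper-v3's.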

Model Capabilities

Continuous Action Control
Robot Motion Control
Reinforcement Learning Task Solving

Use Cases

Robot Control
Hopper Robot Jump Control
Controls the jumping movements of a simulated Hopper robot.
Average reward: 2266.78 +/- 1121.81
Reinforcement Learning Research
Continuous Control Benchmarking
Can serve as a benchmark model for continuous control tasks.
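The reported score follows the usual mean +/- standard deviation convention over evaluation episodes (population standard deviation, the NumPy `np.std` default). A stdlib-only sketch of that summary computation; the episode returns below are hypothetical, not the model's actual rollouts:

```python
import statistics

def summarize_returns(returns):
    """Summarize episode returns as (mean, population std),
    the convention behind scores like 2266.78 +/- 1121.81."""
    mean = statistics.fmean(returns)
    std = statistics.pstdev(returns)  # population std (divides by N, like np.std)
    return mean, std

# Hypothetical returns from 5 evaluation episodes:
mean, std = summarize_returns([3200.0, 1100.0, 2500.0, 900.0, 3600.0])
print(f"{mean:.2f} +/- {std:.2f}")
```

The large +/- 1121.81 spread in the reported score is typical for Hopper: episodes that end in an early fall score far below full-length runs, so per-episode returns are highly dispersed even for a well-trained policy.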