EngineAI RL Workspace is a universal reinforcement learning (RL) framework for legged robots, covering environments, training, and evaluation. Its modular design, extensibility, and comprehensive tooling enable seamless development and evaluation of RL algorithms.
Maintainer: Darrell
Affiliation: Shenzhen EngineAI Robotics Technology Co., Ltd.
Contact: info@engineai.com.cn
The framework is divided into independent modules, which not only improves readability but also ensures that editing one module does not affect the others:
- Env: Observations, Domain Randomization, Goals, Rewards
- Algo: Runners, Algorithms, Networks, Storage
- Train and play are controlled by the same runner logic, so it does not need to be written twice
- Modules can be inherited according to the needs of an experiment, reducing repetitive code
- Changing an algorithm does not require changing the environment, only the way output is received from it
- Record videos of training or playing
- Customize environment variables and record play data, making it easy to compare runs under controlled variables
- Save code files during training for easy comparison between runs
- Automatically restore saved code files during resume and play, avoiding a tedious search for the corresponding version
- Save git info
- Save a `.json` file recording parameters, with a built-in tool to convert it to a `.py` file for easy modification based on a particular run
- Support multi-GPU for faster training
- Convert `.pt` files to `.onnx` and `.mnn`
Our documentation page provides everything you need to get started, including detailed tutorials and step-by-step guides.
Please see the Troubleshooting section.
- Please use GitHub Discussions for discussing ideas, asking questions, and requests for new features.
- GitHub Issues should only be used to track executable pieces of work with a definite scope and a clear deliverable, such as bug fixes, documentation issues, new features, or general updates.
EngineAI RL Workspace is released under the BSD-3-Clause license.
This repository is built upon the support and contributions of the following open-source projects. Special thanks to:
- legged_gym: The foundation for training and running codes.
- humanoid_gym: The reward and terrain generation code in this project is inspired by the humanoid_gym library.
- rsl_rl: Reinforcement learning algorithm implementation.
- AMP_for_hardware: The AMP (Adversarial Motion Priors) algorithm implementation.