JMLR: OmniSafe is an infrastructural framework for accelerating SafeRL research.
🚀 A fast safe reinforcement learning library in PyTorch
🤖 Elegant implementations of offline safe RL algorithms in PyTorch
🔥 Datasets and env wrappers for offline safe reinforcement learning
PyTorch implementation of constrained reinforcement learning for the Soft Actor-Critic algorithm
A Survey Analyzing Generalization in Deep Reinforcement Learning
A Multiplicative Value Function for Safe and Efficient Reinforcement Learning. IROS 2023.
Official Code Repository for the POLICEd-RL Paper: https://www.roboticsproceedings.org/rss20/p104.html
[AAAI 2024 (Oral)] Safety-MuJoCo Environments.
Correct-by-synthesis reinforcement learning with temporal logic constraints (CoRL)
Author implementation of DSUP(q) algorithms from the NeurIPS 2024 paper "Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning"
Poster about Curriculum Induction for Safe Reinforcement Learning
SAFIRL: Shielded RL with CBF/MPC on Franka-MuJoCo, with reproducible training/evaluation, STL-based verification, benchmarks, and Docker/CI.
[ECAI-2025] SPOWL: A JAX-based Safe RL framework that adaptively combines planning and policy learning with dynamic safety thresholds.
Blog Post about Curriculum Induction for Safe Reinforcement Learning
Official implementation of C-TRPO
sTRPO: Safe Trust Region Policy Optimization for constrained reinforcement learning
Alternative (torch) author implementation of algorithms from the NeurIPS 2024 paper "Action Gaps and Advantages in Continuous-Time Distributional Reinforcement Learning"