This project implements Physics-Informed RIG (Reinforcement Learning with Imagined Goals) using RLKit and Multiworld. The main contribution is a comparison of physics-informed and standard representation learning for robotic pick-and-place tasks.
This research compares two approaches to goal-conditioned reinforcement learning:
- Physics-Informed RIG: Incorporates physics constraints (gravity, momentum, contact forces) into VAE representation learning
- Standard RIG: Uses standard VAE representation learning without physics constraints
The comparison is performed on the Sawyer Pick and Place robotic manipulation task.
- Clone this repository and ensure you have `conda` installed.
- Run the setup script:

```bash
bash setup.sh
```

This script will:

- Create a Conda environment named `final` from `env.yml`
- Install `rlkit` and `multiworld` using `pip install -e`
- Set up all necessary dependencies

After setup, activate the environment with `conda activate final` before running any experiments.
Run the main paper experiment comparing both approaches:

```bash
python rlkit/examples/rig/pusher/physics_informed_rig.py
```

Note: before running the script, edit the sys path set on lines 5-7 of the file so that it points to your main project directory; otherwise the imports will fail.
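For reference, a minimal sketch of what those lines might look like (the actual path is wherever you cloned the project; adjust accordingly):

```python
# Lines 5-7 of physics_informed_rig.py (illustrative sketch):
# make the project root importable so rlkit, multiworld, and viskit resolve.
import sys
sys.path.insert(0, "/path/to/your/project")  # folder containing rlkit/, multiworld/, viskit/
```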
To run the pick-and-place comparison experiments:

```bash
# Navigate to the pick-and-place experiments
cd rlkit/examples/rig/pick_and_place

# Run VAE-only comparison
# python physics_vs_standard_rig_comparison.py

# Run complete RIG training comparison (VAE + RL training)
python full_rig_training_comparison.py
```

Similarly, for the pusher task, run:
```bash
python rlkit/examples/rig/pusher/physics_informed_rig.py
```

Results are saved in two folders with the suffixes `*full-rig-physics` and `*full-rig-standard`. To compare the two algorithms, copy both subfolders into a new directory and point viskit at it. Example:
```bash
python -m viskit.frontend rlkit/data/compare_pick_and_place_500ep/
```

For the pusher, run:
```bash
python -m viskit.frontend rlkit/data/compare_pusher/
```

For a simple demonstration of physics-informed representation learning:
```bash
cd rlkit/examples/rig/pick_and_place
python simple_physics_comparison.py
```

These scripts use the vision-based Pusher environment to demonstrate representation learning and reinforcement learning with RIG under physics-informed constraints.
During training, results are saved to:
`rlkit/data/<exp_prefix>/<foldername>/`
To visualize training progress and metrics, use viskit:
```bash
python viskit/viskit/frontend.py rlkit/data/<exp_prefix>/
```

For a single experiment:

```bash
python viskit/viskit/frontend.py rlkit/data/<exp_prefix>/<foldername>/
```

To replay a trained policy:

```bash
python rlkit/scripts/sim_policy.py rlkit/data/<exp_prefix>/<foldername>/params.pkl
```

Example:
After running `physics_informed_rig.py`, you might find results in a folder like:
`rlkit/data/pusher-physics-rig/2025_10_05_12_34_56_000000--s-0/`
Then visualize with:
```bash
python rlkit/scripts/sim_policy.py rlkit/data/pusher-physics-rig/2025_10_05_12_34_56_000000--s-0/params.pkl
```
Repository structure:

```
.
├── README.md              # This file
├── setup.sh               # Environment setup script
├── env.yml                # Conda environment specification
├── requirements.txt       # Python package requirements
├── rlkit/                 # Modified RLKit with physics-informed components
│   ├── examples/rig/pick_and_place/               # Main pick-and-place experiments
│   │   ├── physics_vs_standard_rig_comparison.py  # VAE comparison (working)
│   │   ├── full_rig_training_comparison.py        # Complete RIG comparison
│   │   ├── simple_physics_comparison.py           # Quick demo
│   │   ├── physics_informed_rig.py                # Core implementation
│   │   └── README.md                              # Detailed experiment guide
│   ├── rlkit/torch/vae/                           # VAE implementations
│   │   ├── pick_and_place_physics.py              # Physics-informed VAE trainer
│   │   └── vae_trainer.py                         # Standard VAE trainer
│   └── rlkit/torch/grill/                         # GRILL framework integration
│       └── common.py                              # Common VAE training utilities
│
├── multiworld/            # Goal-conditioned environments
│   └── envs/mujoco/sawyer_xyz/   # Sawyer robot environments
└── viskit/                # Visualization tools
```
The physics-informed VAE incorporates the following constraints into representation learning (a rough code sketch follows the list):

- Temporal Consistency: Ensures smooth latent transitions
- Momentum Conservation: Enforces physics-based momentum constraints
- Gravity Modeling: Incorporates gravitational effects
- Contact Forces: Models object-object and robot-object interactions
- Grasp Stability: Ensures realistic grasping physics
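As an illustration of what such constraint terms can look like in latent space (a sketch under generic assumptions, not the exact losses implemented in `rlkit/torch/vae/pick_and_place_physics.py`):

```python
import torch
import torch.nn.functional as F

def temporal_consistency_loss(z_t: torch.Tensor, z_t1: torch.Tensor) -> torch.Tensor:
    """Penalize large jumps between latent codes of consecutive frames."""
    return F.mse_loss(z_t1, z_t)

def momentum_consistency_loss(z_tm1: torch.Tensor, z_t: torch.Tensor,
                              z_t1: torch.Tensor) -> torch.Tensor:
    """Crude momentum prior: the latent velocity (z_{t+1} - z_t) should stay
    close to the previous velocity (z_t - z_{t-1})."""
    return F.mse_loss(z_t1 - z_t, z_t - z_tm1)

# Example of combining the terms with hand-tuned weights:
# physics_penalty = 0.1 * temporal_consistency_loss(z_t, z_t1) \
#                 + 0.05 * momentum_consistency_loss(z_tm1, z_t, z_t1)
```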
The two approaches are compared along the following metrics (a plotting sketch follows the list):

- Success Rates: Task completion rates for pick-and-place
- Sample Efficiency: Learning speed and data requirements
- Generalization: Performance on unseen object configurations
- Representation Quality: Latent space interpretability and structure
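If you prefer plotting outside viskit, each run folder also contains a `progress.csv` that can be loaded with pandas. A sketch, assuming illustrative run-folder names and a hypothetical metric column (check the CSV header for the exact keys your config logs):

```python
# Illustrative only: the run-folder names and the metric column below are assumptions;
# inspect your rlkit/data/ output and the progress.csv header for the real names.
import pandas as pd
import matplotlib.pyplot as plt

physics = pd.read_csv("rlkit/data/compare_pick_and_place_500ep/full-rig-physics/progress.csv")
standard = pd.read_csv("rlkit/data/compare_pick_and_place_500ep/full-rig-standard/progress.csv")

metric = "evaluation/env_infos/final/success Mean"  # hypothetical column name
plt.plot(physics[metric], label="physics-informed RIG")
plt.plot(standard[metric], label="standard RIG")
plt.xlabel("epoch")
plt.ylabel("success rate")
plt.legend()
plt.show()
```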
The physics-informed approach should demonstrate:
- Higher success rates on pick-and-place tasks
- Better generalization to new object configurations
- More sample-efficient RL training
- More interpretable learned representations
- Improved stability in manipulation policies
This project extends the original RLKit and Multiworld frameworks with:
- Custom physics-informed VAE trainers
- Enhanced loss functions incorporating physics constraints
- Robotic manipulation-specific physics modeling
- Comprehensive comparison and evaluation tools
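As a minimal, self-contained sketch of that extension pattern (hypothetical class and function names, not the project's actual trainer in `rlkit/torch/vae/pick_and_place_physics.py`):

```python
import torch
import torch.nn.functional as F
from torch import nn

class TinyVAE(nn.Module):
    """Toy stand-in for the project's convolutional VAE."""
    def __init__(self, obs_dim=16, latent_dim=4):
        super().__init__()
        self.enc = nn.Linear(obs_dim, 2 * latent_dim)  # outputs mean and log-variance
        self.dec = nn.Linear(latent_dim, obs_dim)

    def encode(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        return mu, logvar

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return self.dec(z), mu, logvar

def vae_loss(model, x):
    """Standard VAE objective: reconstruction + KL divergence."""
    recon, mu, logvar = model(x)
    recon_loss = F.mse_loss(recon, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

def physics_informed_vae_loss(model, obs, next_obs, physics_weight=0.1):
    """Standard VAE loss plus a temporal-consistency penalty on consecutive latents."""
    base = vae_loss(model, obs) + vae_loss(model, next_obs)
    z_t, _ = model.encode(obs)
    z_t1, _ = model.encode(next_obs)
    return base + physics_weight * F.mse_loss(z_t1, z_t)

# Usage sketch on random data:
# vae = TinyVAE()
# obs, next_obs = torch.randn(8, 16), torch.randn(8, 16)
# physics_informed_vae_loss(vae, obs, next_obs).backward()
```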
- RLKit: Deep RL algorithms and utilities
- Multiworld: Goal-conditioned environments
- Original RIG paper: "Visual Reinforcement Learning with Imagined Goals" (Nair et al., 2018)
Feel free to reach out if you encounter any issues or want to extend this setup to other environments or algorithms.