Physics-Informed RIG: Goal-Conditioned RL with Physics Constraints

This project implements Physics-Informed RIG (Reinforcement Learning with Imagined Goals) using RLKit and Multiworld. The main contribution is a comparison of physics-informed versus standard representation learning for robotic pick-and-place tasks.

🎯 Project Overview

This research compares two approaches to goal-conditioned reinforcement learning:

  1. Physics-Informed RIG: Incorporates physics constraints (gravity, momentum, contact forces) into VAE representation learning
  2. Standard RIG: Uses standard VAE representation learning without physics constraints

The comparison is performed on the Sawyer Pick and Place robotic manipulation task.


🔧 Setup Instructions

  1. Clone this repository and ensure you have conda installed.
  2. Run the setup script:
bash setup.sh

This script will:

  • Create a Conda environment named final from env.yml
  • Install rlkit and multiworld using pip install -e
  • Set up all necessary dependencies

🚀 Running the Code

Run the Physics-Informed Variational Autoencoder (VAE) for representation learning:

python rlkit/examples/rig/pusher/physics_informed_rig.py

To run the base Reinforcement Learning with Imagined Goals (RIG) pipeline, use the comparison scripts in the Main Experiments section below, which train both the physics-informed and the standard variants.

🚀 Main Experiments

Physics-Informed vs Standard RIG Comparison

Run the main experiment from the paper, comparing both approaches. Note: before running the code, change the sys.path entries (lines 5-7 of the script) to point at your repository root so the imports resolve.

# Navigate to the pick-and-place experiments
cd rlkit/examples/rig/pick_and_place

# Optional: run the VAE-only comparison
python physics_vs_standard_rig_comparison.py

# Run the complete RIG training comparison (VAE + RL training)
python full_rig_training_comparison.py

Similarly, for the pusher task, run:

python rlkit/examples/rig/pusher/physics_informed_rig.py

Visualization

Results are saved in two folders whose names end in *full-rig-physics and *full-rig-standard. To compare the two algorithms, copy both subfolders into a common directory and point viskit at it. Example:

python -m viskit.frontend rlkit/data/compare_pick_and_place_500ep/ 

For pusher, run:

python -m viskit.frontend rlkit/data/compare_pusher/

For a simple demonstration of physics-informed representation learning:

cd rlkit/examples/rig/pick_and_place
python simple_physics_comparison.py

These scripts use the image-based Sawyer environments to demonstrate representation learning and reinforcement learning with RIG (Reinforcement Learning with Imagined Goals) under physics-informed constraints.
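For intuition, the core RIG mechanic is to imagine goals by sampling from the VAE prior and to reward the agent with the negative distance between the encoded observation and the goal in latent space. Below is a minimal, self-contained sketch of that idea; the encode() interface returning a (mean, log-variance) pair is an assumption for illustration, not this repo's exact API.

import torch

def imagine_goal(latent_dim):
    # RIG samples "imagined" goals from the VAE prior N(0, I)
    return torch.randn(latent_dim)

def latent_reward(vae, obs_image, goal_z):
    # Reward is the negative Euclidean distance between the encoded
    # observation and the imagined goal in latent space
    with torch.no_grad():
        mu, _ = vae.encode(obs_image.unsqueeze(0))  # assumed (mu, logvar) API
    return -torch.norm(mu.squeeze(0) - goal_z).item()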


📊 Visualizing Results and Policies

Viewing Training Results

During training, results are saved to:

rlkit/data/<exp_prefix>/<foldername>/

To visualize training progress and metrics, use viskit:

python viskit/viskit/frontend.py rlkit/data/<exp_prefix>/

For a single experiment:

python viskit/viskit/frontend.py rlkit/data/<exp_prefix>/<foldername>/

Visualizing Trained Policies

python rlkit/scripts/sim_policy.py rlkit/data/<exp_prefix>/<foldername>/params.pkl

Example: After running physics_informed_rig.py, you might find results in a folder like:

rlkit/data/pusher-physics-rig/2025_10_05_12_34_56_000000--s-0/

Then visualize with:

python rlkit/scripts/sim_policy.py rlkit/data/pusher-physics-rig/2025_10_05_12_34_56_000000--s-0/params.pkl

📁 Project Structure

.
├── README.md                           # This file
├── setup.sh                           # Environment setup script
├── env.yml                            # Conda environment specification
├── requirements.txt                   # Python package requirements
├── rlkit/                             # Modified RLKit with physics-informed components
│   ├── examples/rig/pick_and_place/   # Main pick-and-place experiments
│   │   ├── physics_vs_standard_rig_comparison.py    # VAE comparison (working)
│   │   ├── full_rig_training_comparison.py          # Complete RIG comparison
│   │   ├── simple_physics_comparison.py             # Quick demo
│   │   ├── physics_informed_rig.py                  # Core implementation
│   │   └── README.md                               # Detailed experiment guide
│   ├── rlkit/torch/vae/               # VAE implementations
│   │   ├── pick_and_place_physics.py  # Physics-informed VAE trainer
│   │   └── vae_trainer.py             # Standard VAE trainer
│   └── rlkit/torch/grill/             # GRILL framework integration
│       └── common.py                  # Common VAE training utilities
│
├── multiworld/                        # Goal-conditioned environments
│   └── envs/mujoco/sawyer_xyz/        # Sawyer robot environments
└── viskit/                           # Visualization tools

🔬 Key Features

Physics-Informed Representation Learning

  • Temporal Consistency: Ensures smooth latent transitions
  • Momentum Conservation: Enforces physics-based momentum constraints
  • Gravity Modeling: Incorporates gravitational effects
  • Contact Forces: Models object-object and robot-object interactions
  • Grasp Stability: Ensures realistic grasping physics
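
To make these constraints concrete, here is a minimal sketch of how temporal-consistency and momentum penalties on consecutive latent codes could be added to the usual reconstruction + KL objective. The weights and the finite-difference formulation are illustrative assumptions, not the exact losses used in pick_and_place_physics.py.

import torch.nn.functional as F

def physics_penalties(z_prev, z_curr, z_next, w_temporal=1.0, w_momentum=0.1):
    # Temporal consistency: consecutive latent codes should change smoothly
    temporal = F.mse_loss(z_next, z_curr)
    # Approximate momentum conservation: the latent "velocity" should stay
    # roughly constant between adjacent steps (absent contact events)
    momentum = F.mse_loss(z_next - z_curr, z_curr - z_prev)
    return w_temporal * temporal + w_momentum * momentum

# The total VAE objective would then look roughly like:
#   loss = reconstruction_loss + beta * kl_divergence + physics_penalties(...)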

Comparison Metrics

  • Success Rates: Task completion rates for pick-and-place
  • Sample Efficiency: Learning speed and data requirements
  • Generalization: Performance on unseen object configurations
  • Representation Quality: Latent space interpretability and structure
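
RLKit writes a progress.csv in each run directory, so a quick numeric comparison of the two variants can be made by averaging a success-rate column over the final epochs. The column name and run paths below are placeholders; inspect the actual CSV header for the right key.

import pandas as pd

def final_success_rate(progress_csv, column="evaluation/env_infos/final/success Mean"):
    # Column name is a placeholder; check the CSV header for the real key
    df = pd.read_csv(progress_csv)
    return df[column].tail(10).mean()  # average over the last 10 epochs

physics = final_success_rate("rlkit/data/<physics-run>/progress.csv")
standard = final_success_rate("rlkit/data/<standard-run>/progress.csv")
print(f"physics-informed: {physics:.3f}   standard: {standard:.3f}")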

📊 Expected Results

The physics-informed approach should demonstrate:

  • Higher success rates on pick-and-place tasks
  • Better generalization to new object configurations
  • More sample-efficient RL training
  • More interpretable learned representations
  • Improved stability in manipulation policies

🛠 Development Notes

This project extends the original RLKit and Multiworld frameworks with:

  • Custom physics-informed VAE trainers
  • Enhanced loss functions incorporating physics constraints
  • Robotic manipulation-specific physics modeling
  • Comprehensive comparison and evaluation tools

📖 References

  • RLKit: Deep RL algorithms and utilities
  • Multiworld: Goal-conditioned environments
  • Original RIG paper: Nair et al., "Visual Reinforcement Learning with Imagined Goals" (NeurIPS 2018)

Feel free to reach out if you encounter any issues or want to extend this setup to other environments or algorithms.
