6 changes: 3 additions & 3 deletions docs/README.md
@@ -1,6 +1,6 @@
-# LeRobot Documentation
+# OpenTau Documentation

-This directory contains the Sphinx documentation for the LeRobot project.
+This directory contains the Sphinx documentation for the OpenTau project.

## Installation

@@ -50,4 +50,4 @@ The documentation uses **reStructuredText (.rst)** and **Markdown (.md)** (via M
- **reStructuredText**: The standard for Sphinx. Good for complex directives and autodoc.
- **Markdown**: Supported via MyST. Useful for narrative documentation.

-To update the API documentation, edit the docstrings in the Python code. The `index.rst` file is configured to automatically document the `lerobot` package.
+To update the API documentation, edit the docstrings in the Python code. The `index.rst` file is configured to automatically document the `OpenTau` package.
2 changes: 1 addition & 1 deletion docs/source/installation.rst
@@ -53,7 +53,7 @@ We recommend using `uv <https://docs.astral.sh/uv/>`_ for fast and simple Python

.. code-block:: bash

-uv sync --extra tau0 --extra test --extra video_benchmark --extra accelerate --extra dev --extra feetech --extra openai --extra onnx --extra smolvla --extra libero --extra metaworld
+uv sync --all-extras

3. **Activate the virtual environment**

8 changes: 4 additions & 4 deletions docs/source/tutorials/RECAP.rst
@@ -66,7 +66,7 @@ Command line to run the SFT training:

.. code-block:: bash

-accelerate launch --config_file=<path/to/accelerate_config.yaml> lerobot/scripts/train.py --config_path=<path/to/config.json>
+opentau-train --accelerate-config=<path/to/accelerate_config.yaml> --config_path=<path/to/config.json>


Stage 2: Fine-tuning the value function on whole libero dataset till convergence.
@@ -150,7 +150,7 @@ Command line to run the value function training:

.. code-block:: bash

-accelerate launch --config_file=<path/to/accelerate_config.yaml> lerobot/scripts/train.py --config_path=<path/to/config.json>
+opentau-train --accelerate-config=<path/to/accelerate_config.yaml> --config_path=<path/to/config.json>


Stage 3: Offline RL training
@@ -218,7 +218,7 @@ Command line to run the value function fine-tuning:

.. code-block:: bash

-accelerate launch --config_file=<path/to/accelerate_config.yaml> lerobot/scripts/train.py --config_path=<path/to/config.json>
+opentau-train --accelerate-config=<path/to/accelerate_config.yaml> --config_path=<path/to/config.json>


Sub-stage 3: Compute the advantage for each data point using the fine-tuned value function and calculate the epsilon threshold for setting I\ :sub:`t`\ (Indicator) VLA policy training.
@@ -302,4 +302,4 @@ Command line to run the VLA policy fine-tuning:

.. code-block:: bash

-accelerate launch --config_file=<path/to/accelerate_config.yaml> lerobot/scripts/train.py --config_path=<path/to/config.json>
+opentau-train --accelerate-config=<path/to/accelerate_config.yaml> --config_path=<path/to/config.json>
14 changes: 10 additions & 4 deletions docs/source/tutorials/training.rst
@@ -13,16 +13,22 @@ To train a model, run the following command:

.. code-block:: bash

-accelerate launch lerobot/scripts/train.py --config_path=examples/pi05_config.json
+opentau-train --config_path=examples/pi05_config.json

This uses the default accelerate config file at ``~/.cache/huggingface/accelerate/default_config.yaml``, which is created by running ``accelerate config``.

Optionally, to use a specific accelerate config file (instead of the default), run:

.. code-block:: bash

-accelerate launch --config_file=examples/accelerate_ci_config.yaml lerobot/scripts/train.py --config_path=examples/pi05_config.json
+opentau-train --accelerate-config=examples/accelerate_ci_config.yaml --config_path=examples/pi05_config.json

.. note::
For advanced users: ``opentau-train`` is a convenience wrapper that invokes ``accelerate launch`` on ``src/opentau/scripts/train.py``. The command above is equivalent to running:

.. code-block:: bash

accelerate launch --config_file examples/accelerate_ci_config.yaml src/opentau/scripts/train.py --config_path=examples/pi05_config.json

Checkpointing and Resuming Training
-----------------------------------
@@ -31,7 +37,7 @@ Start training and saving checkpoints:

.. code-block:: bash

-accelerate launch lerobot/scripts/train.py --config_path=examples/pi05_config.json --output_dir=outputs/train/pi05 --steps 40 --log_freq 5 --save_freq 20
+opentau-train --config_path=examples/pi05_config.json --output_dir=outputs/train/pi05 --steps 40 --log_freq 5 --save_freq 20

A checkpoint should be saved at step 40. The checkpoint should be saved in the directory ``outputs/train/pi05/checkpoints/000040/``.

@@ -47,7 +53,7 @@ Training can be resumed by running:

.. code-block:: bash

-accelerate launch lerobot/scripts/train.py --config_path=outputs/train/pi05/checkpoints/000040/train_config.json --resume=true --steps=100
+opentau-train --config_path=outputs/train/pi05/checkpoints/000040/train_config.json --resume=true --steps=100

.. note::
When resuming training from a checkpoint, the training step count will continue from the checkpoint's step, but the dataloader will be reset.
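The checkpoint layout documented above can be reproduced with a short path computation. This is a sketch based only on the example path ``outputs/train/pi05/checkpoints/000040/``; the six-digit zero-padding is inferred from that example:

```python
from pathlib import Path

# Reconstruct the documented checkpoint layout: the directory name is the
# training step, zero-padded to six digits (e.g. step 40 -> 000040).
output_dir = Path("outputs/train/pi05")
step = 40
ckpt_dir = output_dir / "checkpoints" / f"{step:06d}"
print(ckpt_dir)  # outputs/train/pi05/checkpoints/000040
```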
3 changes: 3 additions & 0 deletions pyproject.toml
@@ -88,6 +88,9 @@ dependencies = [
"deepspeed>=0.17.1"
]

[project.scripts]
opentau-train = "opentau.scripts.launch_train:main"

[project.optional-dependencies]
dev = ["pre-commit>=3.7.0",
"debugpy>=1.8.1",
63 changes: 63 additions & 0 deletions src/opentau/scripts/launch_train.py
@@ -0,0 +1,63 @@
# Copyright 2026 Tensor Auto Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import argparse
import subprocess
import sys
from pathlib import Path

import opentau.scripts.train as train_script


def main():
parser = argparse.ArgumentParser(
description="Launch OpenTau training with Accelerate",
usage="opentau-train [--accelerate-config CONFIG] [TRAINING_ARGS]",
)
parser.add_argument(
"--accelerate-config", type=str, help="Path to accelerate config file (yaml)", default=None
)
# We use parse_known_args so that all other arguments are collected
# These will be passed to the training script
args, unknown_args = parser.parse_known_args()

# Base command
cmd = ["accelerate", "launch"]

# Add accelerate config if provided
if args.accelerate_config:
cmd.extend(["--config_file", args.accelerate_config])

# Add the path to the training script
# We resolve the path to ensure it's absolute
train_script_path = Path(train_script.__file__).resolve()
cmd.append(str(train_script_path))

# Add all other arguments (passed to the training script)
cmd.extend(unknown_args)

# Print the command for transparency
print(f"Executing: {' '.join(cmd)}")

# Run the accelerate launch command and propagate its exit status
try:
subprocess.run(cmd, check=True)
except subprocess.CalledProcessError as e:
sys.exit(e.returncode)
except KeyboardInterrupt:
sys.exit(130)


if __name__ == "__main__":
main()
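The wrapper above relies on two standard-library behaviors: ``argparse.parse_known_args`` splits off the wrapper's own flag while leaving everything else for the training script, and ``subprocess.CalledProcessError.returncode`` carries the child's exit status. A minimal, self-contained sketch of both (the argument values here are made up for illustration):

```python
import argparse
import subprocess
import sys

# Split wrapper-specific flags from pass-through training arguments,
# mirroring how launch_train.py forwards unknown args to train.py.
parser = argparse.ArgumentParser()
parser.add_argument("--accelerate-config", default=None)
args, unknown = parser.parse_known_args(
    ["--accelerate-config", "cfg.yaml", "--config_path", "c.json", "--steps", "40"]
)
print(args.accelerate_config)  # cfg.yaml
print(unknown)                 # ['--config_path', 'c.json', '--steps', '40']

# With check=True, a non-zero child exit raises CalledProcessError, whose
# returncode can be forwarded to sys.exit() just as the wrapper does.
try:
    subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"], check=True)
except subprocess.CalledProcessError as e:
    print(e.returncode)  # 3
```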