diff --git a/docs/README.md b/docs/README.md
index 68886fd..eb5fec2 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -1,6 +1,6 @@
-# LeRobot Documentation
+# OpenTau Documentation
-This directory contains the Sphinx documentation for the LeRobot project.
+This directory contains the Sphinx documentation for the OpenTau project.
## Installation
@@ -50,4 +50,4 @@ The documentation uses **reStructuredText (.rst)** and **Markdown (.md)** (via M
- **reStructuredText**: The standard for Sphinx. Good for complex directives and autodoc.
- **Markdown**: Supported via MyST. Useful for narrative documentation.
-To update the API documentation, edit the docstrings in the Python code. The `index.rst` file is configured to automatically document the `lerobot` package.
+To update the API documentation, edit the docstrings in the Python code. The `index.rst` file is configured to automatically document the `opentau` package.
diff --git a/docs/source/installation.rst b/docs/source/installation.rst
index fef3a75..4dcb43a 100644
--- a/docs/source/installation.rst
+++ b/docs/source/installation.rst
@@ -53,7 +53,7 @@ We recommend using `uv `_ for fast and simple Python
.. code-block:: bash
- uv sync --extra tau0 --extra test --extra video_benchmark --extra accelerate --extra dev --extra feetech --extra openai --extra onnx --extra smolvla --extra libero --extra metaworld
+ uv sync --all-extras
3. **Activate the virtual environment**
diff --git a/docs/source/tutorials/RECAP.rst b/docs/source/tutorials/RECAP.rst
index 29877aa..11b4371 100644
--- a/docs/source/tutorials/RECAP.rst
+++ b/docs/source/tutorials/RECAP.rst
@@ -66,7 +66,7 @@ Command line to run the SFT training:
.. code-block:: bash
- accelerate launch --config_file= lerobot/scripts/train.py --config_path=
+ opentau-train --accelerate-config= --config_path=
Stage 2: Fine-tuning the value function on whole libero dataset till convergence.
@@ -150,7 +150,7 @@ Command line to run the value function training:
.. code-block:: bash
- accelerate launch --config_file= lerobot/scripts/train.py --config_path=
+ opentau-train --accelerate-config= --config_path=
Stage 3: Offline RL training
@@ -218,7 +218,7 @@ Command line to run the value function fine-tuning:
.. code-block:: bash
- accelerate launch --config_file= lerobot/scripts/train.py --config_path=
+ opentau-train --accelerate-config= --config_path=
Sub-stage 3: Compute the advantage for each data point using the fine-tuned value function, and calculate the epsilon threshold used to set the indicator I\ :sub:`t`\  for VLA policy training.
@@ -302,4 +302,4 @@ Command line to run the VLA policy fine-tuning:
.. code-block:: bash
- accelerate launch --config_file= lerobot/scripts/train.py --config_path=
+ opentau-train --accelerate-config= --config_path=
diff --git a/docs/source/tutorials/training.rst b/docs/source/tutorials/training.rst
index fd60a87..e47a5de 100644
--- a/docs/source/tutorials/training.rst
+++ b/docs/source/tutorials/training.rst
@@ -13,7 +13,7 @@ To train a model, run the following command:
.. code-block:: bash
- accelerate launch lerobot/scripts/train.py --config_path=examples/pi05_config.json
+ opentau-train --config_path=examples/pi05_config.json
This uses the default accelerate config file at ``~/.cache/huggingface/accelerate/default_config.yaml``, which is created by running ``accelerate config``.
@@ -21,8 +21,14 @@ Optionally, to use a specific accelerate config file (instead of the default), r
.. code-block:: bash
- accelerate launch --config_file=examples/accelerate_ci_config.yaml lerobot/scripts/train.py --config_path=examples/pi05_config.json
+ opentau-train --accelerate-config=examples/accelerate_ci_config.yaml --config_path=examples/pi05_config.json
+.. note::
+   For advanced users: ``opentau-train`` is a convenience wrapper that runs ``src/opentau/scripts/train.py`` via ``accelerate launch``. The command above is equivalent to running:
+
+   .. code-block:: bash
+
+      accelerate launch --config_file examples/accelerate_ci_config.yaml src/opentau/scripts/train.py --config_path=examples/pi05_config.json
Checkpointing and Resuming Training
-----------------------------------
@@ -31,7 +37,7 @@ Start training and saving checkpoints:
.. code-block:: bash
- accelerate launch lerobot/scripts/train.py --config_path=examples/pi05_config.json --output_dir=outputs/train/pi05 --steps 40 --log_freq 5 --save_freq 20
+ opentau-train --config_path=examples/pi05_config.json --output_dir=outputs/train/pi05 --steps 40 --log_freq 5 --save_freq 20
A checkpoint should be saved at step 40. The checkpoint should be saved in the directory ``outputs/train/pi05/checkpoints/000040/``.
@@ -47,7 +53,7 @@ Training can be resumed by running:
.. code-block:: bash
- accelerate launch lerobot/scripts/train.py --config_path=outputs/train/pi05/checkpoints/000040/train_config.json --resume=true --steps=100
+ opentau-train --config_path=outputs/train/pi05/checkpoints/000040/train_config.json --resume=true --steps=100
.. note::
When resuming training from a checkpoint, the training step count will continue from the checkpoint's step, but the dataloader will be reset.
diff --git a/pyproject.toml b/pyproject.toml
index 64a250b..2a98f6a 100644
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -88,6 +88,9 @@ dependencies = [
"deepspeed>=0.17.1"
]
+[project.scripts]
+opentau-train = "opentau.scripts.launch_train:main"
+
[project.optional-dependencies]
dev = ["pre-commit>=3.7.0",
"debugpy>=1.8.1",
diff --git a/src/opentau/scripts/launch_train.py b/src/opentau/scripts/launch_train.py
new file mode 100644
index 0000000..991b37f
--- /dev/null
+++ b/src/opentau/scripts/launch_train.py
@@ -0,0 +1,63 @@
+# Copyright 2026 Tensor Auto Inc. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import argparse
+import subprocess
+import sys
+from pathlib import Path
+
+import opentau.scripts.train as train_script
+
+
+def main():
+    parser = argparse.ArgumentParser(
+        description="Launch OpenTau training with Accelerate",
+        usage="opentau-train [--accelerate-config CONFIG] [TRAINING_ARGS]",
+    )
+    parser.add_argument(
+        "--accelerate-config", type=str, help="Path to accelerate config file (yaml)", default=None
+    )
+    # We use parse_known_args so that all other arguments are collected.
+    # These will be passed through to the training script.
+    args, unknown_args = parser.parse_known_args()
+
+    # Base command
+    cmd = ["accelerate", "launch"]
+
+    # Add accelerate config if provided
+    if args.accelerate_config:
+        cmd.extend(["--config_file", args.accelerate_config])
+
+    # Add the path to the training script.
+    # We resolve the path to ensure it's absolute.
+    train_script_path = Path(train_script.__file__).resolve()
+    cmd.append(str(train_script_path))
+
+    # Add all other arguments (passed to the training script)
+    cmd.extend(unknown_args)
+
+    # Print the command for transparency
+    print(f"Executing: {' '.join(cmd)}")
+
+    # Run the accelerate launch command as a subprocess and propagate its exit status
+    try:
+        subprocess.run(cmd, check=True)
+    except subprocess.CalledProcessError as e:
+        sys.exit(e.returncode)
+    except KeyboardInterrupt:
+        sys.exit(130)
+
+
+if __name__ == "__main__":
+    main()
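The argument-forwarding pattern in `launch_train.py` can be exercised in isolation. The sketch below is a standalone illustration, not part of the patch; `build_cmd` and the `train.py` placeholder path are hypothetical names introduced only for this example. It shows how `parse_known_args` lets the wrapper consume its own `--accelerate-config` flag while forwarding every other argument verbatim:

```python
import argparse


def build_cmd(argv, train_script_path="train.py"):
    # Mirrors the wrapper's argument handling: flags the wrapper knows
    # about are consumed here; everything else is forwarded untouched.
    parser = argparse.ArgumentParser()
    parser.add_argument("--accelerate-config", default=None)
    args, unknown = parser.parse_known_args(argv)

    cmd = ["accelerate", "launch"]
    if args.accelerate_config:
        cmd.extend(["--config_file", args.accelerate_config])
    cmd.append(train_script_path)
    cmd.extend(unknown)  # e.g. --config_path, --steps, --resume, ...
    return cmd


print(build_cmd(["--accelerate-config=cfg.yaml", "--config_path=pi05.json", "--steps=40"]))
# ['accelerate', 'launch', '--config_file', 'cfg.yaml', 'train.py', '--config_path=pi05.json', '--steps=40']
```

One consequence of this design worth noting: the wrapper claims `--accelerate-config` wherever it appears on the command line, so a training script that wanted a flag of the same name could never receive it through the wrapper.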