Conversation

@bbkx226 bbkx226 commented Dec 14, 2025

Resolves #11

This pull request introduces Ray-based distributed training support to the project, enabling scalable multi-node and multi-GPU training with improved fault tolerance and checkpointing. The core addition is the new ray_run.py script, which implements a Ray Train-compatible training loop and integrates with Ray Data for efficient data loading. The documentation and requirements have been updated accordingly, and a new argument for gradient accumulation has been added.

Ray Distributed Training Integration:

  • Added a new script, ray_run.py, that implements distributed training with Ray Train and Torch DDP, supporting multi-node and multi-GPU setups, checkpointing, and Ray Data integration for efficient batch loading. The training loop mirrors the existing accelerate-based approach but is adapted to Ray's distributed environment (a minimal sketch of this pattern follows the list).
  • Updated requirements.txt to include Ray Train and related dependencies, with version constraints and platform-specific installation for flash-attn.
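
For orientation, here is a minimal, self-contained sketch of the driver-plus-worker pattern that ray_run.py follows: a TorchTrainer drives a per-worker loop, prepare_model handles the DDP wrapping, and each worker consumes its Ray Data shard via iter_torch_batches. The toy nn.Linear model, the in-memory dataset, and all hyperparameter values are illustrative placeholders, not the project's actual code.

```python
import numpy as np
import torch
import torch.nn as nn

import ray
from ray import train
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer, prepare_model


def train_func(config):
    # Each Ray Train worker runs this function; prepare_model wraps the
    # model in DistributedDataParallel and moves it to the worker's device.
    model = prepare_model(nn.Linear(8, 1))
    optimizer = torch.optim.AdamW(model.parameters(), lr=config["lr"])
    loss_fn = nn.MSELoss()

    # Ray Data shards the "train" dataset across workers; iter_torch_batches
    # yields dict batches already converted to torch tensors.
    shard = train.get_dataset_shard("train")
    for epoch in range(config["epochs"]):
        for batch in shard.iter_torch_batches(batch_size=config["batch_size"]):
            loss = loss_fn(model(batch["x"]), batch["y"])
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        # Report metrics (and optionally a checkpoint) back to the driver.
        train.report({"epoch": epoch, "loss": loss.item()})


if __name__ == "__main__":
    # Toy in-memory dataset standing in for the project's real data pipeline.
    ds = ray.data.from_items(
        [
            {"x": np.random.rand(8).astype("float32"),
             "y": np.random.rand(1).astype("float32")}
            for _ in range(256)
        ]
    )
    trainer = TorchTrainer(
        train_func,
        train_loop_config={"lr": 1e-3, "epochs": 2, "batch_size": 16},
        scaling_config=ScalingConfig(num_workers=2, use_gpu=False),
        datasets={"train": ds},
    )
    trainer.fit()
```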

Documentation Updates:

  • Expanded the README.md with a new section on Ray distributed training, covering installation steps, usage instructions, and notes on checkpointing and data compatibility (an example launch sequence is sketched below).
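
A typical multi-node launch might look like the commands below. The ray CLI commands and the ray[train] pip extra are standard Ray usage; the ray_run.py invocation follows this PR (assuming the Args fields are exposed as command-line flags), and HEAD_IP is a placeholder.

```bash
# Install Ray with the Train extra (the exact pins live in requirements.txt).
pip install -U "ray[train]"

# On the head node: start a Ray cluster.
ray start --head --port=6379

# On every additional node: join the cluster (HEAD_IP is a placeholder).
ray start --address=HEAD_IP:6379

# From the head node, launch the Ray training script added by this PR.
python ray_run.py --gradient_accumulation_steps 4
```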

Training Configuration Improvements:

  • Added a gradient_accumulation_steps argument to the Args class in arguments.py to support gradient accumulation in both the Ray and accelerate pipelines (illustrated by the sketch below).
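
For context, gradient accumulation defers the optimizer step so that several micro-batches contribute to a single update, which is what the new argument controls. The loop below is the standard pattern; apart from the gradient_accumulation_steps name, the model, optimizer, and data are placeholders.

```python
import torch
import torch.nn as nn

# Placeholder stand-ins for the project's real model, optimizer, and loader.
model = nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loader = [(torch.randn(4, 8), torch.randn(4, 1)) for _ in range(8)]
gradient_accumulation_steps = 4  # new Args field introduced by this PR

optimizer.zero_grad()
for step, (x, y) in enumerate(loader, start=1):
    loss = nn.functional.mse_loss(model(x), y)
    # Scale the loss so the accumulated gradient matches a large-batch update.
    (loss / gradient_accumulation_steps).backward()
    if step % gradient_accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad()
```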

bbkx226 commented Dec 14, 2025

#11
