
feat(trainer): run dataset and model initializers in parallel#292

Open
Sayan4496 wants to merge 1 commit into kubeflow:main from Sayan4496:feat-parallel-initializers

Conversation

@Sayan4496

Summary

Run the dataset and model initializer containers in parallel in the container backend instead of executing them sequentially.

Motivation

Previously, when both dataset and model initializers were configured, they executed sequentially, increasing total startup time.
Since Docker/Podman allow multiple containers to mount the same volume simultaneously, running them in parallel reduces initialization latency to approximately the maximum of the two durations rather than their sum.

Changes

  • Execute dataset and model initializers concurrently using ThreadPoolExecutor.
  • Wait for all initializers to complete and propagate any failure.
  • Preserve existing timeout, logging, and cleanup behavior.
  • Maintain backward compatibility when only one initializer is configured.
  • Add debug log after successful completion of all initializers.
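The concurrent execution described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual implementation: `run_initializer` is a hypothetical stand-in for starting an initializer container and waiting for it to exit, and the real backend method names differ.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def run_initializer(name: str) -> str:
    # Hypothetical placeholder: in the real backend this would start a
    # container (dataset or model initializer) and block until it exits.
    return f"{name} done"


def run_initializers_in_parallel(names: list[str]) -> list[str]:
    results = []
    # One worker per initializer, so both run concurrently.
    with ThreadPoolExecutor(max_workers=len(names)) as executor:
        futures = [executor.submit(run_initializer, n) for n in names]
        # Wait for all initializers to complete and propagate errors.
        for future in as_completed(futures):
            results.append(future.result())
    # Sort for a deterministic result regardless of completion order.
    return sorted(results)


print(run_initializers_in_parallel(["dataset", "model"]))
```

Because the workers only wait on container exit, total wall-clock time is roughly the maximum of the two initializer durations, which is the latency win the PR claims.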

Testing

  • Trainer test suite passes locally: 184 passed.
  • No regressions observed in container backend behavior.

Fixes #290

Copilot AI review requested due to automatic review settings February 12, 2026 18:07
@google-oss-prow
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign astefanutti for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@github-actions
Contributor

🎉 Welcome to the Kubeflow SDK! 🎉

Thanks for opening your first PR! We're happy to have you as part of our community 🚀

Here's what happens next:

  • If you haven't already, please check out our Contributing Guide for repo-specific guidelines and the Kubeflow Contributor Guide for general community standards
  • Our team will review your PR soon! cc @kubeflow/kubeflow-sdk-team

Join the community:

Feel free to ask questions in the comments if you need any help or clarification!
Thanks again for contributing to Kubeflow! 🙏

Contributor

Copilot AI left a comment


Pull request overview

Updates the container backend to reduce startup latency by running dataset and model initializer containers concurrently instead of sequentially.

Changes:

  • Execute dataset and model initializers in parallel via ThreadPoolExecutor.
  • Wait for initializer completion and propagate failures to the caller.
  • Add a debug log after all initializers finish successfully.

Comment on lines +573 to +576:

```python
# Wait for all initializers to complete and propagate errors
for future in as_completed(futures):
    future.result()
```


Copilot AI Feb 12, 2026


When iterating `as_completed(futures)`, the first failing initializer raises immediately, so exceptions from any other initializer futures are never consumed. The surfaced failure then depends on completion order, which makes debugging nondeterministic when multiple initializers fail. Consider capturing results for all futures: map future→name, collect the exception from every `future.result()` call into a list, and raise a combined, deterministic error after all have finished.

Suggested change

```diff
-# Wait for all initializers to complete and propagate errors
-for future in as_completed(futures):
-    future.result()
+# Wait for all initializers to complete and collect errors deterministically
+exceptions: list[Exception] = []
+for future in as_completed(futures):
+    try:
+        future.result()
+    except Exception as exc:
+        exceptions.append(exc)
+if exceptions:
+    if len(exceptions) == 1:
+        raise RuntimeError("Initializer failed") from exceptions[0]
+    messages = "\n".join(
+        f"{idx + 1}) {type(exc).__name__}: {exc}"
+        for idx, exc in enumerate(exceptions)
+    )
+    raise RuntimeError(
+        f"Multiple initializers failed:\n{messages}"
+    )
```
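The aggregation idea behind this suggestion can be demonstrated in isolation. The sketch below is hypothetical (the `collect_errors` helper and the callables are illustrative, not part of the PR): with multiple failing tasks, every exception is consumed and the report is deterministic regardless of which future completes first.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def collect_errors(tasks):
    """Run zero-argument callables concurrently and collect all failures."""
    exceptions = []
    with ThreadPoolExecutor() as executor:
        futures = [executor.submit(fn) for fn in tasks]
        for future in as_completed(futures):
            try:
                future.result()
            except Exception as exc:
                exceptions.append(exc)
    # Sort the rendered messages so the report does not depend on
    # completion order (the nondeterminism the review comment flags).
    return sorted(f"{type(e).__name__}: {e}" for e in exceptions)


report = collect_errors([lambda: 1 / 0, lambda: [].pop(), lambda: None])
print(report)
```

Both failures appear in the report; a bare `for future in as_completed(...): future.result()` loop would instead surface only whichever one happened to finish first.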

@Sayan4496 force-pushed the feat-parallel-initializers branch from b84f0e2 to 987ee5d on February 13, 2026 15:14
Signed-off-by: Sayan Deyashi <deyashisayan2@gmail.com>
@Sayan4496 force-pushed the feat-parallel-initializers branch from 987ee5d to 85e110d on February 13, 2026 15:21
@Sayan4496 changed the title from feat(container): run dataset and model initializers in parallel to feat(trainer): run dataset and model initializers in parallel on February 13, 2026

Successfully merging this pull request may close these issues.

feat(container): Run dataset and model initializers in parallel
