
Refactor(handler part 15): extract training preset switching into dedicated mixin#620

Merged
ChuxiJ merged 3 commits into ace-step:main from 1larity:feat/handler-decomp-part-15
Feb 17, 2026

Conversation

1larity (Contributor) commented Feb 16, 2026

Summary

Extract switch_to_training_preset from AceStepHandler into a dedicated TrainingPresetMixin to continue handler decomposition while preserving behavior.

What Changed

  • Added TrainingPresetMixin:
    • acestep/core/generation/handler/training_preset.py
  • Wired mixin into handler composition:
    • acestep/core/generation/handler/__init__.py
    • acestep/handler.py
  • Removed inline switch_to_training_preset implementation from AceStepHandler.
  • Added focused unit tests:
    • acestep/core/generation/handler/training_preset_test.py
  • Follow-up fix: preserve MLX behavior on preset switch by forwarding use_mlx_dit from cached init params.

Behavioral Notes

  • Public interface is unchanged: AceStepHandler.switch_to_training_preset(...) remains available via mixin inheritance.
  • switch_to_training_preset now explicitly forwards:
    • prefer_source (existing behavior)
    • use_mlx_dit (regression fix for MLX users)
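
Based on the notes above, the extracted mixin's forwarding logic might look roughly like the following sketch. The names last_init_params and initialize_service come from the PR's walkthrough; the exact signature, parameter keys, and error handling are assumptions for illustration, not the actual acestep source.

```python
class TrainingPresetMixin:
    """Hypothetical sketch of the extracted mixin (not the actual acestep source)."""

    def switch_to_training_preset(self, prefer_source=None):
        params = getattr(self, "last_init_params", None)
        if params is None:
            # No cached init params: the handler was never initialized.
            raise RuntimeError("initialize_service must be called first")
        if params.get("quantization") is None:
            return None  # already training-safe; early return, no reinit
        new_params = dict(params)          # copy: never mutate the cached dict
        new_params["quantization"] = None  # training-safe: disable quantization
        if prefer_source is not None:
            new_params["prefer_source"] = prefer_source  # existing behavior
        # Explicit forwarding mirrors the PR's regression fix: MLX behavior
        # must survive the preset switch.
        if "use_mlx_dit" in params:
            new_params["use_mlx_dit"] = params["use_mlx_dit"]
        return self.initialize_service(**new_params)


class _StubHandler(TrainingPresetMixin):
    """Minimal stand-in for AceStepHandler to exercise the mixin."""

    def __init__(self, cached):
        self.last_init_params = cached
        self.seen = None

    def initialize_service(self, **kwargs):
        self.seen = kwargs
        return "ok"


handler = _StubHandler({"quantization": "int8", "use_mlx_dit": True})
result = handler.switch_to_training_preset()
```

With this stub, the reinit call receives quantization=None and the preserved use_mlx_dit flag, while the cached params keep their original values.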

Tests

  • python acestep/core/generation/handler/training_preset_test.py (6 tests passing)
  • Attempted a broader run of python acestep/core/generation/handler/init_service_test.py, but this environment lacks torchaudio (the import failure is unrelated to this PR's logic).

Scope / Risk

  • Single-responsibility decomposition only.
  • No non-target runtime-path changes beyond preserving use_mlx_dit passthrough on preset switching.
  • Low risk, with focused regression coverage.

Summary by CodeRabbit

  • New Features

    • Added a training-safe preset mode that disables quantization for compatible inference setups and is now available on the handler interface.
  • Tests

    • Added comprehensive unit tests covering preset switching, edge cases, error propagation, and preservation of cached init parameters.

coderabbitai bot commented Feb 16, 2026

📝 Walkthrough

Extracts training-preset logic from AceStepHandler into a new exported TrainingPresetMixin; integrates the mixin into AceStepHandler and adds unit tests validating reinitialization behavior and error cases.

Changes

  • Mixin implementation (acestep/core/generation/handler/training_preset.py, acestep/core/generation/handler/__init__.py): Adds TrainingPresetMixin with switch_to_training_preset(), which reconstructs init params with quantization=None and calls initialize_service; exports the mixin in the package __all__.
  • Handler integration (acestep/handler.py): Removes switch_to_training_preset from AceStepHandler and adds TrainingPresetMixin to its base classes.
  • Tests (acestep/core/generation/handler/training_preset_test.py): Adds unit tests covering: disabled-quantization early return, missing last_init_params, successful reinit with quantization=None, initialize_service error propagation, handling of absent prefer_source, and immutability of cached params.
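
Two of the listed cases (the disabled-quantization early return and the immutability of cached params) can be sketched as a self-contained unit test. The mixin below is a stand-in that mirrors the described behavior, not the actual acestep source:

```python
import unittest


class TrainingPresetMixin:
    """Stand-in mirroring the behavior under test (hypothetical)."""

    def switch_to_training_preset(self):
        params = getattr(self, "last_init_params", None)
        if params is None:
            raise RuntimeError("no cached init params")
        if params.get("quantization") is None:
            return None  # early return: quantization already disabled
        new_params = dict(params)  # copy so cached params stay immutable
        new_params["quantization"] = None
        return self.initialize_service(**new_params)


class _Handler(TrainingPresetMixin):
    def __init__(self, cached):
        self.last_init_params = cached
        self.calls = []

    def initialize_service(self, **kwargs):
        self.calls.append(kwargs)
        return True


class TrainingPresetTests(unittest.TestCase):
    def test_early_return_when_quantization_disabled(self):
        handler = _Handler({"quantization": None})
        self.assertIsNone(handler.switch_to_training_preset())
        self.assertEqual(handler.calls, [])  # no reinit happened

    def test_cached_params_are_not_mutated(self):
        cached = {"quantization": "int8"}
        handler = _Handler(cached)
        handler.switch_to_training_preset()
        self.assertEqual(cached["quantization"], "int8")


suite = unittest.TestLoader().loadTestsFromTestCase(TrainingPresetTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```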

Sequence Diagram(s)

(omitted)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • ChuxiJ

Poem

🐰 I hop from code to line,
A mixin neat, a preset fine,
Quantization set to none,
Tests run bright beneath the sun,
Hooray — a cleaner handler, done!

🚥 Pre-merge checks | ✅ 3 passed
  • Description Check: ✅ Passed (check skipped; CodeRabbit's high-level summary is enabled).
  • Title Check: ✅ Passed. The title clearly and specifically describes the main change: extracting training preset switching into a dedicated mixin as part of handler refactoring.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, above the required threshold of 80.00%.


@1larity 1larity changed the title WIP: handler decomposition part 15 Refactor(handler part 15): extract training preset switching into dedicated mixin Feb 16, 2026
@1larity 1larity marked this pull request as ready for review February 16, 2026 23:41
ChuxiJ (Contributor) commented Feb 17, 2026

Conflicts need to be resolved.

@1larity 1larity force-pushed the feat/handler-decomp-part-15 branch from 788fc13 to 4369fdc on February 17, 2026 at 09:13
coderabbitai bot left a comment
🧹 Nitpick comments (1)
acestep/core/generation/handler/training_preset_test.py (1)

10-35: Consider documenting why custom module loading is used.

The custom module loading approach isolates the test from heavyweight package imports (e.g., torch, torchaudio). A brief inline comment explaining this rationale would help future maintainers understand this pattern.

📝 Suggested documentation addition
 def _load_training_preset_module():
-    """Load training_preset module directly from file to avoid package side effects."""
+    """Load training_preset module directly from file to avoid package side effects.
+    
+    This bypasses the heavyweight acestep package imports (torch, torchaudio, etc.)
+    allowing fast, isolated unit tests without GPU or audio library dependencies.
+    """
     repo_root = Path(__file__).resolve().parents[4]
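
The pattern the comment refers to, loading one module straight from its file so that the package __init__ (with its heavyweight torch/torchaudio imports) never runs, can be sketched like this. Names and paths are illustrative, not the PR's actual test code:

```python
import importlib.util
import sys
import tempfile
from pathlib import Path


def load_module_from_file(name, path):
    """Load a single module from its file, bypassing package __init__ imports."""
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module  # register before exec so self-references resolve
    spec.loader.exec_module(module)
    return module


# Demo: create a throwaway module file and load it without importing any package.
with tempfile.TemporaryDirectory() as tmp:
    mod_path = Path(tmp) / "tiny_mod.py"
    mod_path.write_text("ANSWER = 42\n")
    tiny = load_module_from_file("tiny_mod_demo", mod_path)
answer = tiny.ANSWER
```

The trade-off of this approach is that the module under test must not depend on anything else in its package at import time.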
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@acestep/core/generation/handler/training_preset_test.py` around lines 10 -
35, Add a brief inline comment at the top of the _load_training_preset_module
function explaining that it manually constructs package entries and loads
training_preset.py to avoid importing the full acestep package graph (which pulls
heavyweight dependencies like torch/torchaudio) so tests remain fast and
isolated; mention that this prevents package side-effects and ensures the test
only loads the target module.

1larity (Contributor, Author) commented Feb 17, 2026

@ChuxiJ Conflict resolved.

@ChuxiJ ChuxiJ merged commit ecfdc24 into ace-step:main Feb 17, 2026
3 checks passed
