feat: Validation Split & MLflow Tracking (fixes #22) #78
verdhanyash wants to merge 1 commit into etsi-ai:main
Conversation
Thank you for opening this PR! Our automated system is currently verifying the PR requirements.
Validation Successful! This pull request has been verified and linked to issue #22. The system is now synchronizing metadata from the referenced issue. Kindly await maintainer review of your changes.
Good Work @verdhanyash
Fixes
Closes: #22
Type of Change
Description
Implemented validation split and MLflow tracking for monitoring model generalization performance.
Changes:
- Added a `validation_split: float = 0.2` parameter to the `Model.train()` method
- `progress_callback` — training loop stays entirely in Rust, preserving optimizer state and avoiding FFI overhead
- Added a `_calculate_validation_loss()` method supporting both classification (cross-entropy) and regression (MSE)
- Added a `forward()` method to expose raw model outputs for validation loss calculation
- Updated `save_model()` to log both `loss` and `val_loss` metrics to MLflow with epoch steps
- Added validation of the `validation_split` parameter range
- Added `val_loss_history` on the model instance, initialized in `__init__()` and `load()`

Key architectural decisions (addressing PR #62 feedback):
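The `_calculate_validation_loss()` behavior described above could look roughly like this minimal NumPy sketch. The function name, array shapes, and `task` flag here are illustrative assumptions, not the actual implementation in this PR:

```python
import numpy as np

def validation_loss(outputs: np.ndarray, targets: np.ndarray, task: str) -> float:
    """Illustrative validation loss: cross-entropy for classification, MSE for regression.

    `outputs` are raw model outputs (logits for classification), as exposed by
    a forward() method; `targets` are integer class indices or regression values.
    """
    if task == "classification":
        # Softmax over logits, shifted by the row max for numerical stability.
        shifted = outputs - outputs.max(axis=1, keepdims=True)
        probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
        # Mean negative log-likelihood of the true class per sample.
        return float(-np.log(probs[np.arange(len(targets)), targets] + 1e-12).mean())
    # Regression: mean squared error over raw outputs.
    return float(((outputs - targets) ** 2).mean())
```

As a sanity check, uniform logits over `k` classes yield a cross-entropy of `ln(k)`, which is a common baseline to compare early validation loss against.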
- `rust_model.train()` called exactly once
- `progress_callback` (no extra FFI overhead)
- `np.random.default_rng(seed)` for reproducibility

Result: MLflow dashboard now displays overlapping train/validation loss curves for monitoring overfitting and generalization.
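The seeded-split decision above can be sketched as follows. This is a hypothetical helper (the name `split_indices` and the tail-split mechanics are assumptions, not this PR's code); the point is that `np.random.default_rng(seed)` makes the train/validation partition deterministic, so `val_loss` is comparable across runs:

```python
import numpy as np

def split_indices(n_samples: int, validation_split: float = 0.2, seed: int = 42):
    """Shuffle indices with a seeded Generator and carve off a validation tail.

    Returns (train_indices, validation_indices). The same seed always produces
    the same partition, which is what makes seed-reproducibility testable.
    """
    if not 0.0 <= validation_split < 1.0:
        raise ValueError("validation_split must be in [0.0, 1.0)")
    rng = np.random.default_rng(seed)       # local Generator, no global state
    idx = rng.permutation(n_samples)        # deterministic shuffle for this seed
    n_val = int(n_samples * validation_split)
    return idx[n_val:], idx[:n_val]
```

Using a local `Generator` instead of the legacy `np.random.seed()` global avoids interfering with any other randomness in the user's program.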
How Has This Been Tested?
- Added `test_validation_split.py` with 8 tests covering classification, regression, edge cases, seed reproducibility, and MLflow logging
- Verified the `forward()` method works correctly with the Python API

Screenshots / Logs
Contribution Context