Replies: 1 comment
Thanks @tc-git-1. I think that is a very valuable observation, and we have indeed seen people confuse traditional ML cross-validation methods with time series cross-validation. Are there any specific examples and/or diagrams you have in mind? (Feel free to share a draft or even a painted napkin.)
In FPPPY sections such as 5.10 Time Series Cross-Validation, 14.7 Hyperparameter Optimization, 15.4 Example: Electricity Price Forecasting, and possibly others, would it be helpful to explicitly demonstrate the use of training, validation, and test datasets, as well as nested cross-validation?
This could clarify the modeling workflow by distinguishing the role of each component:
- Training set: used to fit candidate models.
- Validation set: used to tune hyperparameters or compare models.
- Test set: used to evaluate final model performance.
- Nested cross-validation: used to separate tuning from evaluation, ensuring unbiased performance estimates.
Including examples and diagrams could help make these distinctions clearer, especially for readers with a machine-learning background.
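For concreteness, here is a minimal sketch of the workflow described above: a chronological train/validation/test split, followed by nested cross-validation using scikit-learn's `TimeSeriesSplit`. The series, split sizes, and candidate "models" (moving-average forecasts with different window lengths) are all hypothetical stand-ins, not anything from FPPPY itself.

```python
# Sketch: train/validation/test split and nested CV for a time series.
# All data and model choices here are illustrative assumptions.
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

rng = np.random.default_rng(0)
y = np.cumsum(rng.normal(size=200))  # synthetic random-walk series

# --- Chronological train / validation / test split ---
train, val, test = y[:140], y[140:170], y[170:]

def moving_average_forecast(history, horizon, window):
    """Forecast `horizon` steps ahead using the mean of the last `window` obs."""
    return np.repeat(history[-window:].mean(), horizon)

def mse(actual, predicted):
    return float(np.mean((actual - predicted) ** 2))

# Training set fits each candidate; validation set selects among them...
candidate_windows = [1, 5, 20]
val_scores = {w: mse(val, moving_average_forecast(train, len(val), w))
              for w in candidate_windows}
best_window = min(val_scores, key=val_scores.get)

# ...and the test set evaluates the chosen model exactly once.
test_score = mse(test, moving_average_forecast(np.concatenate([train, val]),
                                               len(test), best_window))

# --- Nested cross-validation with expanding windows ---
# Outer loop estimates generalization error; inner loop tunes the window,
# so tuning never sees the data used for the final performance estimate.
outer = TimeSeriesSplit(n_splits=4)
outer_scores = []
for outer_train_idx, outer_test_idx in outer.split(y):
    inner_y = y[outer_train_idx]
    inner = TimeSeriesSplit(n_splits=3)
    inner_scores = {w: [] for w in candidate_windows}
    for inner_train_idx, inner_val_idx in inner.split(inner_y):
        for w in candidate_windows:
            pred = moving_average_forecast(inner_y[inner_train_idx],
                                           len(inner_val_idx), w)
            inner_scores[w].append(mse(inner_y[inner_val_idx], pred))
    w_star = min(inner_scores, key=lambda w: np.mean(inner_scores[w]))
    pred = moving_average_forecast(inner_y, len(outer_test_idx), w_star)
    outer_scores.append(mse(y[outer_test_idx], pred))

print(f"validation-selected window: {best_window}, test MSE: {test_score:.3f}")
print(f"nested-CV MSE estimate: {np.mean(outer_scores):.3f}")
```

Note that both loops use expanding windows, so every validation or test fold lies strictly after the data used to fit the forecast, which is the key difference from shuffled k-fold CV.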