
evaluation #53

@who-m4n


Hi,
I'm a bit confused about the metrics and the `evaluate_iterative_forecast` function defined in `score.py`. Since `mean(xr.ALL_DIMS)` averages over all dimensions, my reading of `evaluate_iterative_forecast` is: first you select the values for each step along the `lead_time` dimension; then you shift the `time` dimension by one lead-time step (why?); and finally you compute the metric over all dimensions, including longitude, latitude, and time. That means the mean is taken over the time dimension as well, so each point in Figure 2 of the paper accumulates the error over all verification times. Am I right?
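To make sure I'm reading it correctly, here is a minimal sketch of the logic as I understand it. The names `fc_iter`, `da_valid`, and `metric_fn` are my placeholders, not necessarily the identifiers in `score.py`, and I'm assuming `lead_time` is given in hours:

```python
import numpy as np
import xarray as xr

def evaluate_iterative_forecast(fc_iter, da_valid, metric_fn):
    """Score an iterative forecast at each lead time (my reconstruction)."""
    scores = []
    for lead_time in fc_iter.lead_time.values:
        # Select the forecast fields for this lead-time step.
        fc = fc_iter.sel(lead_time=lead_time)
        # Shift the forecast's time coordinate forward by the lead time,
        # presumably so each forecast lines up with the verification field
        # valid at init_time + lead_time.
        fc = fc.assign_coords(time=fc.time + np.timedelta64(int(lead_time), 'h'))
        # The metric then averages over *all* remaining dimensions
        # (lat, lon, and time) via mean(xr.ALL_DIMS), so the time
        # dimension is averaged out as well.
        scores.append(metric_fn(fc, da_valid))
    return xr.concat(scores, dim='lead_time')
```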
Moreover, could you please explain why the RMSE for the climatology and weekly climatology baselines in Figure 2 is constant across lead times?
Furthermore, could you please explain what `N_forecast` in the RMSE formula in the paper stands for?
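For reference, this is the RMSE definition as I transcribed it from the paper (my own transcription, so the notation may differ slightly; I take `L(j)` to be the latitude weighting factor, `f` the forecast, and `t` the corresponding true field):

```latex
\mathrm{RMSE} =
  \frac{1}{N_{\mathrm{forecast}}} \sum_{i=1}^{N_{\mathrm{forecast}}}
  \sqrt{\frac{1}{N_{\mathrm{lat}} N_{\mathrm{lon}}}
        \sum_{j=1}^{N_{\mathrm{lat}}} \sum_{k=1}^{N_{\mathrm{lon}}}
        L(j)\, \left( f_{i,j,k} - t_{i,j,k} \right)^{2}}
```

My guess is that `N_forecast` counts the forecast initialization times, but I'd appreciate confirmation.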
