Add accuracy metric to results stats summary #30

@mccarthy-m-g

Description

This is for predictive performance. The main challenge here is defining the API: unlike other error metrics, accuracy requires the user to define the absolute and relative error margins that determine whether a prediction counts as accurate. Implementation options:

  • Only calculate accuracy if the user has supplied the absolute and relative error margins. This avoids having to define universally suitable defaults, which would probably be difficult to do appropriately.
  • Define universally suitable defaults and always calculate accuracy.

I think the first option would be better and easier for now; a rough sketch of what that could look like is below.
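
A minimal sketch of option 1, assuming one plausible definition of accuracy (a prediction counts as accurate when its error falls within the absolute *or* relative margin). The function name and semantics here are illustrative, not a committed implementation:

```r
# Sketch only: names and semantics are illustrative, not the package's API.
calc_accuracy <- function(observed, predicted,
                          acc_error_abs = NULL, acc_error_rel = NULL) {
  # Option 1: skip the metric entirely unless the user supplied a margin
  if (is.null(acc_error_abs) && is.null(acc_error_rel)) {
    return(NULL)
  }
  abs_error <- abs(predicted - observed)
  within_abs <- if (is.null(acc_error_abs)) FALSE else abs_error <= acc_error_abs
  within_rel <- if (is.null(acc_error_rel)) FALSE else abs_error / abs(observed) <= acc_error_rel
  # Proportion of predictions that meet at least one margin
  mean(within_abs | within_rel)
}

# calc_accuracy(c(1, 2, 3), c(1.2, 2.9, 3.1))                      # NULL: not reported
# calc_accuracy(c(1, 2, 3), c(1.2, 2.9, 3.1), acc_error_abs = 0.5) # 2/3 within margin
```

Returning NULL when no margins are supplied makes the metric strictly opt-in, which matches option 1.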

For the API, we could maybe do something similar to the .vpc_options argument in run_eval()?

run_eval(..., .stats_summ_options = stats_summ_options(acc_error_abs = NULL, acc_error_rel = NULL, ...))
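
And a hypothetical sketch of the constructor itself; the real version would follow whatever conventions .vpc_options already uses, and none of these names are committed API:

```r
# Hypothetical options constructor: validates margins, returns them as a list.
stats_summ_options <- function(acc_error_abs = NULL, acc_error_rel = NULL) {
  check_margin <- function(margin, name) {
    # Margins, when supplied, must be single non-negative numbers
    if (!is.null(margin) && (!is.numeric(margin) || length(margin) != 1L || margin < 0)) {
      stop("`", name, "` must be a single non-negative number.", call. = FALSE)
    }
  }
  check_margin(acc_error_abs, "acc_error_abs")
  check_margin(acc_error_rel, "acc_error_rel")
  list(acc_error_abs = acc_error_abs, acc_error_rel = acc_error_rel)
}
```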

Metadata

    Labels: enhancement (New feature or request)
