
Evaluation metrics explanation #26

@ojasvijain


Hi Team,

I was wondering how you are computing the evaluation metrics. I was going through the metrics.py file and came across the ap_per_class function, which seems to compute the average precision for each class in an image. (FYI: my custom dataset has only 1 class, with a lot of objects of that class in a single image.)
I also wanted to understand what *stats (the argument unpacked into that function) is in test.py, and how it helps assign a prediction to a ground truth.
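
For context, here is a minimal sketch of how I currently picture the per-class AP being computed from the accumulated statistics, so you can correct me if I have it wrong. The argument names (tp, conf, pred_cls, target_cls) and the plain trapezoidal integration are my own assumptions for illustration, not necessarily what metrics.py actually does:

```python
import numpy as np

def ap_per_class_sketch(tp, conf, pred_cls, target_cls):
    """Per-class AP from statistics accumulated over the whole evaluation set.

    tp:         (n_pred,) bool, True where a prediction matched a ground truth at the IoU threshold
    conf:       (n_pred,) confidence score of each prediction
    pred_cls:   (n_pred,) predicted class index of each prediction
    target_cls: (n_gt,)   class index of every ground-truth object in the dataset
    """
    # Rank all predictions in the dataset by decreasing confidence
    order = np.argsort(-conf)
    tp, pred_cls = tp[order], pred_cls[order]

    aps = []
    for c in np.unique(target_cls):
        mask = pred_cls == c
        n_gt = int((target_cls == c).sum())   # ground-truth objects of this class
        if n_gt == 0 or mask.sum() == 0:
            aps.append(0.0)
            continue

        # Cumulative TP/FP counts as the confidence threshold is lowered
        tpc = np.cumsum(tp[mask])
        fpc = np.cumsum(~tp[mask])

        recall = tpc / n_gt
        precision = tpc / (tpc + fpc)

        # AP = area under the precision-recall curve (plain trapezoidal rule here;
        # the real implementation may interpolate the curve first)
        aps.append(np.trapz(precision, recall))

    return np.array(aps)
```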

Also, I wanted to know how you are associating a particular prediction with a ground truth. Is it based solely on the highest IoU values? If so, what happens if a ground truth is assigned to a particular prediction (and eliminated from the iteration once it is assigned), and then a higher IoU with another prediction appears further down the iteration?
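
To make the scenario I'm asking about concrete, here is a minimal sketch of the greedy assignment I have in mind (the names and the iou_fn helper are hypothetical, not taken from test.py):

```python
import numpy as np

def greedy_match_sketch(pred_boxes, gt_boxes, iou_fn, iou_thres=0.5):
    """Greedily assign each prediction (highest confidence first) to the unmatched
    ground truth with which it has the highest IoU.

    pred_boxes: (n_pred, 4) boxes, assumed already sorted by decreasing confidence
    gt_boxes:   (n_gt, 4) boxes
    iou_fn:     hypothetical helper returning an (n_pred, n_gt) IoU matrix
    Returns a bool array marking which predictions count as true positives.
    """
    iou = iou_fn(pred_boxes, gt_boxes)        # (n_pred, n_gt)
    unmatched = set(range(len(gt_boxes)))     # ground truths still available
    tp = np.zeros(len(pred_boxes), dtype=bool)

    for i in range(len(pred_boxes)):
        if not unmatched:
            break
        # Best remaining ground truth for this prediction
        j = max(unmatched, key=lambda k: iou[i, k])
        if iou[i, j] >= iou_thres:
            tp[i] = True
            unmatched.remove(j)               # this GT is consumed here, even if a
                                              # later prediction overlaps it more
    return tp
```

If that matches what test.py does, then a ground truth consumed by an earlier (higher-confidence) prediction can never be reassigned to a later prediction with a higher IoU, which is exactly the case I am asking about.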

Thanks!
