
Issue with deploying YOLO-Master via ONNX-Runtime #12

@cydiachen

Description


Thank you for your excellent work.
I have carefully read your project and exported your model to deploy it on CPU via ONNX-Runtime (OPSET=13).
The export succeeded.
However, the ONNX version of YOLO-Master runs inference on an image much more slowly than native PyTorch CPU inference.
Could you provide any clues?


Labels

    Env (About Environment)
