Thank you for your excellent work.
I have read through your project carefully and tried to export the model for CPU deployment via ONNX Runtime (opset 13).
I managed to export the model successfully.
However, the ONNX version of YOLO-Master seems to run inference on an image much more slowly than native PyTorch CPU inference.
Could you provide any clues?