
Batch inference problem! #87

@hopeux

Description

I retrained the model with a batch size of 16, then converted it to ONNX.

ONNX Sample

import os

import cv2 as cv
import numpy as np
import onnxruntime as ort
from tqdm import tqdm

ort_session = ort.InferenceSession("yunet_16.onnx", providers=['CUDAExecutionProvider'])
input_name = ort_session.get_inputs()[0].name

org_images = []
img_list = []
for i_path in tqdm(os.listdir(images_path)):
    for i in os.listdir(os.path.join(images_path, i_path)):
        if i.lower().endswith(('.png', '.jpg', '.jpeg', '.tiff', '.bmp', '.gif')):
            img = cv.imread(os.path.join(images_path, i_path, i))
            image = cv.resize(img, (128, 128), interpolation=cv.INTER_LINEAR)
            image = np.transpose(image, [2, 0, 1])  # HWC -> CHW

            org_images.append(img)
            img_list.append(image)
            if len(img_list) >= 16:
                input_data = np.array(img_list, dtype=np.float32)  # (16, 3, 128, 128)
                loc, conf, iou = ort_session.run(None, {input_name: input_data})
                img_list = []  # reset the buffer so the next batch starts empty
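As a side note, images that do not fill the final batch are never sent through the session. A minimal sketch of a flush step, using dummy arrays in place of real images and a hypothetical `run_batch` helper standing in for `ort_session.run`:

```python
import numpy as np

BATCH = 16

def run_batch(batch):
    # Stand-in for ort_session.run(); here it just records the batch shape.
    return batch.shape

img_list = []
shapes = []
# Simulate 35 preprocessed CHW images (3 x 128 x 128).
for _ in range(35):
    img_list.append(np.zeros((3, 128, 128), dtype=np.uint8))
    if len(img_list) >= BATCH:
        shapes.append(run_batch(np.array(img_list, dtype=np.float32)))
        img_list = []

# Flush the remaining images that did not fill a full batch.
if img_list:
    shapes.append(run_batch(np.array(img_list, dtype=np.float32)))

print(shapes)  # two full batches of 16, then a final batch of 3
```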

The main problem is that I cannot split the loc, conf, iou values per image.

Actual output shapes for one batch of 16:
 loc  (15040, 14)
 conf (15040, 2)
 iou  (15040, 2)

Expected shapes for one batch of 16:
 loc  (16, 15040, 14)
 conf (16, 15040, 2)
 iou  (16, 15040, 2)

Is something wrong here, or does the model only process a batch of 1? I don't get it.
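One thing worth checking: 15040 is exactly divisible by the batch size (15040 / 16 = 940), so the exported graph may be fusing the batch and prior axes into axis 0 rather than dropping the batch. If that is the case (it needs to be verified on real data, e.g. by feeding a batch of identical images and checking that the 940-row chunks match), the per-image split is a plain reshape. A sketch with dummy arrays standing in for the real outputs, assuming batch-major row order:

```python
import numpy as np

BATCH = 16
# Dummy stand-ins with the shapes reported in the issue.
loc = np.zeros((15040, 14), dtype=np.float32)
conf = np.zeros((15040, 2), dtype=np.float32)
iou = np.zeros((15040, 2), dtype=np.float32)

# 15040 rows / 16 images = 940 priors per image, so axis 0 may simply be
# the batch and prior axes fused together.
assert loc.shape[0] % BATCH == 0
loc_b = loc.reshape(BATCH, -1, loc.shape[-1])
conf_b = conf.reshape(BATCH, -1, conf.shape[-1])
iou_b = iou.reshape(BATCH, -1, iou.shape[-1])

print(loc_b.shape, conf_b.shape, iou_b.shape)  # (16, 940, 14) (16, 940, 2) (16, 940, 2)
```

If instead the chunks do not match per image, the more likely cause is that the model was exported with a fixed batch dimension of 1 (no dynamic axes), in which case the graph would need to be re-exported with the batch axis marked dynamic.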
