
HF Whisper TF model to quantized TFLite (not working) #4

@nyadla-sys

Description


@bhadreshpsavani
I was able to convert the Hugging Face Whisper ONNX model to a TFLite (int8) model; however, I am not sure how to run inference on this model.
Could you please review and let me know if there is anything I am missing in the ONNX-to-TFLite conversion?
## ONNX to int8 model
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/whisper_to_onnx_tflite_int8.ipynb

## TF to hybrid TFLite model
https://colab.research.google.com/github/usefulsensors/openai-whisper/blob/main/notebooks/tflite_from_huggingface_whisper.ipynb
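For the "how to run inference" part of the question, a minimal sketch of the full-int8 TFLite round trip may help. This is not the Whisper model from the notebooks — it uses a tiny stand-in Keras model so the snippet is self-contained — but the interpreter pattern (quantize the float input with the input tensor's scale/zero-point, invoke, dequantize the int8 output) is the same one an int8 Whisper export would need:

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in model; the notebooks export Whisper here instead.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(2),
])

def representative_dataset():
    # Calibration samples drive the int8 quantization ranges.
    for _ in range(10):
        yield [np.random.rand(1, 4).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8
tflite_model = converter.convert()

interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Quantize the float input using the input tensor's (scale, zero_point).
x = np.random.rand(1, 4).astype(np.float32)
scale, zero_point = inp["quantization"]
x_q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)

interpreter.set_tensor(inp["index"], x_q)
interpreter.invoke()
y_q = interpreter.get_tensor(out["index"])

# Dequantize the int8 output back to float.
scale, zero_point = out["quantization"]
y = (y_q.astype(np.float32) - zero_point) * scale
print(y.shape)
```

For a real Whisper int8 model, the input would be the mel-spectrogram features (with its own shape and quantization parameters read from `get_input_details()`), and the output would be token IDs or logits depending on how the graph was exported.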
