
Feature Request: ONNX Export Support for Cloud VLM Backbones #52

Description

@jian-dong

Hi.
First, thank you for your excellent work and for open-sourcing this repository. Your tooling for VLA training and the existing ONNX export support for the action decoder are very valuable for deployment workflows.

I noticed that the current ONNX export script covers only the action decoder component. Could you share whether there are plans to extend ONNX export support to the Cloud VLM backbone models as well? This would greatly simplify backend inference deployments that use TensorRT-LLM for the VLM alongside TensorRT for the action decoder.

Specifically:

  • Is there an intended roadmap for Cloud VLM → ONNX export?
  • Would you recommend any interim approach for exporting the VLM backbone to ONNX for TensorRT-LLM-based deployment?

Thank you for your time and contributions.

Metadata

    Labels

    feature (New feature or request)
