feat: improve engine caching and fix bugs #3932
Conversation
@cehongwang please take a pass so we have multiple eyes on this PR
Force-pushed from a54907e to ea81677
The reason why JIT's output is not all zeros: AOT stores the weights in the model, but JIT uses placeholders to fetch the weights on the fly, so there are actually no weights to be stripped.
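To make the AOT/JIT distinction concrete, here is a minimal TensorRT-level sketch (illustrative only, not code from this PR; it assumes TensorRT 10.x and its `REFIT`/`STRIP_PLAN` builder flags, and a hypothetical layer name `w_const`). In an AOT build the weights are constants inside the network, so they can be stripped from the plan and must be restored by refit; a JIT-style graph would instead receive them through an input placeholder, leaving nothing to strip:

```python
import numpy as np
import tensorrt as trt  # sketch assumes TensorRT 10.x

LOGGER = trt.Logger(trt.Logger.WARNING)

def build_stripped_engine(weights: np.ndarray) -> trt.IHostMemory:
    """AOT-style build: weights live in the network as a constant layer,
    so STRIP_PLAN removes them from the serialized plan."""
    builder = trt.Builder(LOGGER)
    network = builder.create_network(0)
    x = network.add_input("x", trt.float32, tuple(weights.shape))
    w = network.add_constant(tuple(weights.shape),
                             trt.Weights(np.ascontiguousarray(weights)))
    w.name = "w_const"  # hypothetical layer name used again during refit
    y = network.add_elementwise(x, w.get_output(0), trt.ElementWiseOperation.SUM)
    network.mark_output(y.get_output(0))

    config = builder.create_builder_config()
    config.set_flag(trt.BuilderFlag.REFIT)       # keep the engine refittable
    config.set_flag(trt.BuilderFlag.STRIP_PLAN)  # serialize without weights
    return builder.build_serialized_network(network, config)

def refit(plan: trt.IHostMemory, weights: np.ndarray) -> trt.ICudaEngine:
    """Until this refit runs, the stripped (AOT) engine computes with empty
    weights -- hence all-zero outputs. A JIT-style graph feeds weights in as
    runtime inputs (placeholders), so there is nothing to strip or refit."""
    engine = trt.Runtime(LOGGER).deserialize_cuda_engine(plan)
    refitter = trt.Refitter(engine, LOGGER)
    refitter.set_weights("w_const", trt.WeightsRole.CONSTANT,
                         trt.Weights(np.ascontiguousarray(weights)))
    assert refitter.refit_cuda_engine()
    return engine
```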
narendasan left a comment:
LGTM, just make sure tests are passing
Force-pushed from d42ef00 to d3bbb94
Description
As I requested, TensorRT 10.14 added an argument `trt.SerializationFlag.INCLUDE_REFIT` to allow refitted engines to remain refittable, which means an engine can now be refitted multiple times. Based on this capability, this PR enhances the existing engine caching and refitting features as follows:

- When engine caching is enabled, engines are saved according to `compilation_settings.strip_engine_weights`. Then, when users pull out the cached engine, it will be automatically refitted and kept refittable.
- Engines can be refitted multiple times via `refit_module_weights()` (e.g., see the sketch below).
- Fixed bugs in `_conversion.py`.
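As a companion to the list above, here is a hedged end-to-end sketch of the intended workflow. `torch_tensorrt.compile` and `torch_tensorrt.dynamo.refit_module_weights` are the real entry points, but the toy model, the cache directory, and the exact keyword names/defaults are assumptions that may vary across Torch-TensorRT versions:

```python
import torch
import torch_tensorrt
from torch_tensorrt.dynamo import refit_module_weights

model = torch.nn.Sequential(torch.nn.Linear(16, 16)).eval().cuda()  # toy model
inputs = (torch.randn(1, 16).cuda(),)

# Compile with engine caching enabled. Per this PR, the cached plan honors
# compilation_settings.strip_engine_weights, and a cache hit is refitted
# automatically and kept refittable (serialized with INCLUDE_REFIT).
trt_mod = torch_tensorrt.compile(
    model,
    ir="dynamo",
    inputs=list(inputs),
    immutable_weights=False,       # engine must remain refittable
    strip_engine_weights=True,     # cache/serialize the plan without weights
    cache_built_engines=True,
    reuse_cached_engines=True,
    engine_cache_dir="/tmp/trt_engine_cache",  # hypothetical path
)

# Later: refit the compiled module with updated weights. Because engines are
# now serialized with INCLUDE_REFIT, this step can be repeated multiple times.
exp_program = torch.export.export(model, inputs)
trt_mod = refit_module_weights(trt_mod, exp_program, arg_inputs=inputs)
```

On a subsequent run with the same settings, the engine should be pulled from `engine_cache_dir` and refitted with the current module weights rather than rebuilt from scratch.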
Type of change

Checklist: