fix: remove unused other_mimi instance to save ~200MB GPU memory #49
Open
ThanhNguyxn wants to merge 1 commit into NVIDIA:main from
Conversation
Fixes NVIDIA#46

The `other_mimi` MimiModel instance was:
- Instantiated but its encode/decode outputs were discarded
- Consuming ~200MB additional GPU memory unnecessarily
- Running redundant computations on every audio frame

This PR removes all references to `other_mimi` from:
- server.py: ServerState class, warmup(), handle_chat()
- offline.py: warmup(), decode_tokens_to_pcm(), run_inference()

Memory savings: ~200MB GPU RAM per running instance
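For illustration only, the redundant pattern described above looks roughly like this (a simplified sketch based on the PR description, not the actual diff; the real streaming loops in `server.py`/`offline.py` are more involved):

```python
# Before (simplified): every frame also runs a second MimiModel whose outputs are discarded.
def process_frame_before(mimi, other_mimi, chunk, tokens):
    codes = mimi.encode(chunk)
    _ = other_mimi.encode(chunk)    # redundant compute, result thrown away
    pcm = mimi.decode(tokens)
    _ = other_mimi.decode(tokens)   # redundant compute, result thrown away
    return codes, pcm

# After: only the primary `mimi` instance does any work.
def process_frame_after(mimi, chunk, tokens):
    codes = mimi.encode(chunk)
    pcm = mimi.decode(tokens)
    return codes, pcm
```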
Summary
Fixes #46 - Remove unused `other_mimi` MimiModel instance that wastes ~200MB GPU memory

Problem Analysis

After careful code review, the `other_mimi` instance in both `server.py` and `offline.py`:
- Is instantiated identically to the primary `mimi`
- Has its outputs discarded (assigned to `_`)
- Consumes significant resources
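To make the "significant resources" point concrete, a module's parameter footprint can be estimated directly; this is a generic sketch of mine (not code from the repo) that would apply to the duplicate MimiModel instance:

```python
import torch.nn as nn

def parameter_mib(model: nn.Module) -> float:
    """Approximate GPU memory (MiB) held by a module's parameters and buffers."""
    param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
    return (param_bytes + buffer_bytes) / 2**20

# Calling parameter_mib(other_mimi) before this change would show the redundant
# footprint that the primary `mimi` instance already pays for.
```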
Investigation
I verified there are no hidden side effects:
- `other_mimi.encode()` and `other_mimi.decode()` return values are never used
- `other_mimi` is reset but never read
- No state is shared between `mimi` and `other_mimi`
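A quick way to back this up is a plain reference scan; the helper below is my own sanity-check sketch, not part of the PR:

```python
# Scan the two touched files for any surviving `other_mimi` reference.
from pathlib import Path

def find_references(name: str, files: list[str]) -> list[tuple[str, int, str]]:
    """Return (file, line number, line text) for every line mentioning `name`."""
    hits = []
    for path in files:
        for lineno, line in enumerate(Path(path).read_text().splitlines(), start=1):
            if name in line:
                hits.append((path, lineno, line.strip()))
    return hits

print(find_references("other_mimi", ["server.py", "offline.py"]))  # expect [] after this PR
```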
Changes

In `server.py`, removed:
- `other_mimi` from `ServerState` dataclass fields
- `other_mimi` from `__init__` parameters
- `other_mimi.streaming_forever(1)` call
- `other_mimi.encode(chunk)` in `warmup()` and `opus_loop()`
- `other_mimi.decode(tokens)` in `warmup()` and `opus_loop()`
- `other_mimi.reset_streaming()` call
- `other_mimi` instantiation in `main()`
- `other_mimi` from the `ServerState()` constructor call

In `offline.py`, removed:
- `other_mimi` parameter from `warmup()` function
- `other_mimi` parameter from `decode_tokens_to_pcm()` function
- `other_mimi.encode()` call in `warmup()`
- `other_mimi.decode()` calls in `warmup()` and `decode_tokens_to_pcm()`
- `other_mimi` instantiation in `run_inference()`
- `other_mimi.streaming_forever(1)` call
- `other_mimi.reset_streaming()` call
- all remaining references to `other_mimi`

Memory Impact

~200MB GPU RAM saved per running instance.
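One way to check a figure like this empirically is to measure allocated GPU memory around the construction of one extra instance. A minimal sketch, assuming a CUDA device is available; the builder below is a stand-in, not the repo's actual Mimi loader:

```python
import torch
import torch.nn as nn

def extra_instance_mib(build_model) -> float:
    """Return how many MiB of GPU memory one additional model instance allocates."""
    torch.cuda.synchronize()
    before = torch.cuda.memory_allocated()
    model = build_model().to("cuda")   # the duplicate instance being measured
    torch.cuda.synchronize()
    return (torch.cuda.memory_allocated() - before) / 2**20

if __name__ == "__main__":
    # Stand-in module; pass the real MimiModel constructor to reproduce the ~200MB figure.
    print(f"~{extra_instance_mib(lambda: nn.Linear(4096, 4096)):.0f} MiB")
```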
Backward Compatibility
✅ Fully backward compatible - no API changes, no behavioral changes
Testing Recommendation
Both `server.py` and `offline.py` should work identically, but with ~200MB less GPU memory usage.
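To compare the two branches, a small logging hook is enough; this is an assumed workflow sketch, not code from the PR:

```python
import torch

def report_peak_gpu_memory(label: str) -> None:
    """Print the peak GPU memory allocated so far, in MiB."""
    peak_mib = torch.cuda.max_memory_allocated() / 2**20
    print(f"[{label}] peak GPU memory: {peak_mib:.1f} MiB")

# Call this at the end of warmup() or a full run on each branch and diff the numbers.
```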