Minor improvements to avoid out of memory issues in stable_diffusion.ipynb notebook (attempt 2) #5
Conversation
Removes unnecessary additional memory usage by replacing the separate Dreambooth db_pipe assignment with pipe, and then deleting that pipe before running the "Looking inside the pipeline" section. Models in "Looking inside the pipeline" are set to fp16 to further improve memory efficiency. Combined, the changes allow running the notebook from beginning to end on an 11 GB 1080 Ti GPU.
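The "delete the pipe before the next section" step works because dropping the last Python reference lets the allocator reclaim the model weights. A minimal sketch of that pattern, using a hypothetical stand-in object (the real pipeline needs diffusers and a GPU, so the CUDA-specific call is shown as a comment):

```python
import gc
import weakref

class DummyPipeline:
    """Stand-in for a diffusers StableDiffusionPipeline (hypothetical)."""
    def __init__(self):
        self.weights = bytearray(10_000_000)  # pretend model weights

pipe = DummyPipeline()
probe = weakref.ref(pipe)  # lets us observe when the object is gone

# Drop the only reference before moving on to the next notebook section.
del pipe
gc.collect()

# In the real notebook you would also release cached GPU blocks:
# torch.cuda.empty_cache()

print(probe() is None)  # True: the pipeline object has been reclaimed
```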
Thanks for this @jantic, I should have been more considerate and tested on my own 11 GB GPU! One minor question: is the
@pcuenca I've made the update you've requested. Can you expand upon your thinking about favoring fp16 over the autocast? I just want to know how you are thinking about it going forward so I don't make the same sort of mistake.
Hey, my previous attempt at this pull request dumped a whole bunch of changed images into the commit history. And that's just the start of a long list of things I regularly screw up as I stumble like a drunkard towards software that works :). We'll get there, that's what I keep saying.
I did omit one thing from the description that might be important: the change from db_pipe to pipe means the output under "Latents and callbacks" will be different. It still looks decent from my perspective, but I'm not sure if this is acceptable. Now that I look at it, it appears the safety filter is blanking the last image. I can change the seed and find a better result if you'd like.
Yeah, maybe you or someone in a future PR could try to find a pic of me that's not NSFW... ;) Many thanks for this PR @jantic!
I'm just repeating what I saw here :) Apparently the overhead to copy and cast the tensors adds up to something not negligible. So if inference works in
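The memory argument for fp16 is easy to see with back-of-the-envelope arithmetic: half-precision weights take two bytes per parameter instead of four. The parameter count below is an approximation of the Stable Diffusion v1 UNet, not an exact figure:

```python
# Rough memory saving from loading weights in fp16 instead of fp32.
UNET_PARAMS = 860_000_000  # approximate, for illustration only

fp32_gb = UNET_PARAMS * 4 / 1024**3  # 4 bytes per float32 parameter
fp16_gb = UNET_PARAMS * 2 / 1024**3  # 2 bytes per float16 parameter

print(f"fp32: {fp32_gb:.2f} GB, fp16: {fp16_gb:.2f} GB")

# Loading directly in half precision (rather than wrapping inference in
# torch.autocast) also avoids casting tensors on every forward pass:
# pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
```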
Isn't that what we all do? :)
@pcuenca Thanks for the explanation! I definitely noticed the slowdown as well when trying it elsewhere in the notebooks. It was a bit surprising actually. |