Flux1 dev try #3
base: dev
Conversation
License: Apache 2.0. Original author: Tim Dockhorn (@timudk)
Thank you for your efforts! However, I can't make it work. If I load the Civitai version (11 GB), it's recognized as SD1 and loads the v1 config, then quickly renders completely gray output. If I load the official fp8 version (17 GB), it loads the Flux config and rendering is slower, but the result is the same: a gray image. I suppose it doesn't use T5 and the VAE; it's not clear where they should be put (there are no errors whatsoever in the log). I see there are some environment variables in the code that probably contain paths? I have a 3090 Ti / Debian testing.
The 11 GB model has only the UNet weights and is not compatible with A1111. And yes, the T5-XXL text encoder is not used at all; it can be activated with the recently added SD3 T5 option. To fix the gray result, you need to install the official ae.safetensors VAE and select it in the VAE UI.
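The gray-output diagnosis above (a UNet-only dump with no VAE weights) can be checked by inspecting a checkpoint's key names. This is a minimal sketch: the prefixes follow the classic SD checkpoint layout (`first_stage_model.` for the VAE, `model.diffusion_model.` for the UNet), which is an assumption here, since UNet-only Flux dumps may use different or no prefixes.

```python
# Heuristic: guess whether a checkpoint bundles VAE weights by key prefixes.
# Prefixes follow the classic SD layout; treat them as illustrative.

def has_vae(keys):
    """True if any key looks like a VAE weight."""
    return any(k.startswith("first_stage_model.") for k in keys)

def unet_only(keys):
    """True if every key belongs to the diffusion UNet."""
    return all(k.startswith("model.diffusion_model.") for k in keys)

# Example key lists (tensor shapes omitted; only names matter for the check).
full_ckpt = ["model.diffusion_model.in.weight", "first_stage_model.decoder.w"]
unet_ckpt = ["model.diffusion_model.in.weight"]

print(has_vae(full_ckpt))  # full checkpoint: VAE present
print(has_vae(unet_ckpt))  # UNet-only dump: no VAE, hence the gray output
```

With a real file you would feed in the key list from the safetensors header instead of the hand-written examples.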
Ah, interesting! Yes, I bound the VAE and now it works. I also tried enabling T5, but it OOMs on generation; I assume the encoder isn't unloaded as it is in ComfyUI, so both don't fit in VRAM. Hopefully this can be fixed relatively easily.
Can run with T5XXL enabled, but only with […]. PS: I think it's worth adding another flag such as […]
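The OOM discussed above comes from keeping the T5 encoder resident in VRAM during sampling. A minimal sketch of the encode-then-evict pattern, with a dummy class standing in for a real torch module (`to()` mirrors `nn.Module.to`; the names here are illustrative, not the PR's actual API):

```python
# Sketch: run the text encoder only for conditioning, then move it off the
# GPU so the UNet never has to share VRAM with it.

class DummyModule:
    """Stand-in for a torch nn.Module with device placement."""
    def __init__(self, name):
        self.name, self.device = name, "cpu"
    def to(self, device):
        self.device = device
        return self

def encode_then_evict(text_encoder, prompt):
    text_encoder.to("cuda")          # load encoder only for conditioning
    cond = f"embeddings({prompt})"   # placeholder for the real forward pass
    text_encoder.to("cpu")           # evict before the UNet is moved in
    return cond

t5 = DummyModule("t5xxl")
cond = encode_then_evict(t5, "a photo of a cat")
print(t5.device)  # "cpu" -> VRAM is free for the UNet
```

Since conditioning is computed once per generation, the extra CPU-GPU transfer cost is paid only at the start, not per sampling step.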
Force-pushed from 4bce6b3 to c97d652
I uploaded the model, added ae.safetensors to the VAE folder, and selected it in settings, but it gives this error
You probably need to update your torch.
* check float8 unet dtype to save memory
* check vae dtype
* add QkvLinear class for Flux lora
* devices.dtype_unet, dtype_vae could be considered as storage dtypes
* use devices.dtype_inference as computational dtype
* misc fixes to support float8 unet storage
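The storage-vs-compute dtype split in the commit above (weights kept in a compact format, upcast only for inference) can be illustrated with stdlib half-precision packing standing in for float8 storage, since Python's `struct` has no fp8 format:

```python
import struct

def to_storage(x):
    """Pack a float into 2-byte half precision (stand-in for fp8 storage dtype)."""
    return struct.pack("e", x)

def to_compute(b):
    """Unpack back to a full-precision Python float (the 'inference dtype')."""
    return struct.unpack("e", b)[0]

w = 0.123456789
stored = to_storage(w)            # 2 bytes instead of 4/8: the memory saving
restored = to_compute(stored)
print(len(stored))                # 2
print(abs(restored - w) < 1e-3)   # True: small rounding error from the low-precision store
```

The trade-off is the same as with float8 UNet weights: memory drops, and a bounded quantization error is introduced at load time, while all arithmetic still happens in the computational dtype.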
Force-pushed from c97d652 to 9b598e6
OK, my bad. Now I have another problem: when I try to select ae.safetensors as the VAE, I get this
* replace rearrange with view (AUTOMATIC1111#15804)
* see also lllyasviel/stable-diffusion-webui-forge@79adfa8
* conditionally use torch.rms_norm for torch 2.4
* fix RMSNorm() weight init: use torch.ones()
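The RMSNorm fix above (weight initialized with ones, so a fresh layer is a no-op scale) can be sketched in plain Python; the formula is the standard RMSNorm definition, and `eps` is an assumed typical value:

```python
import math

def rms_norm(xs, weight=None, eps=1e-6):
    """y_i = x_i / sqrt(mean(x^2) + eps) * w_i, with w defaulting to ones
    (mirroring the torch.ones() init in the fix above)."""
    if weight is None:
        weight = [1.0] * len(xs)          # identity scale, like torch.ones()
    rms = math.sqrt(sum(x * x for x in xs) / len(xs) + eps)
    return [x / rms * w for x, w in zip(xs, weight)]

out = rms_norm([3.0, 4.0])
print(out)  # RMS of [3, 4] is ~3.5355, so output is ~[0.8485, 1.1314]
```

Initializing the weight with zeros instead of ones would zero out every activation passing through the layer, which is exactly the kind of bug the `torch.ones()` fix addresses.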
Force-pushed from 41b68ac to c5d84c4
You don't have to use a separate VAE now (checkpoints with a baked-in VAE are correctly supported), and there is an A1111 bug that prevents changing the VAE.
Force-pushed from 12abd36 to deaad69
Force-pushed from 539dd98 to 5cb6200

FLUX1 support
⚠ Experimental Flux1 support
Usage
- download the AutoEncoder from the huggingface repo: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/ae.safetensors (FP32)

ChangeLog