
Conversation

@wkpark wkpark commented Sep 13, 2024

Description

  • a simple description of what you're trying to accomplish
  • a summary of changes in code
  • which issues it fixes, if any

Screenshots/videos:

Checklist:

@wkpark wkpark force-pushed the minimal-flux-with-fp8-freeze branch from 015e20a to de9497a on September 17, 2024 03:04
@wkpark wkpark force-pushed the minimal-flux-with-fp8-freeze branch 4 times, most recently from c872b26 to e8a6c28 on September 28, 2024 14:53
Github -> GitHub
@wkpark wkpark force-pushed the minimal-flux-with-fp8-freeze branch 3 times, most recently from a765b8f to 8f45d13 on October 22, 2024 10:32
w-e-w and others added 21 commits October 24, 2024 09:05
chore(js): avoid lots of `Wake Lock is not supported.`
Fixes a small oversight I made.
License: Apache 2.0
original author: Tim Dockhorn @timudk
wkpark added 29 commits November 1, 2024 12:34
 * some T5XXL checkpoints do not have encoder.embed_tokens.weight; use the shared.weight embedding as embed_tokens instead (see the sketch below)
 * use the float8 text encoder t5xxl_fp8_e4m3fn.safetensors
 * fixed some mistakes
 * some ai-toolkit LoRAs do not have proj_mlp
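
The first bullet above describes a tied-embedding fallback. Below is a minimal sketch of that idea, assuming a safetensors checkpoint and a hypothetical helper name; it is not the PR's actual loading code. T5 ties `encoder.embed_tokens` to the shared embedding, so when a T5-XXL export (including the fp8 `t5xxl_fp8_e4m3fn.safetensors`) omits `encoder.embed_tokens.weight`, `shared.weight` can stand in for it.

```python
import safetensors.torch


def load_t5xxl_state_dict(path: str) -> dict:
    """Illustrative helper (not the PR's code): load a T5-XXL safetensors
    checkpoint (fp16 or fp8) and fill in the tied embedding key if missing."""
    state_dict = safetensors.torch.load_file(path)
    if "encoder.embed_tokens.weight" not in state_dict and "shared.weight" in state_dict:
        # T5 ties the encoder token embedding to the shared embedding matrix,
        # so reuse shared.weight for the missing encoder.embed_tokens.weight
        state_dict["encoder.embed_tokens.weight"] = state_dict["shared.weight"]
    return state_dict
```

Because the two tensors are tied in T5, reusing `shared.weight` changes nothing numerically; it only fills in the missing key.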
…cast()

 * add nn.Embedding handling in devices.autocast() (see the sketch after this list)
 * do not cast forward args in some cases
 * add a copy option in devices.autocast()
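
The last three bullets concern casting frozen fp8 weights at call time. The wrapper below is a hedged sketch of that idea, not the PR's devices.autocast() implementation; the function name and defaults are assumptions. With the text encoder stored in float8, an nn.Embedding weight must be upcast before lookup; `copy=True` upcasts a temporary copy so the stored fp8 tensor stays frozen, while `copy=False` converts the stored weight in place.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def wrap_embedding_cast(module: nn.Embedding, target_dtype=torch.float16, copy=True):
    """Illustrative manual-cast wrapper for nn.Embedding (not the PR's code)."""

    def forward(input_ids: torch.Tensor) -> torch.Tensor:
        if copy:
            # upcast a temporary copy; the stored (e.g. fp8) weight stays untouched
            weight = module.weight.to(target_dtype)
        else:
            # convert the stored weight in place on first use
            if module.weight.dtype != target_dtype:
                module.weight.data = module.weight.data.to(target_dtype)
            weight = module.weight
        return F.embedding(input_ids, weight)

    module.forward = forward
    return module
```

The "do not cast forward args" bullet presumably fits the same picture: an embedding's input is integer token ids, so only the weight, not the forward arguments, should be converted to the working float dtype.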
@wkpark wkpark force-pushed the minimal-flux-with-fp8-freeze branch from 8f45d13 to 310d0e6 on November 1, 2024 03:36

7 participants