[ADD] Update comfy to the last version, all features functional #13
TristanValentin07 wants to merge 1 commit into main from
Caution: Review failed (failed to post review comments).

Walkthrough

Adds large new model subsystems (audio, video, ACE/Chroma/Cosmos), database migration support, expanded CI/workflows and Windows packaging, extensive CLI and type-system additions, new app services (logging, terminal, model/user managers), many new configs/docs, and widespread transformer-options plumbing across model code.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant API as InternalRoutes
    participant Terminal as TerminalService
    note right of Terminal: TerminalService maintains subscriptions and pushes logs
    Client->>API: POST /logs/subscribe { client_id, enable }
    API->>Terminal: subscribe(client_id) / unsubscribe(client_id)
    Terminal-->>API: update_size() (optional)
    Terminal->>Client: send_sync "logs" (entries, size) [when on_flush invoked]
    note right of API: InternalRoutes queries terminal_service for raw logs or formatted logs
    Client->>API: GET /logs
    API->>Terminal: get_logs()
    Terminal-->>API: logs_string
    API-->>Client: 200 OK (logs_string)
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~150 minutes

Rationale: Very large, heterogeneous set of changes including many new model families, transformer plumbing across numerous modules, database/migration infrastructure, CI/release workflows, core app services (logging, terminal, model/user managers), and expanded type/CLI surfaces. Reviewing correctness, API compatibility, and runtime behavior across these domains will be time-consuming.
Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Actionable comments posted: 122
Caution: Some comments are outside the diff and can't be posted inline due to platform limitations.
⚠️ Outside diff range comments (18)
comfy/comfy/ldm/audio/autoencoder.py (1)
83-96: Critical: Undefined `Activation1d` will cause a runtime crash.

The `noqa: F821` comment suppresses the linter, but when `antialias=True` is passed to `get_activation()`, line 94 will raise `NameError: name 'Activation1d' is not defined` at runtime. Either import/define `Activation1d` or remove the antialias functionality entirely. Verify the situation with the following script:
```bash
#!/bin/bash
# Description: Check if Activation1d is defined and if antialias is used

# Search for Activation1d class definition
echo "=== Searching for Activation1d definition ==="
ast-grep --pattern 'class Activation1d($$$) { $$$ }'
rg -n "class Activation1d" --type py

# Search for get_activation calls with antialias=True
echo -e "\n=== Searching for antialias=True usage ==="
rg -n "antialias\s*=\s*True" --type py -C2

# Search for any imports of Activation1d
echo -e "\n=== Searching for Activation1d imports ==="
rg -n "from.*import.*Activation1d|import.*Activation1d" --type py
```

comfy/comfy/ldm/cascade/stage_c.py (1)
185-214: Fix mutable default argument in `_down_encode`.

The method signature uses `transformer_options={}`, which is a mutable default argument anti-pattern. Apply this diff:
```diff
-    def _down_encode(self, x, r_embed, clip, cnet=None, transformer_options={}):
+    def _down_encode(self, x, r_embed, clip, cnet=None, transformer_options=None):
+        if transformer_options is None:
+            transformer_options = {}
         level_outputs = []
```
_up_decode(line 216) andforward(line 250).comfy/comfy/ldm/cascade/stage_b.py (1)
176-200: Fix mutable default argument in `_down_encode`.

The method signature uses `transformer_options={}`, which is a mutable default argument anti-pattern. Apply this diff:
```diff
-    def _down_encode(self, x, r_embed, clip, transformer_options={}):
+    def _down_encode(self, x, r_embed, clip, transformer_options=None):
+        if transformer_options is None:
+            transformer_options = {}
         level_outputs = []
```
_up_decode(line 202) andforward(line 231).comfy/comfy/ldm/audio/dit.py (2)
137-139: Critical: `inv_freq` is uninitialized (breaks RoPE).

`self.inv_freq` is created with `torch.empty(...)` and never filled, yielding garbage frequencies. Initialize it deterministically. Apply:
```diff
-        # inv_freq = 1. / (base ** (torch.arange(0, dim, 2).float() / dim))
-        self.register_buffer('inv_freq', torch.empty((dim // 2,), device=device, dtype=dtype))
+        inv = 1.0 / (base ** (torch.arange(0, dim, 2, device=device, dtype=torch.float32) / dim))
+        if dtype is not None:
+            inv = inv.to(dtype)
+        self.register_buffer('inv_freq', inv)
```
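For reference, the frequency table the fix computes can be sketched torch-free in plain Python (`rope_inv_freq` is a hypothetical helper name, not part of the codebase):

```python
def rope_inv_freq(dim: int, base: float = 10000.0) -> list[float]:
    # 1 / base**(2i/dim) for i in 0 .. dim//2 - 1: the standard rotary-embedding table
    return [1.0 / (base ** (2 * i / dim)) for i in range(dim // 2)]

freqs = rope_inv_freq(8)
print(len(freqs), freqs[0])  # dim//2 entries; the first frequency is always 1.0
```

An uninitialized buffer instead of this table means every rotation angle is arbitrary memory, which is why the review flags it as breaking RoPE outright.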
169-176: Critical: `seq_len` is undefined in XPOS branch.

`seq_len` isn't defined in `forward`; use `t.shape[0]`. Apply:
```diff
-        power = (torch.arange(seq_len, device = device) - (seq_len // 2)) / self.scale_base  # noqa: F821 seq_len is not defined
+        seq_len = t.shape[0]
+        power = (torch.arange(seq_len, device=device) - (seq_len // 2)) / self.scale_base
```

comfy/comfy/ldm/aura/mmdit.py (1)
409-420: Ceil division assumes `patch_size == 2`; fix for general patch sizes.

Using `(dim + 1) // patch_size` only works for `patch_size == 2`. Switch to ceil-div to avoid shape/view errors when `patch_size != 2`. Apply:
```diff
@@ def patchify(self, x):
-        x = x.view(
+        H_p = (H + self.patch_size - 1) // self.patch_size
+        W_p = (W + self.patch_size - 1) // self.patch_size
+        x = x.view(
             B,
             C,
-            (H + 1) // self.patch_size,
+            H_p,
             self.patch_size,
-            (W + 1) // self.patch_size,
+            W_p,
             self.patch_size,
         )
@@ def apply_pos_embeds(self, x, h, w):
-        h = (h + 1) // self.patch_size
-        w = (w + 1) // self.patch_size
+        p = self.patch_size
+        h = (h + p - 1) // p
+        w = (w + p - 1) // p
@@ def _forward(self, x, timestep, context, transformer_options={}, **kwargs):
-        x = self.unpatchify(x, (h + 1) // self.patch_size, (w + 1) // self.patch_size)[:,:,:h,:w]
+        p = self.patch_size
+        x = self.unpatchify(
+            x,
+            (h + p - 1) // p,
+            (w + p - 1) // p
+        )[:, :, :h, :w]
```

Also applies to: 422-438, 506-506
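The `(h + p - 1) // p` idiom in the suggested diff is integer ceiling division. A tiny standalone check (hypothetical `ceil_div` helper) shows it agrees with `(n + 1) // 2` only at `p == 2`:

```python
def ceil_div(a: int, b: int) -> int:
    # Integer ceiling division without floats: equals math.ceil(a / b) for b > 0
    return (a + b - 1) // b

# At patch_size 2 the old (n + 1) // 2 formula coincides with ceil-div...
assert all((n + 1) // 2 == ceil_div(n, 2) for n in range(1, 100))
# ...but at patch_size 3 it diverges, which is the bug the review flags
print((7 + 1) // 3, ceil_div(7, 3))  # 2 3
```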
comfy/comfy/k_diffusion/sampling.py (1)
677-726: Same clamp for DPM++(2S) ancestral RF.

```diff
-            renoise_coeff = (sigmas[i+1]**2 - sigma_down**2*alpha_ip1**2/alpha_down**2)**0.5
+            renoise_coeff = sigmas[i+1].pow(2) - sigma_down.pow(2) * (alpha_ip1.pow(2) / alpha_down.pow(2))
+            renoise_coeff = renoise_coeff.clamp_min(0).sqrt()
```

comfy/app/frontend_management.py (1)
170-198: Secure ZIP handling + stream download (fix Zip Slip, avoid loading the whole file in memory).

The current `extractall()` is vulnerable to path traversal and uses `response.content` (memory spike). Stream to a temp file and safely extract with path checks.
```diff
@@
-        with tempfile.TemporaryFile() as tmp_file:
-            headers = {"Accept": "application/octet-stream"}
-            response = requests.get(
-                asset_url, headers=headers, allow_redirects=True, timeout=REQUEST_TIMEOUT
-            )
-            response.raise_for_status()  # Ensure we got a successful response
-
-            # Write the content to the temporary file
-            tmp_file.write(response.content)
-
-            # Go back to the beginning of the temporary file
-            tmp_file.seek(0)
-
-            # Extract the zip file content to the destination path
-            with zipfile.ZipFile(tmp_file, "r") as zip_ref:
-                zip_ref.extractall(destination_path)
+        with tempfile.TemporaryFile() as tmp_file:
+            headers = {
+                "Accept": "application/octet-stream",
+                "User-Agent": "FooocusPlus-FrontendFetcher",
+            }
+            with requests.get(
+                asset_url,
+                headers=headers,
+                allow_redirects=True,
+                timeout=REQUEST_TIMEOUT,
+                stream=True,
+            ) as response:
+                response.raise_for_status()
+                for chunk in response.iter_content(chunk_size=1024 * 1024):
+                    if chunk:
+                        tmp_file.write(chunk)
+
+            tmp_file.seek(0)
+
+            def _safe_extract(zipf: zipfile.ZipFile, dest: str) -> None:
+                base = Path(dest).resolve()
+                for member in zipf.infolist():
+                    target = (base / member.filename).resolve()
+                    if not str(target).startswith(str(base)):
+                        raise ValueError(f"Unsafe path in zip: {member.filename}")
+                    zipf.extract(member, base)
+
+            with zipfile.ZipFile(tmp_file, "r") as zip_ref:
+                _safe_extract(zip_ref, destination_path)
```

comfy/comfy/gligen.py (5)
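The path-containment check at the heart of `_safe_extract` can be exercised in isolation, stdlib only (`is_safe_member` is an illustrative name; a malicious archive member like `../escape.txt` is the classic Zip Slip payload):

```python
from pathlib import Path

def is_safe_member(name: str, dest: str) -> bool:
    # A member is safe only if its resolved target stays under the destination dir
    base = Path(dest).resolve()
    target = (base / name).resolve()
    return target == base or base in target.parents

print(is_safe_member("ok/file.txt", "/tmp/out"))         # honest member: allowed
print(is_safe_member("../escape.txt", "/tmp/out"))       # classic Zip Slip: rejected
print(is_safe_member("a/../../etc/passwd", "/tmp/out"))  # nested traversal: rejected
```

Comparing resolved `Path` objects (rather than raw string prefixes) also avoids the `/tmp/out` vs `/tmp/outside` false-positive that naive `startswith` checks can hit.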
90-93: Missing `heads` in CrossAttention (breaks head shaping).

`GatedSelfAttentionDense2` accepts `n_heads` but does not pass it to `CrossAttention`, causing implicit/default heads that may mismatch `d_head` and `query_dim`. Apply this fix:
```diff
-        self.attn = CrossAttention(
-            query_dim=query_dim, context_dim=query_dim, dim_head=d_head, operations=ops)
+        self.attn = CrossAttention(
+            query_dim=query_dim,
+            context_dim=query_dim,
+            heads=n_heads,
+            dim_head=d_head,
+            operations=ops
+        )
```
141-151: Device/dtype bug in FourierEmbedder (CPU tensor mixed with GPU input).
`self.freq_bands` is created on CPU and reused; multiplying with a GPU `x` will raise a device mismatch. Fix by moving frequencies to `x.device`/`x.dtype` at call time.

```diff
 class FourierEmbedder():
     def __init__(self, num_freqs=64, temperature=100):
         self.num_freqs = num_freqs
         self.temperature = temperature
         self.freq_bands = temperature ** (torch.arange(num_freqs) / num_freqs)

     @torch.no_grad()
     def __call__(self, x, cat_dim=-1):
         "x: arbitrary shape of tensor. dim: cat dim"
-        out = []
-        for freq in self.freq_bands:
+        freqs = self.freq_bands.to(device=x.device, dtype=x.dtype)
+        out = []
+        for freq in freqs:
             out.append(torch.sin(freq * x))
             out.append(torch.cos(freq * x))
         return torch.cat(out, cat_dim)
```
113-118: Robust square check; avoid float equality and asserts.
`math.sqrt` + float equality is brittle; `assert` can be stripped with `-O`. Use `math.isqrt` and raise `ValueError` for clearer errors.

```diff
-        size_v = math.sqrt(N_visual)
-        size_g = math.sqrt(N_ground)
-        assert int(size_v) == size_v, "Visual tokens must be square rootable"
-        assert int(size_g) == size_g, "Grounding tokens must be square rootable"
-        size_v = int(size_v)
-        size_g = int(size_g)
+        size_v = math.isqrt(N_visual)
+        size_g = math.isqrt(N_ground)
+        if size_v * size_v != N_visual:
+            raise ValueError(f"Visual tokens ({N_visual}) must form a square grid")
+        if size_g * size_g != N_ground:
+            raise ValueError(f"Grounding tokens ({N_ground}) must form a square grid")
```
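The `math.isqrt` pattern from the suggestion, reduced to a standalone sketch (hypothetical `square_side` helper), showing the exact-integer check and the `ValueError` it raises for non-square counts:

```python
import math

def square_side(n_tokens: int) -> int:
    side = math.isqrt(n_tokens)  # exact integer square root, no float rounding
    if side * side != n_tokens:
        raise ValueError(f"{n_tokens} tokens do not form a square grid")
    return side

print(square_side(1024))  # 32
```

Unlike `int(math.sqrt(n)) == math.sqrt(n)`, this never misclassifies large near-squares due to floating-point rounding, and the explicit `raise` survives `python -O`.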
220-235: Guard against exceeding `max_objs` and normalize devices before cat.

- Writing `masks[len(boxes)] = 1.0` will overflow if `len(position_params) > max_objs`.
- `positive_embeddings` tensors may be on non-CPU devices; mixing with CPU zeros in `torch.cat` can error.

```diff
-        for p in position_params:
+        for p in position_params[: self.max_objs]:
             x1 = (p[4]) / w
             y1 = (p[3]) / h
             x2 = (p[4] + p[2]) / w
             y2 = (p[3] + p[1]) / h
             masks[len(boxes)] = 1.0
-            boxes += [torch.tensor((x1, y1, x2, y2)).unsqueeze(0)]
-            positive_embeddings += [p[0]]
+            boxes += [torch.tensor((x1, y1, x2, y2), device="cpu").unsqueeze(0)]
+            positive_embeddings += [p[0].to("cpu")]
```
masksonce to the same dtype as boxes/embeddings before arithmetic:- masks = masks.unsqueeze(-1) + masks = masks.to(dtype=boxes.dtype).unsqueeze(-1)
288-299: `w` may be undefined when position_net weights are absent.

If `"position_net.null_positive_feature"` is not in `sd`, `w` is never created, but `w.position_net` is used, raising `UnboundLocalError`.

```diff
     if "position_net.null_positive_feature" in sd_k:
         in_dim = sd["position_net.null_positive_feature"].shape[0]
         out_dim = sd["position_net.linears.4.weight"].shape[0]

         class WeightsLoader(torch.nn.Module):
             pass

         w = WeightsLoader()
         w.position_net = PositionNet(in_dim, out_dim)
         w.load_state_dict(sd, strict=False)
-
-    gligen = Gligen(output_list, w.position_net, key_dim)
+        position_net = w.position_net
+    else:
+        # Fallback: derive dims from key_dim
+        position_net = PositionNet(key_dim, key_dim)
+
+    gligen = Gligen(output_list, position_net, key_dim)
```

comfy/app/app_settings.py (1)
50-55: Validate request body type and harden writes.

Ensure POST bodies are JSON objects; return 400 on invalid JSON; write with UTF-8 via `json.dump`.
```diff
 @routes.post("/settings")
 async def post_settings(request):
-    settings = self.get_settings(request)
-    new_settings = await request.json()
-    self.save_settings(request, {**settings, **new_settings})
+    settings = self.get_settings(request)
+    try:
+        new_settings = await request.json()
+    except json.JSONDecodeError:
+        return web.Response(status=400, text="Invalid JSON")
+    if not isinstance(new_settings, dict):
+        return web.Response(status=400, text="JSON object expected")
+    self.save_settings(request, {**settings, **new_settings})
     return web.Response(status=200)
```
Additionally:

```diff
-    with open(file, "w") as f:
-        f.write(json.dumps(settings, indent=4))
+    with open(file, "w", encoding="utf-8") as f:
+        json.dump(settings, f, indent=4, ensure_ascii=False)
```

Also applies to: 65-65
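The body-validation logic suggested above, framework-free (hypothetical `parse_settings` helper; the aiohttp request/response plumbing is omitted):

```python
import json

def parse_settings(body: str):
    # Returns (settings_dict, None) on success, (None, error_message) otherwise
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return None, "Invalid JSON"
    if not isinstance(data, dict):
        return None, "JSON object expected"
    return data, None

print(parse_settings('{"theme": "dark"}'))  # ({'theme': 'dark'}, None)
print(parse_settings('[1, 2, 3]'))          # (None, 'JSON object expected')
print(parse_settings('not json'))           # (None, 'Invalid JSON')
```

The `isinstance(data, dict)` check matters because `json.loads` happily returns lists, numbers, and strings, any of which would corrupt a `{**settings, **new_settings}` merge.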
comfy/comfy/extra_samplers/uni_pc.py (2)
495-505: rks construction is tensor-shaped; breaks C matrix inversion.

`rk` here is a vector (per-batch), so `rks` becomes a list of tensors and `torch.tensor(rks)` will error or yield the wrong shape. Use scalars (match the bh update) or stack consistently. This is a correctness bug.

```diff
         for i in range(1, order):
             t_prev_i = t_prev_list[-(i + 1)]
             model_prev_i = model_prev_list[-(i + 1)]
             lambda_prev_i = ns.marginal_lambda(t_prev_i)
-            rk = (lambda_prev_i - lambda_prev_0) / h
+            # Use scalar ratio (consistent across the batch) as in bh update
+            rk = ((lambda_prev_i - lambda_prev_0) / h)[0]
             rks.append(rk)
             D1s.append((model_prev_i - model_prev_0) / rk)

-        rks.append(1.)
-        rks = torch.tensor(rks, device=x.device)
+        rks.append(torch.tensor(1., device=x.device))
+        rks = torch.stack([rk if isinstance(rk, torch.Tensor) else torch.tensor(rk, device=x.device)
+                           for rk in rks]).to(x.device)
```
553-556: Corrector uses wrong A_c row for D1_t coefficient.
`A_c[k][-1]` uses the last `k` from the loop (K-2). It should use the last row's last element.

```diff
-                x_t = x_t - alpha_t * h_phi_ks[K] * (D1_t * A_c[k][-1])
+                x_t = x_t - alpha_t * h_phi_ks[K] * (D1_t * A_c[-1][-1])
```

and

```diff
-                x_t = x_t - sigma_t * h_phi_ks[K] * (D1_t * A_c[k][-1])
+                x_t = x_t - sigma_t * h_phi_ks[K] * (D1_t * A_c[-1][-1])
```

Also applies to: 574-576
comfy/app/user_manager.py (2)
167-173: Avoid creating directories on read-only list calls.
`get_request_user_filepath` defaults to `create_dir=True`; listing should not create parents.

```diff
-        path = self.get_request_user_filepath(request, directory)
+        path = self.get_request_user_filepath(request, directory, create_dir=False)
```
308-321: `get_user_data_path` creates directories on reads.

Tie `create_dir` to intent: don't create on GET/DELETE; do on POST/destination creation.

```diff
-        def get_user_data_path(request, check_exists = False, param = "file"):
+        def get_user_data_path(request, check_exists=False, param="file"):
             file = request.match_info.get(param, None)
             if not file:
                 return web.Response(status=400)

-            path = self.get_request_user_filepath(request, file)
+            # Do not create directories when merely reading/checking existence
+            path = self.get_request_user_filepath(request, file, create_dir=not check_exists)
```
check_exists=Falseso parents are created.
```diff
 print("checking out master branch")  # noqa: T201
 branch = repo.lookup_branch('master')
 if branch is None:
-    ref = repo.lookup_reference('refs/remotes/origin/master')
+    try:
+        ref = repo.lookup_reference('refs/remotes/origin/master')
+    except:
+        print("pulling.")  # noqa: T201
+        pull(repo)
+        ref = repo.lookup_reference('refs/remotes/origin/master')
```
Handle missing origin/master more robustly; consider main fallback.
Current retry may still fail if origin is absent or branch is named main. Add guarded retries and abort cleanly; also make the status message branch-agnostic.
```diff
-print("checking out master branch") # noqa: T201
+print("checking out master/main branch") # noqa: T201
@@
-if branch is None:
-    try:
-        ref = repo.lookup_reference('refs/remotes/origin/master')
-    except:
-        print("pulling.") # noqa: T201
-        pull(repo)
-        ref = repo.lookup_reference('refs/remotes/origin/master')
+if branch is None:
+    try:
+        ref = repo.lookup_reference('refs/remotes/origin/master')
+    except Exception:
+        print("pulling.") # noqa: T201
+        try:
+            pull(repo)
+            ref = repo.lookup_reference('refs/remotes/origin/master')
+        except Exception:
+            print("origin/master not found; trying origin/main") # noqa: T201
+            try:
+                ref = repo.lookup_reference('refs/remotes/origin/main')
+            except Exception:
+                print("No origin/master or origin/main found; aborting.") # noqa: T201
+                sys.exit(1)
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In comfy/.ci/update_windows/update.py around lines 63 to 71, the checkout logic
assumes refs/remotes/origin/master exists and only retries once; change it to be
branch-agnostic and robust by: print a generic "checking out default branch"
message, check for local branches 'master' then 'main'; if neither exists, look
for refs/remotes/origin/master then refs/remotes/origin/main; if origin remote
is missing or those refs are absent, attempt a fetch from 'origin' (guarded with
existence check for the remote) and re-check refs; if still missing abort
cleanly with a clear error message and non-zero exit (no exceptions swallowed),
and avoid noisy bare excepts—raise or exit with a helpful message. Ensure
logging reflects the branch-agnostic behavior.
```yaml
        env:
          WEBHOOK_URL: ${{ secrets.RELEASE_GITHUB_WEBHOOK_URL }}
          WEBHOOK_SECRET: ${{ secrets.RELEASE_GITHUB_WEBHOOK_SECRET }}
```
Validate that required secrets are configured before use.
The workflow references two GitHub secrets (RELEASE_GITHUB_WEBHOOK_URL and RELEASE_GITHUB_WEBHOOK_SECRET) without validating they're set. If either secret is missing or empty, the workflow will proceed with empty values, causing the curl request to fail silently or send to an invalid endpoint.
Consider adding validation before line 16:
```diff
       - name: Send release webhook
         env:
           WEBHOOK_URL: ${{ secrets.RELEASE_GITHUB_WEBHOOK_URL }}
           WEBHOOK_SECRET: ${{ secrets.RELEASE_GITHUB_WEBHOOK_SECRET }}
         run: |
+          if [ -z "$WEBHOOK_URL" ] || [ -z "$WEBHOOK_SECRET" ]; then
+            echo "❌ Error: RELEASE_GITHUB_WEBHOOK_URL and RELEASE_GITHUB_WEBHOOK_SECRET secrets must be configured"
+            exit 1
+          fi
+
           # Generate UUID for delivery ID
```

Ensure these secrets are documented in your repository setup guide or workflow documentation.
🤖 Prompt for AI Agents
In comfy/.github/workflows/release-webhook.yml around lines 12-14, the workflow
sets WEBHOOK_URL and WEBHOOK_SECRET from secrets but doesn't validate them; add
a step right before the step that uses them (before line 16) that checks both
env vars are non-empty and fails the job with a clear error if either is missing
(exit non-zero and echo which secret is missing), so the workflow stops rather
than proceeding with empty values; also add a short note in the repo
docs/workflow README to document REQUIRED secrets RELEASE_GITHUB_WEBHOOK_URL and
RELEASE_GITHUB_WEBHOOK_SECRET.
```shell
PAYLOAD=$(cat <<EOF
{
  "action": "published",
  "release": {
    "id": ${{ github.event.release.id }},
    "node_id": "${{ github.event.release.node_id }}",
    "url": "${{ github.event.release.url }}",
    "html_url": "${{ github.event.release.html_url }}",
    "assets_url": "${{ github.event.release.assets_url }}",
    "upload_url": "${{ github.event.release.upload_url }}",
    "tag_name": "${{ github.event.release.tag_name }}",
    "target_commitish": "${{ github.event.release.target_commitish }}",
    "name": ${{ toJSON(github.event.release.name) }},
    "body": ${{ toJSON(github.event.release.body) }},
    "draft": ${{ github.event.release.draft }},
    "prerelease": ${{ github.event.release.prerelease }},
    "created_at": "${{ github.event.release.created_at }}",
    "published_at": "${{ github.event.release.published_at }}",
    "author": {
      "login": "${{ github.event.release.author.login }}",
      "id": ${{ github.event.release.author.id }},
      "node_id": "${{ github.event.release.author.node_id }}",
      "avatar_url": "${{ github.event.release.author.avatar_url }}",
      "url": "${{ github.event.release.author.url }}",
      "html_url": "${{ github.event.release.author.html_url }}",
      "type": "${{ github.event.release.author.type }}",
      "site_admin": ${{ github.event.release.author.site_admin }}
    },
    "tarball_url": "${{ github.event.release.tarball_url }}",
    "zipball_url": "${{ github.event.release.zipball_url }}",
    "assets": ${{ toJSON(github.event.release.assets) }}
  },
  "repository": {
    "id": ${{ github.event.repository.id }},
    "node_id": "${{ github.event.repository.node_id }}",
    "name": "${{ github.event.repository.name }}",
    "full_name": "${{ github.event.repository.full_name }}",
    "private": ${{ github.event.repository.private }},
    "owner": {
      "login": "${{ github.event.repository.owner.login }}",
      "id": ${{ github.event.repository.owner.id }},
      "node_id": "${{ github.event.repository.owner.node_id }}",
      "avatar_url": "${{ github.event.repository.owner.avatar_url }}",
      "url": "${{ github.event.repository.owner.url }}",
      "html_url": "${{ github.event.repository.owner.html_url }}",
      "type": "${{ github.event.repository.owner.type }}",
      "site_admin": ${{ github.event.repository.owner.site_admin }}
    },
    "html_url": "${{ github.event.repository.html_url }}",
    "clone_url": "${{ github.event.repository.clone_url }}",
    "git_url": "${{ github.event.repository.git_url }}",
    "ssh_url": "${{ github.event.repository.ssh_url }}",
    "url": "${{ github.event.repository.url }}",
    "created_at": "${{ github.event.repository.created_at }}",
    "updated_at": "${{ github.event.repository.updated_at }}",
    "pushed_at": "${{ github.event.repository.pushed_at }}",
    "default_branch": "${{ github.event.repository.default_branch }}",
    "fork": ${{ github.event.repository.fork }}
  },
  "sender": {
    "login": "${{ github.event.sender.login }}",
    "id": ${{ github.event.sender.id }},
    "node_id": "${{ github.event.sender.node_id }}",
    "avatar_url": "${{ github.event.sender.avatar_url }}",
    "url": "${{ github.event.sender.url }}",
    "html_url": "${{ github.event.sender.html_url }}",
    "type": "${{ github.event.sender.type }}",
    "site_admin": ${{ github.event.sender.site_admin }}
  }
}
EOF
)
```
JSON payload construction is fragile; missing/null GitHub fields can break syntax.
The heredoc uses direct GitHub context substitution for many fields without the toJSON() filter (e.g., lines 26, 27, 28). If any of these fields are null or contain special characters (quotes, newlines), the resulting JSON will be invalid. While toJSON() is correctly used for name, body, and assets (lines 33-34, 51), the inconsistency creates a maintenance risk.
Apply toJSON() to all string fields that could contain user input or special characters:
```diff
   "release": {
     "id": ${{ github.event.release.id }},
-    "node_id": "${{ github.event.release.node_id }}",
-    "url": "${{ github.event.release.url }}",
-    "html_url": "${{ github.event.release.html_url }}",
-    "assets_url": "${{ github.event.release.assets_url }}",
-    "upload_url": "${{ github.event.release.upload_url }}",
+    "node_id": ${{ toJSON(github.event.release.node_id) }},
+    "url": ${{ toJSON(github.event.release.url) }},
+    "html_url": ${{ toJSON(github.event.release.html_url) }},
+    "assets_url": ${{ toJSON(github.event.release.assets_url) }},
+    "upload_url": ${{ toJSON(github.event.release.upload_url) }},
```

Apply this consistently across all string fields in the release, repository, and sender objects.
🤖 Prompt for AI Agents
In comfy/.github/workflows/release-webhook.yml around lines 21 to 92 the heredoc
builds a JSON payload by interpolating many GitHub context fields without
escaping, which will break if values are null or contain quotes/newlines; update
the template to wrap all interpolated string fields in the release,
release.author, repository, repository.owner, and sender objects with the
toJSON() filter (e.g. replace occurrences like ${{ github.event.release.tag_name
}} and other string fields with ${{ toJSON(github.event.release.tag_name) }}),
leaving numeric/boolean fields unchanged and keeping assets/name/body already
using toJSON().
```shell
SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" -hex | cut -d' ' -f2)

# Send webhook with required headers
curl -X POST "$WEBHOOK_URL" \
  -H "Content-Type: application/json" \
  -H "X-GitHub-Event: release" \
  -H "X-GitHub-Delivery: $DELIVERY_ID" \
  -H "X-GitHub-Hook-ID: $HOOK_ID" \
  -H "X-Hub-Signature-256: sha256=$SIGNATURE" \
  -H "User-Agent: GitHub-Actions-Webhook/1.0" \
  -d "$PAYLOAD" \
  --fail --silent --show-error
```
Add error handling and validation; curl --fail alone is insufficient.
The curl command uses --fail --silent --show-error, which will exit non-zero on HTTP errors but provides minimal visibility. If the webhook secret is incorrect, the receiving endpoint rejects the signature, or the JSON is malformed, the script will fail without clear diagnostics. Additionally, there's no retry logic, JSON validation, or timeout safeguards for webhook delivery reliability.
Enhance error handling:
```diff
           # Generate HMAC-SHA256 signature
           SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" -hex | cut -d' ' -f2)
+
+          # Validate JSON payload before sending
+          if ! echo "$PAYLOAD" | jq empty 2>/dev/null; then
+            echo "❌ Error: Generated payload is not valid JSON"
+            echo "$PAYLOAD"
+            exit 1
+          fi

           # Send webhook with required headers
-          curl -X POST "$WEBHOOK_URL" \
+          HTTP_CODE=$(curl -w "%{http_code}" -X POST "$WEBHOOK_URL" \
             -H "Content-Type: application/json" \
             -H "X-GitHub-Event: release" \
             -H "X-GitHub-Delivery: $DELIVERY_ID" \
             -H "X-GitHub-Hook-ID: $HOOK_ID" \
             -H "X-Hub-Signature-256: sha256=$SIGNATURE" \
             -H "User-Agent: GitHub-Actions-Webhook/1.0" \
             -d "$PAYLOAD" \
-            --fail --silent --show-error
+            --silent --show-error --connect-timeout 10 --max-time 30 -o /tmp/webhook_response.txt)
+
+          if [ "$HTTP_CODE" != "200" ] && [ "$HTTP_CODE" != "201" ] && [ "$HTTP_CODE" != "204" ]; then
+            echo "❌ Error: Webhook delivery failed with HTTP $HTTP_CODE"
+            cat /tmp/webhook_response.txt
+            exit 1
+          fi
+
+          echo "✅ Release webhook sent successfully"
```
echo "✅ Release webhook sent successfully"This adds JSON validation, HTTP status checking, timeouts, and response logging for debugging.
```yaml
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: 3.x
```
Update action version and pin Python version for reproducibility.
The ruff job uses an outdated action version (actions/setup-python@v2) and a vague Python version specifier (3.x). This creates non-determinism and inconsistency compared to the pylint job, which uses v4 and pins 3.12.
Ruff checks are typically version-agnostic, but pinning to a specific version (e.g., 3.12) ensures reproducible CI runs. Additionally, upgrading to v4 aligns with the pylint job and improves security/reliability.
Apply this diff to update and pin the Python version:
```diff
       - name: Set up Python
-        uses: actions/setup-python@v2
+        uses: actions/setup-python@v4
         with:
-          python-version: 3.x
+          python-version: '3.12'
```

🤖 Prompt for AI Agents
+ python-version: '3.12'🤖 Prompt for AI Agents
In comfy/.github/workflows/ruff.yml around lines 14 to 17, the workflow uses the
outdated actions/setup-python@v2 and a non-pinned python-version: 3.x; update
the action to actions/setup-python@v4 and pin python-version to a specific
patchless version (e.g., 3.12) to match the pylint job for reproducible CI;
modify the with block to use python-version: 3.12 and ensure any related steps
remain compatible with that Python version.
```python
        transformer_options = kwargs.get("transformer_options", {})
        patches_replace = transformer_options.get("patches_replace", {})
        batch, seq, device = *x.shape[:2], x.device
        context = kwargs["context"]
```
Safer context access to prevent KeyError.
`kwargs["context"]` crashes if missing. Prefer `.get`. Apply:

```diff
-        context = kwargs["context"]
+        context = kwargs.get("context", None)
```
+ context = kwargs.get("context", None)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| transformer_options = kwargs.get("transformer_options", {}) | |
| patches_replace = transformer_options.get("patches_replace", {}) | |
| batch, seq, device = *x.shape[:2], x.device | |
| context = kwargs["context"] | |
| transformer_options = kwargs.get("transformer_options", {}) | |
| patches_replace = transformer_options.get("patches_replace", {}) | |
| batch, seq, device = *x.shape[:2], x.device | |
| context = kwargs.get("context", None) | |
🤖 Prompt for AI Agents
In comfy/comfy/ldm/audio/dit.py around lines 611 to 615, replace the direct
kwargs["context"] access with a safer kwargs.get("context") call and handle the
missing case appropriately (e.g., default to None or an empty tensor/list, or
raise a clear error message) so the code no longer raises a KeyError when
"context" is absent.
```diff
     def forward(self, q, k, v, transformer_options={}):
         q = self.to_q(q)
         k = self.to_k(k)
         v = self.to_v(v)

-        out = optimized_attention(q, k, v, self.heads)
+        out = optimized_attention(q, k, v, self.heads, transformer_options=transformer_options)

         return self.out_proj(out)
```
Fix mutable default argument anti-pattern.
Using a mutable default argument transformer_options={} is a Python anti-pattern that can lead to unexpected behavior when the dict is mutated. If the same dict instance is shared across calls, modifications in one call will affect subsequent calls.
Apply this diff:
```diff
-    def forward(self, q, k, v, transformer_options={}):
+    def forward(self, q, k, v, transformer_options=None):
+        if transformer_options is None:
+            transformer_options = {}
         q = self.to_q(q)
         k = self.to_k(k)
         v = self.to_v(v)
```

Apply the same fix to `Attention2D.forward` (line 50) and `AttnBlock.forward` (line 117).
🤖 Prompt for AI Agents
In comfy/comfy/ldm/cascade/common.py around lines 35-42 (and also update
Attention2D.forward at ~line 50 and AttnBlock.forward at ~line 117), replace the
mutable default argument transformer_options={} with transformer_options=None in
each method signature, then inside each method add a defensive assignment like
"if transformer_options is None: transformer_options = {}", and keep the rest of
the logic unchanged; ensure imports/types unaffected and run tests to verify no
behavioral changes.
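A small, self-contained demonstration of why the shared-default dict is dangerous. Both functions are toy examples, not the actual ComfyUI code:

```python
# The default dict is created once at function definition time, so every
# call that omits the argument mutates the same shared object.
def bad(options={}):
    options.setdefault("calls", 0)
    options["calls"] += 1
    return options["calls"]

# The defensive pattern the review recommends: a fresh dict per call.
def good(options=None):
    if options is None:
        options = {}
    options.setdefault("calls", 0)
    options["calls"] += 1
    return options["calls"]

print(bad(), bad())    # 1 2  -- state leaks across calls via the shared default
print(good(), good())  # 1 1  -- each call starts clean
```

The leak only manifests when a callee mutates the dict, which is exactly what patch/transformer-options plumbing tends to do, so the `None`-default normalization is the safer contract.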
```markdown
### [Installing ComfyUI](#installing)
## Get Started

#### [Desktop Application](https://www.comfy.org/download)
```
🛠️ Refactor suggestion | 🟠 Major
Fix heading level hierarchy.
The h4 heading at this line should be h3 to maintain proper heading hierarchy (MD001). Update #### [Desktop Application] to ### [Desktop Application].
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
41-41: Heading levels should only increment by one level at a time
Expected: h3; Actual: h4
(MD001, heading-increment)
🤖 Prompt for AI Agents
In comfy/README.md around line 41, the heading level is incorrect: change the h4
"#### [Desktop Application](https://www.comfy.org/download)" to an h3 "###
[Desktop Application](https://www.comfy.org/download)" to restore proper heading
hierarchy (MD001).
```markdown
   - SD1.x, SD2.x ([unCLIP](https://comfyanonymous.github.io/ComfyUI_examples/unclip/))
   - [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
   - [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
   - [SD3 and SD3.5](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
   - Pixart Alpha and Sigma
   - [AuraFlow](https://comfyanonymous.github.io/ComfyUI_examples/aura_flow/)
   - [HunyuanDiT](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_dit/)
   - [Flux](https://comfyanonymous.github.io/ComfyUI_examples/flux/)
   - [Lumina Image 2.0](https://comfyanonymous.github.io/ComfyUI_examples/lumina2/)
   - [HiDream](https://comfyanonymous.github.io/ComfyUI_examples/hidream/)
   - [Qwen Image](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/)
   - [Hunyuan Image 2.1](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_image/)
- Image Editing Models
   - [Omnigen 2](https://comfyanonymous.github.io/ComfyUI_examples/omnigen/)
   - [Flux Kontext](https://comfyanonymous.github.io/ComfyUI_examples/flux/#flux-kontext-image-editing-model)
   - [HiDream E1.1](https://comfyanonymous.github.io/ComfyUI_examples/hidream/#hidream-e11)
   - [Qwen Image Edit](https://comfyanonymous.github.io/ComfyUI_examples/qwen_image/#edit-model)
- Video Models
   - [Stable Video Diffusion](https://comfyanonymous.github.io/ComfyUI_examples/video/)
   - [Mochi](https://comfyanonymous.github.io/ComfyUI_examples/mochi/)
   - [LTX-Video](https://comfyanonymous.github.io/ComfyUI_examples/ltxv/)
   - [Hunyuan Video](https://comfyanonymous.github.io/ComfyUI_examples/hunyuan_video/)
   - [Wan 2.1](https://comfyanonymous.github.io/ComfyUI_examples/wan/)
   - [Wan 2.2](https://comfyanonymous.github.io/ComfyUI_examples/wan22/)
- Audio Models
   - [Stable Audio](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
   - [ACE Step](https://comfyanonymous.github.io/ComfyUI_examples/audio/)
- 3D Models
   - [Hunyuan3D 2.0](https://docs.comfy.org/tutorials/3d/hunyuan3D-2)
```
🛠️ Refactor suggestion | 🟠 Major
Standardize list indentation.
Multiple list items under the "Features" section use 3-space indentation instead of the standard 2-space indentation (MD007). Apply consistent 2-space indentation across all nested bullet points in this section.
🧰 Tools
🪛 LanguageTool
[grammar] ~58-~58: There might be a mistake here.
Context: ...ous.github.io/ComfyUI_examples/unclip/)) - [SDXL](https://comfyanonymous.github.io/C...
(QB_NEW_EN)
[grammar] ~59-~59: There might be a mistake here.
Context: ....io/ComfyUI_examples/sdxl/), SDXL Turbo - [Stable Cascade](https://comfyanonymous.g...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ..._examples/sdturbo/) - Stable Cascade - [SD3 and SD3.5](https://comfyanonymous.gi...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...les/stable_cascade/) - SD3 and SD3.5 - Pixart Alpha and Sigma - [AuraFlow](h...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...amples/sd3/) - Pixart Alpha and Sigma - [AuraFlow](https://comfyanonymous.github....
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ... - Pixart Alpha and Sigma - AuraFlow - [HunyuanDiT](https://comfyanonymous.githu...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...UI_examples/aura_flow/) - HunyuanDiT - [Flux](https://comfyanonymous.github.io/C...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...omfyUI_examples/hunyuan_dit/) - Flux - [Lumina Image 2.0](https://comfyanonymous...
(QB_NEW_EN)
[grammar] ~66-~66: There might be a mistake here.
Context: ...I_examples/flux/) - Lumina Image 2.0 - [HiDream](https://comfyanonymous.github.i...
(QB_NEW_EN)
[grammar] ~67-~67: There might be a mistake here.
Context: ...ComfyUI_examples/lumina2/) - HiDream - [Qwen Image](https://comfyanonymous.githu...
(QB_NEW_EN)
[grammar] ~68-~68: There might be a mistake here.
Context: ...fyUI_examples/hidream/) - Qwen Image - [Hunyuan Image 2.1](https://comfyanonymou...
(QB_NEW_EN)
[grammar] ~69-~69: There might be a mistake here.
Context: ...les/qwen_image/) - Hunyuan Image 2.1 - Image Editing Models - [Omnigen 2](ht...
(QB_NEW_EN)
[grammar] ~70-~70: There might be a mistake here.
Context: ...s/hunyuan_image/) - Image Editing Models - [Omnigen 2](https://comfyanonymous.github...
(QB_NEW_EN)
[grammar] ~71-~71: There might be a mistake here.
Context: ...) - Image Editing Models - Omnigen 2 - [Flux Kontext](https://comfyanonymous.git...
(QB_NEW_EN)
[grammar] ~72-~72: There might be a mistake here.
Context: ...UI_examples/omnigen/) - Flux Kontext - [HiDream E1.1](https://comfyanonymous.git...
(QB_NEW_EN)
[grammar] ~73-~73: There might be a mistake here.
Context: ...-image-editing-model) - HiDream E1.1 - [Qwen Image Edit](https://comfyanonymous....
(QB_NEW_EN)
[grammar] ~74-~74: There might be a mistake here.
Context: ...ream/#hidream-e11) - Qwen Image Edit - Video Models - [Stable Video Diffusio...
(QB_NEW_EN)
[grammar] ~75-~75: There might be a mistake here.
Context: ...s/qwen_image/#edit-model) - Video Models - [Stable Video Diffusion](https://comfyano...
(QB_NEW_EN)
[grammar] ~76-~76: There might be a mistake here.
Context: ...ideo Models - Stable Video Diffusion - [Mochi](https://comfyanonymous.github.io/...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ....io/ComfyUI_examples/video/) - Mochi - [LTX-Video](https://comfyanonymous.github...
(QB_NEW_EN)
[grammar] ~78-~78: There might be a mistake here.
Context: ...ComfyUI_examples/mochi/) - LTX-Video - [Hunyuan Video](https://comfyanonymous.gi...
(QB_NEW_EN)
[grammar] ~79-~79: There might be a mistake here.
Context: ...fyUI_examples/ltxv/) - Hunyuan Video - [Wan 2.1](https://comfyanonymous.github.i...
(QB_NEW_EN)
[grammar] ~80-~80: There might be a mistake here.
Context: ...I_examples/hunyuan_video/) - Wan 2.1 - [Wan 2.2](https://comfyanonymous.github.i...
(QB_NEW_EN)
[grammar] ~81-~81: There might be a mistake here.
Context: ....io/ComfyUI_examples/wan/) - Wan 2.2 - Audio Models - [Stable Audio](https:/...
(QB_NEW_EN)
[grammar] ~82-~82: There might be a mistake here.
Context: .../ComfyUI_examples/wan22/) - Audio Models - [Stable Audio](https://comfyanonymous.git...
(QB_NEW_EN)
[grammar] ~83-~83: There might be a mistake here.
Context: ...an22/) - Audio Models - Stable Audio - [ACE Step](https://comfyanonymous.github....
(QB_NEW_EN)
[grammar] ~84-~84: There might be a mistake here.
Context: .../ComfyUI_examples/audio/) - ACE Step - 3D Models - [Hunyuan3D 2.0](https://d...
(QB_NEW_EN)
[grammar] ~85-~85: There might be a mistake here.
Context: ....io/ComfyUI_examples/audio/) - 3D Models - [Hunyuan3D 2.0](https://docs.comfy.org/tu...
(QB_NEW_EN)
[grammar] ~86-~86: There might be a mistake here.
Context: .../audio/) - 3D Models - Hunyuan3D 2.0 - Asynchronous Queue system - Many optimiz...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.18.1)
58-58: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
59-59: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
60-60: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
61-61: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
62-62: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
63-63: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
64-64: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
65-65: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
66-66: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
67-67: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
68-68: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
69-69: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
71-71: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
72-72: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
73-73: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
74-74: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
76-76: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
77-77: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
78-78: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
79-79: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
80-80: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
81-81: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
83-83: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
84-84: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
86-86: Unordered list indentation
Expected: 2; Actual: 3
(MD007, ul-indent)
🤖 Prompt for AI Agents
In comfy/README.md around lines 58 to 86, several nested bullet points use
3-space indentation instead of the repository's standard 2-space indentation
(MD007); update all nested list items in this block (Image Editing Models, Video
Models, Audio Models, etc.) to use consistent 2-space indentation for each
nesting level so the markdown list alignment is uniform across the section.
| `Ctrl` + `S` | Save workflow |
| `Ctrl` + `O` | Load workflow |
| `Ctrl` + `A` | Select all nodes |
| `Alt `+ `C` | Collapse/uncollapse selected nodes |
🛠️ Refactor suggestion | 🟠 Major
Remove spaces inside code span.
Line 140 contains a code span with internal spaces (MD038): `Alt `+ `C` should be `Alt + C`.
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
140-140: Spaces inside code span elements
(MD038, no-space-in-code)
🤖 Prompt for AI Agents
In comfy/README.md around line 140, the Markdown code span contains unnecessary
internal spaces causing style guideline MD038; replace the current `` `Alt `+
`C` `` with a single code span that includes the full key combo (e.g., `` `Alt +
C` ``) or adjust to two separate, correctly spaced code spans (e.g., `` `Alt` +
`C` ``) so there are no stray spaces inside a code span.
Actionable comments posted: 99
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (16)
comfy/comfy/gligen.py (1)
90-91: Add `heads=n_heads` parameter to the CrossAttention instantiation at line 91.

The `CrossAttention` call at line 91 omits the `heads` parameter, causing it to default to 8 regardless of the `n_heads` value passed to the constructor. This is inconsistent with lines 13–18 and 50–55, where both `GatedCrossAttentionDense` and `GatedSelfAttentionDense` explicitly pass `heads=n_heads`. Update line 91 to include `heads=n_heads` to match the pattern and ensure the correct number of attention heads is used.

comfy/app/app_settings.py (1)
30-34: Add error handling to `save_settings` for consistency.

The `save_settings` method lacks error handling, creating an inconsistency with `get_settings`:

- `get_request_user_filepath` can raise `KeyError` (as handled in `get_settings`), but here it would propagate uncaught.
- File write operations can fail due to permissions, disk space, or I/O errors, with no handling.
Apply this diff to add consistent error handling:
```diff
     def save_settings(self, request, settings):
-        file = self.user_manager.get_request_user_filepath(
-            request, "comfy.settings.json")
-        with open(file, "w") as f:
-            f.write(json.dumps(settings, indent=4))
+        try:
+            file = self.user_manager.get_request_user_filepath(
+                request, "comfy.settings.json")
+        except KeyError as e:
+            logging.error("User settings file path not found.")
+            raise web.HTTPUnauthorized() from e
+
+        try:
+            with open(file, "w") as f:
+                f.write(json.dumps(settings, indent=4))
+        except OSError as e:
+            logging.error(f"Failed to save user settings: {e}")
+            raise web.HTTPInternalServerError() from e
```

comfy/comfy/cldm/mmdit.py (3)
45-47: Potential None dereference: torch.zeros_like(y) when y can be None.

`y` defaults to None; `zeros_like(None)` will raise. Guard it:

```diff
-        #weird sd3 controlnet specific stuff
-        y = torch.zeros_like(y)
+        # SD3 controlnet: ignore class labels by zeroing when provided
+        if y is not None:
+            y = torch.zeros_like(y)
```
- #weird sd3 controlnet specific stuff - y = torch.zeros_like(y) + # SD3 controlnet: ignore class labels by zeroing when provided + if y is not None: + y = torch.zeros_like(y)
36-43: Fix type hints: wrong return type and missing hint annotation.The
forwardmethod at lines 36–43 returns a dict but is annotated astorch.Tensor; thehintparameter also lacks a type. Both must be corrected:def forward( self, x: torch.Tensor, timesteps: torch.Tensor, y: Optional[torch.Tensor] = None, context: Optional[torch.Tensor] = None, - hint = None, - ) -> torch.Tensor: + hint: Optional[torch.Tensor] = None, + ) -> dict[str, list[torch.Tensor]]:Python 3.9+ is already required (per
pyproject.toml), so thedict[...]syntax is valid without additional imports.
53-54: Add None validation for hint parameter before passing to pos_embed_input.

The review comment is correct. The `hint` parameter defaults to `None` (line 42) and can remain `None` based on initialization and conditional logic in the calling code. However, `pos_embed_input` wraps `PatchEmbed`, which performs tensor operations (`pad_to_patch_size`, `proj`, `flatten`, `transpose`) that will crash if passed `None`.

The suggested fix in the review is appropriate. Either validate with a clear error or provide a fallback zero tensor if `None` is a valid state in your pipeline.

comfy/app/user_manager.py (2)
68-99: Harden path checks against symlink traversal (use realpath).

abspath + commonpath can be bypassed via symlinks inside user dirs. Resolve with realpath on both the user root and candidate path, and compare those. Also avoid creating parents before validation.

Apply:

```diff
-        user = self.get_request_user_id(request)
-        path = user_root = os.path.abspath(os.path.join(root_dir, user))
+        user = self.get_request_user_id(request)
+        # Resolve symlinks to prevent escape
+        path = user_root = os.path.realpath(os.path.join(root_dir, user))
@@
-        if os.path.commonpath((root_dir, user_root)) != root_dir:
+        if os.path.commonpath((os.path.realpath(root_dir), user_root)) != os.path.realpath(root_dir):
             return None
@@
-        path = os.path.abspath(os.path.join(user_root, file))
-        if os.path.commonpath((user_root, path)) != user_root:
+        path = os.path.realpath(os.path.join(user_root, file))
+        if os.path.commonpath((user_root, path)) != user_root:
             return None
```
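A runnable sketch of the containment check described above, reduced to a standalone helper. The helper name and directory layout are hypothetical, not ComfyUI's actual API:

```python
# Illustrative realpath-based containment check: resolve symlinks on
# BOTH the trusted root and the candidate before comparing prefixes, so
# a symlink inside the user directory cannot escape it.
import os
import tempfile

def is_inside(user_root: str, candidate: str) -> bool:
    root = os.path.realpath(user_root)
    path = os.path.realpath(os.path.join(user_root, candidate))
    return os.path.commonpath((root, path)) == root

root = tempfile.mkdtemp()
print(is_inside(root, "settings/comfy.settings.json"))  # True
print(is_inside(root, "../escape.txt"))                 # False
```

Because both sides go through `realpath`, platform quirks such as macOS resolving `/var` to `/private/var` cancel out, which a mixed abspath/realpath comparison would not.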
426-429: Bug: wrong isinstance check; should validate dest, not source.

The guard repeats source; an invalid dest return is not handled.

```diff
-        dest = get_user_data_path(request, check_exists=False, param="dest")
-        if not isinstance(source, str):
-            return dest
+        dest = get_user_data_path(request, check_exists=False, param="dest")
+        if not isinstance(dest, str):
+            return dest
```

comfy/comfy/extra_samplers/uni_pc.py (4)
347-349: Assertion incorrectly excludes supported `model_type="score"`.

`noise_pred_fn` handles `"score"`, but this assert blocks it.

```diff
-        assert model_type in ["noise", "x_start", "v"]
+        assert model_type in ["noise", "x_start", "v", "score"]
```
493-522: Fix shape math in vary_coeff path: batched `rks`/`C` inversion and broadcasting bugs.

- `rk` is a batch vector; `C` becomes non-square and `linalg.inv` will fail.
- Several multiplications use 1D tensors (B,) against (B,C,H,W) without `expand_dims`, causing broadcasting errors.

```diff
     def multistep_uni_pc_vary_update(self, x, model_prev_list, t_prev_list, t, order, use_corrector=True):
-        logging.info(f'using unified predictor-corrector with order {order} (solver type: vary coeff)')
+        logging.info(f'using unified predictor-corrector with order {order} (solver type: vary coeff)')
         ns = self.noise_schedule
         assert order <= len(model_prev_list)
@@
-        for i in range(1, order):
+        for i in range(1, order):
             t_prev_i = t_prev_list[-(i + 1)]
             model_prev_i = model_prev_list[-(i + 1)]
             lambda_prev_i = ns.marginal_lambda(t_prev_i)
-            rk = (lambda_prev_i - lambda_prev_0) / h
+            # Use scalar rk to keep Vandermonde system 1D (match bh path semantics)
+            rk = ((lambda_prev_i - lambda_prev_0) / h)[0]
             rks.append(rk)
             D1s.append((model_prev_i - model_prev_0) / rk)
@@
-        rks.append(1.)
-        rks = torch.tensor(rks, device=x.device)
+        rks.append(1.)
+        rks = torch.tensor(rks, device=x.device)  # shape: (K,)
@@
-        if len(D1s) > 0:
-            D1s = torch.stack(D1s, dim=1)  # (B, K)
+        if len(D1s) > 0:
+            D1s = torch.stack(D1s, dim=1)  # (B, K-1, C, H, W)
@@
-        hh = -h if self.predict_x0 else h
+        hh = -h if self.predict_x0 else h
         h_phi_1 = torch.expm1(hh)
@@
-        if self.predict_x0:
+        if self.predict_x0:
+            dims = x.dim()
+            alpha_t_e = expand_dims(alpha_t, dims)
+            sigma_t_e = expand_dims(sigma_t, dims)
+            sigma_prev_0_e = expand_dims(sigma_prev_0, dims)
+            h_phi_1_e = expand_dims(h_phi_1, dims)
+            h_phi_ks_e = [expand_dims(v, dims) for v in h_phi_ks]
             x_t_ = (
-                sigma_t / sigma_prev_0 * x
-                - alpha_t * h_phi_1 * model_prev_0
+                (sigma_t_e / sigma_prev_0_e) * x
+                - alpha_t_e * h_phi_1_e * model_prev_0
             )
@@
-            for k in range(K - 1):
-                x_t = x_t - alpha_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_p[k])
+            for k in range(K - 1):
+                x_t = x_t - alpha_t_e * h_phi_ks_e[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_p[k])
@@
-            for k in range(K - 1):
-                x_t = x_t - alpha_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_c[k][:-1])
-                x_t = x_t - alpha_t * h_phi_ks[K] * (D1_t * A_c[k][-1])
+            for k in range(K - 1):
+                x_t = x_t - alpha_t_e * h_phi_ks_e[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_c[k][:-1])
+            x_t = x_t - alpha_t_e * h_phi_ks_e[K] * (D1_t * A_c[k][-1])
         else:
             log_alpha_prev_0, log_alpha_t = ns.marginal_log_mean_coeff(t_prev_0), ns.marginal_log_mean_coeff(t)
+            dims = x.dim()
+            sigma_t_e = expand_dims(sigma_t, dims)
+            h_phi_1_e = expand_dims(h_phi_1, dims)
+            h_phi_ks_e = [expand_dims(v, dims) for v in h_phi_ks]
             x_t_ = (
-                (torch.exp(log_alpha_t - log_alpha_prev_0)) * x
-                - (sigma_t * h_phi_1) * model_prev_0
+                expand_dims(torch.exp(log_alpha_t - log_alpha_prev_0), dims) * x
+                - (sigma_t_e * h_phi_1_e) * model_prev_0
             )
@@
-            for k in range(K - 1):
-                x_t = x_t - sigma_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_p[k])
+            for k in range(K - 1):
+                x_t = x_t - sigma_t_e * h_phi_ks_e[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_p[k])
@@
-            for k in range(K - 1):
-                x_t = x_t - sigma_t * h_phi_ks[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_c[k][:-1])
-                x_t = x_t - sigma_t * h_phi_ks[K] * (D1_t * A_c[k][-1])
+            for k in range(K - 1):
+                x_t = x_t - sigma_t_e * h_phi_ks_e[k + 1] * torch.einsum('bkchw,k->bchw', D1s, A_c[k][:-1])
+            x_t = x_t - sigma_t_e * h_phi_ks_e[K] * (D1_t * A_c[k][-1])
```

Also applies to: 535-556, 557-576
373-382: Undefined attributes in `dynamic_thresholding_fn`.

`self.dynamic_thresholding_ratio` and `self.thresholding_max_val` are not defined anywhere; likely meant to use the hardcoded `p=0.995` and `self.max_val` (as in `data_prediction_fn`).

```diff
-    def dynamic_thresholding_fn(self, x0, t=None):
+    def dynamic_thresholding_fn(self, x0, t=None, ratio: float = 0.995):
@@
-        p = self.dynamic_thresholding_ratio
+        p = ratio
@@
-        s = expand_dims(torch.maximum(s, self.thresholding_max_val * torch.ones_like(s).to(s.device)), dims)
+        s = expand_dims(torch.maximum(s, self.max_val * torch.ones_like(s).to(s.device)), dims)
```
847-853: Avoid in-place view when tweaking last timestep.

`sigmas[:]` returns a view; mutating it alters `sigmas`. Clone first.

```diff
-    timesteps = sigmas.clone()
-    if sigmas[-1] == 0:
-        timesteps = sigmas[:]
-        timesteps[-1] = 0.001
-    else:
-        timesteps = sigmas.clone()
+    timesteps = sigmas.clone()
+    if timesteps[-1] == 0:
+        timesteps[-1] = 0.001
```

comfy/comfy/ldm/aura/mmdit.py (1)
88-101: Fix mutable default dicts for `transformer_options` across six forward methods.

The mutable default antipattern (`transformer_options={}`) is confirmed in all six forward methods. External code in the codebase mutates `transformer_options` (e.g., `modules/patch.py`, `comfy/samplers.py`), creating a real risk of cross-call state leakage if methods are invoked without arguments. Refactoring to `None` defaults with normalization is the correct fix.

All six methods verified:

- Line 88: `SingleAttention.forward()`
- Line 147: `DoubleAttention.forward()`
- Line 210: `MMDiTBlock.forward()`
- Line 258: `DiTBlock.forward()`
- Line 439: `MMDiT.forward()`
- Line 446: `MMDiT._forward()`

The suggested diff is accurate and safe: positional calls remain compatible, and the refactoring prevents state bleed through shared defaults.
comfy/comfy/ldm/cascade/stage_a.py (2)
228-231: Blocker: forward() unpacks encode() result when quantize=False (runtime crash).

encode() returns a tensor when quantize=False, but forward() always unpacks a tuple. Fix by branching on quantize.

```diff
-    def forward(self, x, quantize=False):
-        qe, x, _, vq_loss = self.encode(x, quantize)
-        x = self.decode(qe)
-        return x, vq_loss
+    def forward(self, x, quantize=False):
+        if quantize:
+            qe, latent, _, vq_loss = self.encode(x, quantize=True)
+            x = self.decode(qe)
+            return x, vq_loss
+        else:
+            latent = self.encode(x, quantize=False)
+            x = self.decode(latent)
+            return x, None
```
157-162: Narrow the exception; avoid blanket except.

Catching all exceptions hides real bugs. Limit to RuntimeError for the bf16 fallback.

```diff
         try:
             x = x + self.depthwise(x_temp) * mods[2]
-        except: #operation not implemented for bf16
+        except RuntimeError:
             x_temp = self.depthwise[0](x_temp.float()).to(x.dtype)
             x = x + self.depthwise[1](x_temp) * mods[2]
```
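A toy illustration of why narrowing the handler matters. The functions below are hypothetical stand-ins, not the cascade code itself:

```python
# A bare except also swallows unrelated bugs, such as a typo raising
# NameError, silently rerouting them through the "bf16 fallback" path.
def fallback_bare(fn):
    try:
        return fn()
    except:  # hides every error, including genuine bugs
        return "fallback"

def fallback_narrow(fn):
    try:
        return fn()
    except RuntimeError:  # only the known "not implemented" failure
        return "fallback"

def buggy():
    return undefined_name  # NameError: a real bug, not a dtype issue

print(fallback_bare(buggy))    # 'fallback' -- the bug is silently masked
try:
    fallback_narrow(buggy)
except NameError:
    print("bug surfaced")      # the narrow handler lets it propagate
```

`RuntimeError` is what PyTorch raises for unimplemented-dtype operations, so it is the natural boundary for this fallback.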
294-303: Fix mutable default argument: transformer_options={}.

Use None plus a local dict to avoid shared state.

```diff
-        causal = None,
-        transformer_options={},
+        causal = None,
+        transformer_options=None,
@@
-        transformer_options={}
+        transformer_options=None,
```

And at the top of each method:

```diff
+        if transformer_options is None:
+            transformer_options = {}
```
172-176: Blocker: undefinedseq_lenin RotaryEmbedding.forward (crash when use_xpos=True).Define seq_len from t before computing power.
- power = (torch.arange(seq_len, device = device) - (seq_len // 2)) / self.scale_base # noqa: F821 seq_len is not defined + seq_len = t.shape[-1] + power = (torch.arange(seq_len, device=device) - (seq_len // 2)) / self.scale_base
♻️ Duplicate comments (44)
comfy/.gitignore (2)
4-5: [DUPLICATE] Gitignore negation blocked by directory-level ignore (lines 4–5).The pattern
/input/(line 4) prevents git from traversing into the directory, so the negation!/input/example.png(line 5) cannot take effect. This issue was flagged in a previous review cycle.To fix, replace the directory-level ignore with a contents-only pattern:
-/input/ +/input/* !/input/example.pngAlternatively, explicitly unignore the directory before the file negation:
/input/ +!/input/ !/input/example.png
8-9: [DUPLICATE] Gitignore negation blocked by directory-level ignore (lines 8–9).

The pattern `/custom_nodes/` (line 8) prevents git from traversing into the directory, so the negation `!custom_nodes/example_node.py.example` cannot take effect. This issue was flagged in a previous review cycle.

To fix, replace the directory-level ignore with a contents-only pattern:

```diff
-/custom_nodes/
+/custom_nodes/*
 !custom_nodes/example_node.py.example
```

comfy/.github/workflows/stale-issues.yml (1)
4-5: Cron comment vs. UTC schedule mismatch (duplicate issue from prior review).

The comment "Run daily at 430 am PT" is ambiguous and doesn't match the cron expression in a UTC-aware manner. GitHub Actions cron runs in UTC; `30 11 * * *` executes at 11:30 UTC, which is 4:30 AM PDT (summer) but 3:30 AM PST (winter). Update the comment to clarify UTC and account for DST:

```diff
-  # Run daily at 430 am PT
+  # Runs daily at 11:30 UTC (4:30am PDT / 3:30am PST)
```

comfy/.github/workflows/update-version.yml (1)
35-48: Add error handling for missing or malformed pyproject.toml.

This issue was flagged in a prior review but remains unaddressed. The Python script assumes `pyproject.toml` exists and contains the expected structure. If the file is missing, malformed, or the version key is absent, the script will crash with an unclear traceback in the CI logs.

Apply this diff to add robust error handling:

```diff
       - name: Update comfyui_version.py
         run: |
           # Read version from pyproject.toml and update comfyui_version.py
           python -c '
           import tomllib
+          import sys

           # Read version from pyproject.toml
+          try:
               with open("pyproject.toml", "rb") as f:
                   config = tomllib.load(f)
               version = config["project"]["version"]
+          except FileNotFoundError:
+              print("Error: pyproject.toml not found", file=sys.stderr)
+              sys.exit(1)
+          except (tomllib.TOMLDecodeError, KeyError) as e:
+              print(f"Error: Failed to extract version from pyproject.toml: {e}", file=sys.stderr)
+              sys.exit(1)

           # Write version to comfyui_version.py
+          try:
               with open("comfyui_version.py", "w") as f:
                   f.write("# This file is automatically generated by the build process when version is\n")
                   f.write("# updated in pyproject.toml.\n")
                   f.write(f"__version__ = \"{version}\"\n")
+              print(f"Successfully updated comfyui_version.py to version {version}")
+          except IOError as e:
+              print(f"Error: Failed to write comfyui_version.py: {e}", file=sys.stderr)
+              sys.exit(1)
           '
```
2-2: Import Dict for the forward method return type.

The forward method returns a dictionary but `Dict` is not imported from `typing`.

Apply this diff to add the missing import:

```diff
-from typing import List, Optional, Tuple
+from typing import Dict, List, Optional, Tuple
```
72-76: Guard against operations=None when constructing controlnet_blocks.

Line 74 directly accesses `operations.Linear`, which will crash with an `AttributeError` if `operations` is `None`. All other embedders receive `operations` as a parameter and handle it internally, but this direct attribute access is unprotected.

Apply this diff to add a guard:

```diff
         self.controlnet_blocks = nn.ModuleList([])
+        linear_cls = operations.Linear if operations is not None else nn.Linear
         for _ in range(len(self.transformer_blocks)):
-            controlnet_block = operations.Linear(self.hidden_size, self.hidden_size, dtype=dtype, device=device)
+            controlnet_block = linear_cls(self.hidden_size, self.hidden_size, dtype=dtype, device=device)
             self.controlnet_blocks.append(controlnet_block)
```
88-96: Fix forward method return type annotation.

The return type is annotated as `Tuple[Tensor, List[Tensor]]`, but line 120 actually returns `{"output": block_out}`, which is a dictionary with a string key mapping to a tuple.

Apply this diff to fix the annotation:

```diff
-    ) -> Tuple[Tensor, List[Tensor]]:
+    ) -> Dict[str, Tuple[Tensor, ...]]:
```
99-101: Fix ceiling division to avoid undercounting patch grid dimensions.

The expression `(x_shape[-2] + 1) // self.patch_size` is incorrect. When `x_shape[-2]` is exactly divisible by `patch_size`, this adds an unnecessary extra row. When it's not divisible, it may undercount. Use proper ceiling division: `(dim + patch_size - 1) // patch_size`.

Apply this diff to fix the ceiling division:

```diff
-        h = (x_shape[-2] + 1) // self.patch_size
-        w = (x_shape[-1] + 1) // self.patch_size
+        h = (x_shape[-2] + self.patch_size - 1) // self.patch_size
+        w = (x_shape[-1] + self.patch_size - 1) // self.patch_size
```
109-110: Handle hint=None to prevent runtime crash.

Line 109 unconditionally calls `self.pos_embed_input(hint)`, which will fail if `hint` is `None`. The `hint` parameter has no default value in the signature, but callers may pass `None`.

Apply this diff to add a None check:

```diff
-        x = x + self.pos_embed_input(hint)
+        if hint is not None:
+            hint_emb = self.pos_embed_input(hint)
+        else:
+            hint_emb = torch.zeros_like(x)
+        x = x + hint_emb
```
113-119: Fix repeat logic to emit exactly main_model_double outputs.

The current logic `repeat = ceil(main_model_double / num_blocks)` produces `num_blocks × repeat` outputs, which often exceeds `main_model_double`. For example, if `main_model_double=10` and `num_blocks=3`, then `repeat=4`, yielding 12 outputs instead of 10.

Apply this diff to emit exactly the required number of outputs:

```diff
-        repeat = math.ceil(self.main_model_double / len(self.transformer_blocks))
+        num_blocks = len(self.transformer_blocks)
+        base_repeat = self.main_model_double // num_blocks
+        extra = self.main_model_double % num_blocks
         for i in range(len(self.transformer_blocks)):
             out = self.transformer_blocks[i](x, c)
             if not self.double_y_emb:
                 x = out
-            block_out += (self.controlnet_blocks[i](out),) * repeat
+            repeat_count = base_repeat + (1 if i < extra else 0)
+            block_out += (self.controlnet_blocks[i](out),) * repeat_count
```
41-41: Fix heading hierarchy (MD001).The h4 heading
#### [Desktop Application]should be h3 to maintain proper hierarchy after the h2 "## Get Started" on line 39. This issue was flagged in the previous review but remains unaddressed.- #### [Desktop Application](https://www.comfy.org/download) + ### [Desktop Application](https://www.comfy.org/download)
58-86: Standardize list indentation to 2 spaces (MD007).

All nested bullet points in the model groupings (Image Models, Image Editing Models, Video Models, Audio Models, 3D Models) use 3-space indentation instead of the standard 2-space indentation. This issue was flagged in the previous review but persists in this revision. Convert all nested items to 2-space indentation.

Example fix for lines 58–62:

```diff
 - Image Models
-   - SD1.x, SD2.x ([unCLIP](https://comfyanonymous.github.io/ComfyUI_examples/unclip/))
-   - [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
-   - [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
-   - [SD3 and SD3.5](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
+  - SD1.x, SD2.x ([unCLIP](https://comfyanonymous.github.io/ComfyUI_examples/unclip/))
+  - [SDXL](https://comfyanonymous.github.io/ComfyUI_examples/sdxl/), [SDXL Turbo](https://comfyanonymous.github.io/ComfyUI_examples/sdturbo/)
+  - [Stable Cascade](https://comfyanonymous.github.io/ComfyUI_examples/stable_cascade/)
+  - [SD3 and SD3.5](https://comfyanonymous.github.io/ComfyUI_examples/sd3/)
```

Apply the same fix to all remaining nested items under Image Editing Models (lines 71–74), Video Models (lines 76–81), Audio Models (lines 83–84), and 3D Models (line 86).
140-140: Remove spaces inside code span (MD038).

The code span at line 140 contains internal spaces: `` `Alt `+ `C` `` should be `` `Alt + C` ``. Consolidate to a single code span or adjust spacing to comply with MD038 guidelines. This issue was flagged in the previous review but remains unaddressed.

```diff
-| `Alt `+ `C` | Collapse/uncollapse selected nodes |
+| `Alt + C` | Collapse/uncollapse selected nodes |
```
**124-141: Address the previous review's concerns: prevent duplicate routes, sanitize module names, and disable directory listing.** The issues identified in the previous review comment remain unaddressed:

- **Duplicate routes:** The nested loop can register the same route path multiple times if a module contains multiple matching folder names (e.g., both "examples" and "workflow"). Add a `break` statement after the first match.
- **URL injection risk:** `module_name` is used directly in the URL path without sanitization. Module names could contain characters like `/`, `..`, or other unsafe characters. Sanitize with `re.sub(r"[^A-Za-z0-9._-]", "_", module_name)`.
- **Directory listing:** Explicitly set `show_index=False` for clarity and future-proofing.

Apply the diff from the previous review:

```diff
 import os
 import folder_paths
 import glob
 from aiohttp import web
 import json
 import logging
+import re
 from functools import lru_cache
```

```diff
         for module_name, module_dir in loadedModules:
             for folder_name in example_workflow_folder_names:
                 workflows_dir = os.path.join(module_dir, folder_name)

                 if os.path.exists(workflows_dir):
                     if folder_name != "example_workflows":
                         logging.debug(
                             "Found example workflow folder '%s' for custom node '%s', consider renaming it to 'example_workflows'",
                             folder_name, module_name)

+                    safe_module_name = re.sub(r"[^A-Za-z0-9._-]", "_", module_name)
                     webapp.add_routes(
                         [
                             web.static(
-                                "/api/workflow_templates/" + module_name, workflows_dir
+                                "/api/workflow_templates/" + safe_module_name,
+                                workflows_dir,
+                                show_index=False,
                             )
                         ]
                     )
+                    break  # avoid duplicate static routes for the same module
```

comfy/comfy/conds.py (1)
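The suggested sanitizer can be exercised on its own: the pattern whitelists ASCII letters, digits, dots, underscores, and hyphens, and replaces everything else (path separators, spaces) with `_`. A small sketch with illustrative inputs:

```python
import re

def safe_module_name(module_name: str) -> str:
    # Same whitelist pattern as the suggested fix above
    return re.sub(r"[^A-Za-z0-9._-]", "_", module_name)

print(safe_module_name("my-node_v1.2"))      # unchanged: my-node_v1.2
print(safe_module_name("evil/../node"))      # slashes become underscores: evil_.._node
print(safe_module_name("spaces and/slash"))  # spaces_and_slash
```

Note that dots pass through the whitelist, so the real defense is that `/` can no longer appear inside the route segment.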
**128-137: Critical bug: variable shadowing in `size()` method (previously reported).** The loop variable `c` at line 131 shadows the initialization `c = 1` at line 130, breaking the intended memory estimation logic. This bug was already flagged in a previous review but remains unfixed.

The variable `c` should track the channel dimension, but the shadowing causes it to reference the last tensor object, leading to incorrect calculations or runtime errors in `return [1, c, o // c]`.

Apply this diff to fix the variable shadowing:

```diff
     def size(self):
         # hackish implementation to make the mem estimation work
         o = 0
         c = 1
-        for c in self.cond:
-            size = c.size()
+        for tensor in self.cond:
+            size = tensor.size()
             o += math.prod(size)
             if len(size) > 1:
                 c = size[1]
         return [1, c, o // c]
```

comfy/.github/workflows/ruff.yml (1)
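The shadowing failure mode is easy to reproduce outside the model code: once the loop variable reuses the name `c`, the channel count is silently overwritten by the last iterated object. A minimal sketch with tuples standing in for tensor shapes:

```python
import math

def size_buggy(cond):
    c = 1
    for c in cond:        # `c` rebinds to each element, losing the channel count
        pass
    return c              # returns the *last element*, not a channel dimension

def size_fixed(cond):
    o, c = 0, 1
    for shape in cond:    # distinct loop name keeps `c` intact
        o += math.prod(shape)
        if len(shape) > 1:
            c = shape[1]
    return [1, c, o // c]

shapes = [(2, 4, 8), (2, 4, 8)]
print(size_buggy(shapes))  # (2, 4, 8) -- the shape tuple itself leaked out
print(size_fixed(shapes))  # [1, 4, 32]
```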
**14-17: Update action version and pin Python version for consistency.** The ruff job uses an outdated `actions/setup-python@v2` and unpinned `python-version: 3.x`, creating inconsistency with the pylint job (lines 33-36), which correctly uses `@v4` and pins to `3.12`. This leads to non-deterministic CI runs and potential compatibility issues.

Apply this diff to align with the pylint job:

```diff
     - name: Set up Python
-      uses: actions/setup-python@v2
+      uses: actions/setup-python@v4
       with:
-        python-version: 3.x
+        python-version: '3.12'
```

comfy/.github/workflows/release-webhook.yml (3)
**12-14: Validate required secrets before use.** The workflow uses `RELEASE_GITHUB_WEBHOOK_URL` and `RELEASE_GITHUB_WEBHOOK_SECRET` without validating that they're configured. If either secret is missing or empty, the workflow proceeds with empty values, causing silent failures or sending to invalid endpoints.

Apply this diff to add validation:

```diff
       - name: Send release webhook
         env:
           WEBHOOK_URL: ${{ secrets.RELEASE_GITHUB_WEBHOOK_URL }}
           WEBHOOK_SECRET: ${{ secrets.RELEASE_GITHUB_WEBHOOK_SECRET }}
         run: |
+          if [ -z "$WEBHOOK_URL" ] || [ -z "$WEBHOOK_SECRET" ]; then
+            echo "❌ Error: RELEASE_GITHUB_WEBHOOK_URL and RELEASE_GITHUB_WEBHOOK_SECRET must be configured"
+            exit 1
+          fi
+
           # Generate UUID for delivery ID
```
**21-92: Escape all string fields with `toJSON()` to prevent JSON injection.** The payload construction directly interpolates many GitHub context fields (lines 26-28, 31-32, 37-38, 40, 43-46, 49-50, plus repository and sender objects) without the `toJSON()` filter. While `name`, `body`, and `assets` correctly use `toJSON()` (lines 33-34, 51), the inconsistency creates risk: if any field contains quotes, newlines, or is null, the resulting JSON will be malformed.

Apply `toJSON()` consistently to all string fields:

```diff
           "release": {
             "id": ${{ github.event.release.id }},
-            "node_id": "${{ github.event.release.node_id }}",
-            "url": "${{ github.event.release.url }}",
-            "html_url": "${{ github.event.release.html_url }}",
-            "assets_url": "${{ github.event.release.assets_url }}",
-            "upload_url": "${{ github.event.release.upload_url }}",
-            "tag_name": "${{ github.event.release.tag_name }}",
-            "target_commitish": "${{ github.event.release.target_commitish }}",
+            "node_id": ${{ toJSON(github.event.release.node_id) }},
+            "url": ${{ toJSON(github.event.release.url) }},
+            "html_url": ${{ toJSON(github.event.release.html_url) }},
+            "assets_url": ${{ toJSON(github.event.release.assets_url) }},
+            "upload_url": ${{ toJSON(github.event.release.upload_url) }},
+            "tag_name": ${{ toJSON(github.event.release.tag_name) }},
+            "target_commitish": ${{ toJSON(github.event.release.target_commitish) }},
```

Apply this pattern to all string fields in the `release.author`, `repository`, `repository.owner`, and `sender` objects, leaving numeric/boolean fields (like `id`, `draft`, `prerelease`, `site_admin`) unchanged.
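The injection risk is the same one naive string templating has in any language: quotes and raw newlines in an interpolated value break the surrounding JSON, while a real serializer (which is what `toJSON()` is) escapes them. A stdlib sketch with an illustrative release name:

```python
import json

name = 'Release "v1.0"\nwith a newline'  # plausible-but-hostile release name

# Naive interpolation, like hand-quoting ${{ ... }} -- yields invalid JSON
naive = '{"name": "%s"}' % name
try:
    json.loads(naive)
    naive_ok = True
except json.JSONDecodeError:
    naive_ok = False
print(naive_ok)  # False

# Serializer-escaped, analogous to ${{ toJSON(...) }}
safe = '{"name": %s}' % json.dumps(name)
print(json.loads(safe)["name"] == name)  # True
```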
**94-108: Add comprehensive error handling and validation.** The curl command uses `--fail --silent --show-error` (line 106), which provides minimal diagnostics. If the webhook secret is incorrect, the endpoint rejects the signature, or the JSON is malformed, the failure lacks clear diagnostics. There's no JSON validation, HTTP status checking, timeout safeguards, or retry logic.

Apply this diff to enhance reliability:

```diff
           # Generate HMAC-SHA256 signature
           SIGNATURE=$(echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" -hex | cut -d' ' -f2)
+
+          # Validate JSON payload before sending
+          if ! echo "$PAYLOAD" | jq empty 2>/dev/null; then
+            echo "❌ Error: Generated payload is not valid JSON"
+            echo "$PAYLOAD"
+            exit 1
+          fi

           # Send webhook with required headers
-          curl -X POST "$WEBHOOK_URL" \
+          HTTP_CODE=$(curl -w "%{http_code}" -X POST "$WEBHOOK_URL" \
             -H "Content-Type: application/json" \
             -H "X-GitHub-Event: release" \
             -H "X-GitHub-Delivery: $DELIVERY_ID" \
             -H "X-GitHub-Hook-ID: $HOOK_ID" \
             -H "X-Hub-Signature-256: sha256=$SIGNATURE" \
             -H "User-Agent: GitHub-Actions-Webhook/1.0" \
             -d "$PAYLOAD" \
-            --fail --silent --show-error
+            --silent --show-error --connect-timeout 10 --max-time 30 -o /tmp/webhook_response.txt)
+
+          if [ "$HTTP_CODE" != "200" ] && [ "$HTTP_CODE" != "201" ] && [ "$HTTP_CODE" != "204" ]; then
+            echo "❌ Error: Webhook delivery failed with HTTP $HTTP_CODE"
+            cat /tmp/webhook_response.txt
+            exit 1
+          fi

           echo "✅ Release webhook sent successfully"
```

This adds JSON validation with `jq`, HTTP status checking, connection/request timeouts, and response logging for debugging webhook delivery failures.

comfy/.github/workflows/update-api-stubs.yml (1)
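The signature step in the diff mirrors GitHub's own webhook scheme: an HMAC-SHA256 of the raw payload, hex-encoded and prefixed with `sha256=`. A stdlib sketch of generation plus constant-time verification (the secret and payload are placeholders):

```python
import hmac
import hashlib

secret = b"illustrative-webhook-secret"   # placeholder, not a real secret
payload = b'{"action": "published"}'

# Equivalent to: echo -n "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" -hex
signature = "sha256=" + hmac.new(secret, payload, hashlib.sha256).hexdigest()

def verify(header_value: str, body: bytes, key: bytes) -> bool:
    expected = "sha256=" + hmac.new(key, body, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids timing side channels
    return hmac.compare_digest(expected, header_value)

print(verify(signature, payload, secret))      # True
print(verify(signature, b"tampered", secret))  # False
```

The receiving endpoint recomputes the digest over the exact bytes it received, so any whitespace or re-serialization of the payload between signing and sending will make verification fail.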
**44-56: Update base branch to `main` and fix typo.** As noted in the previous review, the repository's default branch is `main`, but line 56 targets `master`. Additionally, line 53 contains a typo: "a Github" should be "a GitHub" (capital H).

Apply this diff:

```diff
           body: |
             This PR updates the API models based on the latest api.comfy.org OpenAPI specification.
-            Generated automatically by the a Github workflow.
+            Generated automatically by a GitHub workflow.
           branch: update-api-stubs
           delete-branch: true
-          base: master
+          base: main
```

comfy/.ci/update_windows/update.py (1)
**63-71: Add fallback for `main` branch and avoid bare except.** As noted in the previous review, this code assumes the repository uses a `master` branch and doesn't attempt to fall back to `main`. Additionally, the bare `except:` on line 68 silently swallows all exceptions, including KeyboardInterrupt and SystemExit, which is a bad practice.

Apply this diff to handle both branch names robustly:

```diff
-print("checking out master branch")  # noqa: T201
-branch = repo.lookup_branch('master')
+print("checking out default branch")  # noqa: T201
+branch = repo.lookup_branch('master')
+if branch is None:
+    branch = repo.lookup_branch('main')
+
+branch_name = 'master'
 try:
-    ref = repo.lookup_reference('refs/remotes/origin/master')
-except:
+    ref = repo.lookup_reference('refs/remotes/origin/master')
+except Exception:
     print("pulling.")  # noqa: T201
-    pull(repo)
-    ref = repo.lookup_reference('refs/remotes/origin/master')
+    try:
+        pull(repo)
+        ref = repo.lookup_reference('refs/remotes/origin/master')
+    except Exception:
+        print("origin/master not found; trying origin/main")  # noqa: T201
+        branch_name = 'main'
+        ref = repo.lookup_reference('refs/remotes/origin/main')
+
 repo.checkout(ref)
-branch = repo.lookup_branch('master')
+branch = repo.lookup_branch(branch_name)
 if branch is None:
-    repo.create_branch('master', repo.get(ref.target))
+    repo.create_branch(branch_name, repo.get(ref.target))
```
**10-16: Remove `continue-on-error: true` to make test failures block merges.** As noted in the previous review, setting `continue-on-error: true` on line 15 means unit test failures won't fail the workflow. This defeats the purpose of CI testing and could allow bugs to be merged.

Apply this diff:

```diff
   test:
     strategy:
       matrix:
         os: [ubuntu-latest, windows-2022, macos-latest]
     runs-on: ${{ matrix.os }}
-    continue-on-error: true
     steps:
```

If you need to allow failures temporarily during test stabilization, add a comment explaining why and create a tracking issue to remove it.
comfy/.github/workflows/windows_release_dependencies.yml (2)
**59-59: Case-sensitive grep may miss comfyui package variants.** Line 59 uses `grep -v comfyui` to filter requirements, but this is case-sensitive and will not match `ComfyUI` or other casing variants if they exist in requirements.txt.

Use case-insensitive grep:

```diff
-          grep -v comfyui requirements.txt > requirements_nocomfyui.txt
+          grep -vi comfyui requirements.txt > requirements_nocomfyui.txt
```
**64-64: Add error handling for directory move operation.** Line 64 moves `temp_wheel_dir` but does not check if it exists or if the move succeeds. If the wheel build fails silently, the move will fail and the subsequent tar operation may create an empty archive.

Add error handling:

```diff
           mv temp_wheel_dir cu${{ inputs.cu }}_python_deps
+          if [ $? -ne 0 ]; then
+            echo "Failed to move temp_wheel_dir"
+            exit 1
+          fi
           tar cf cu${{ inputs.cu }}_python_deps.tar cu${{ inputs.cu }}_python_deps
```

comfy/.github/workflows/pullrequest-ci-run.yml (1)
**31-37: Pin comfy-action to a stable release tag instead of `@main`.** Using `comfy-org/comfy-action@main` pins to the latest main branch, which can introduce unexpected breaking changes. For stability and reproducibility, pin to a specific release tag.

Replace `@main` with a release tag:

```diff
-        uses: comfy-org/comfy-action@main
+        uses: comfy-org/comfy-action@v1.0.0  # Update to latest stable tag
```

Check the comfy-action releases page for available tags.
Also applies to: 45-52
comfy/.github/workflows/windows_release_package.yml (2)
**89-89: Use portable 7z invocation.** Line 89 hardcodes `"C:\Program Files\7-Zip\7z.exe"`, which is not portable and fails if 7z is installed elsewhere. This issue was flagged in a prior review and remains unresolved.

Resolve 7z invocation portability:

```diff
-        "C:\Program Files\7-Zip\7z.exe" a -t7z -m0=lzma2 -mx=9 -mfb=128 -md=768m -ms=on -mf=BCJ2 ComfyUI_windows_portable.7z ComfyUI_windows_portable
+        7z a -t7z -m0=lzma2 -mx=9 -mfb=128 -md=768m -ms=on -mf=BCJ2 ComfyUI_windows_portable.7z ComfyUI_windows_portable || { echo "7z compression failed"; exit 1; }
```

This assumes 7z is on PATH; if it's not, add pre-step discovery or error handling.
**93-93: CI test failure does not halt the job.** Line 93 runs the CI test but does not propagate the exit code, so the job continues even if the test fails. This issue was flagged in a prior review and remains unresolved.

Ensure the CI test failure stops the workflow:

```diff
-        python_embeded/python.exe -s ComfyUI/main.py --quick-test-for-ci --cpu
+        python_embeded/python.exe -s ComfyUI/main.py --quick-test-for-ci --cpu || { echo "CI test failed"; exit 1; }
```

comfy/.github/workflows/windows_release_nightly_pytorch.yml (2)
**46-46: Add SHA256 verification for downloaded Python embed package.** Like the stable-release workflow, the Python embed ZIP is downloaded without integrity verification. This creates a supply-chain security vulnerability.

Apply the same verification fix used in the stable-release workflow:

```diff
           curl https://www.python.org/ftp/python/3.${{ inputs.python_minor }}.${{ inputs.python_patch }}/python-3.${{ inputs.python_minor }}.${{ inputs.python_patch }}-embed-amd64.zip -o python_embeded.zip
+          # Verify SHA256 checksum against official Python FTP checksums
+          curl -s https://www.python.org/ftp/python/3.${{ inputs.python_minor }}.${{ inputs.python_patch }}/SHA256SUMS -o SHA256SUMS
+          EXPECTED_HASH=$(grep "python-3.${{ inputs.python_minor }}.${{ inputs.python_patch }}-embed-amd64.zip" SHA256SUMS | cut -d' ' -f1)
+          ACTUAL_HASH=$(sha256sum python_embeded.zip | cut -d' ' -f1)
+          if [ "$EXPECTED_HASH" != "$ACTUAL_HASH" ]; then echo "SHA256 mismatch! Expected: $EXPECTED_HASH, Got: $ACTUAL_HASH"; exit 1; fi
+          rm SHA256SUMS
```
**57-57: Remove unsafe `dnnl.lib` deletion—breaks Windows PyTorch builds.** Deleting `dnnl.lib` will break Windows builds with unresolved-link errors or runtime failures. The `dnnl.lib` file is the import library PyTorch uses to resolve oneDNN symbols. The comment "I don't think this is actually used" indicates speculative deletion without verification—unsafe for production.

Use safer space-saving alternatives instead of this line:

```diff
-          rm ./Lib/site-packages/torch/lib/dnnl.lib #I don't think this is actually used and I need the space
```

Replace with one of these approaches:
- Option 1: Remove other safe candidates (e.g., debug symbols, unused protobuf libs).
- Option 2: Use PyTorch selective build to exclude unnecessary backends.
- Option 3: Verify oneDNN is statically linked before removing the import lib.
comfy/.github/workflows/stable-release.yml (1)
**5-76: Refactor duplicated input definitions using YAML anchors to eliminate DRY violation.** The `workflow_call` and `workflow_dispatch` sections (72 lines total) define identical input blocks. This violates DRY and makes future changes error-prone—updating one requires remembering to update the other.

Apply this diff to consolidate using YAML anchors:

```diff
-on:
-  workflow_call:
-    inputs:
-      git_tag:
-        description: 'Git tag'
-        required: true
-        type: string
-      cache_tag:
-        description: 'Cached dependencies tag'
-        required: true
-        type: string
-        default: "cu129"
-      python_minor:
-        description: 'Python minor version'
-        required: true
-        type: string
-        default: "13"
-      python_patch:
-        description: 'Python patch version'
-        required: true
-        type: string
-        default: "6"
-      rel_name:
-        description: 'Release name'
-        required: true
-        type: string
-        default: "nvidia"
-      rel_extra_name:
-        description: 'Release extra name'
-        required: false
-        type: string
-        default: ""
-      test_release:
-        description: 'Test Release'
-        required: true
-        type: boolean
-        default: true
-  workflow_dispatch:
-    inputs:
-      git_tag:
-        description: 'Git tag'
-        required: true
-        type: string
-      cache_tag:
-        description: 'Cached dependencies tag'
-        required: true
-        type: string
-        default: "cu129"
-      python_minor:
-        description: 'Python minor version'
-        required: true
-        type: string
-        default: "13"
-      python_patch:
-        description: 'Python patch version'
-        required: true
-        type: string
-        default: "6"
-      rel_name:
-        description: 'Release name'
-        required: true
-        type: string
-        default: "nvidia"
-      rel_extra_name:
-        description: 'Release extra name'
-        required: false
-        type: string
-        default: ""
-      test_release:
-        description: 'Test Release'
-        required: true
-        type: boolean
-        default: true
+_input_spec: &inputs
+  git_tag:
+    description: 'Git tag'
+    required: true
+    type: string
+  cache_tag:
+    description: 'Cached dependencies tag'
+    required: true
+    type: string
+    default: "cu129"
+  python_minor:
+    description: 'Python minor version'
+    required: true
+    type: string
+    default: "13"
+  python_patch:
+    description: 'Python patch version'
+    required: true
+    type: string
+    default: "6"
+  rel_name:
+    description: 'Release name'
+    required: true
+    type: string
+    default: "nvidia"
+  rel_extra_name:
+    description: 'Release extra name'
+    required: false
+    type: string
+    default: ""
+  test_release:
+    description: 'Test Release'
+    required: true
+    type: boolean
+    default: true
+
+on:
+  workflow_call:
+    inputs: *inputs
+  workflow_dispatch:
+    inputs: *inputs
```

comfy/comfy/comfy_types/__init__.py (1)
**13-20: TypedDict optional keys modeled incorrectly; make keys optional, not values.** Use a non-total TypedDict (or `NotRequired`) so keys themselves are optional. Also tighten `transformer_options` to `Mapping`.

```diff
-class UnetApplyConds(TypedDict):
+class UnetApplyConds(TypedDict, total=False):
@@
-    c_concat: Optional[torch.Tensor]
-    c_crossattn: Optional[torch.Tensor]
-    control: Optional[torch.Tensor]
-    transformer_options: Optional[dict]
+    c_concat: "torch.Tensor"
+    c_crossattn: "torch.Tensor"
+    control: "torch.Tensor"
+    transformer_options: Mapping[str, Any]
```
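The distinction the comment draws (optional keys versus `Optional` values) can be sketched with a toy `total=False` TypedDict; the field types below are stand-ins for illustration, not the real tensor types:

```python
from typing import Any, Mapping, TypedDict

class UnetApplyCondsSketch(TypedDict, total=False):
    # Stand-in types; the real class annotates torch.Tensor fields
    control: int
    transformer_options: Mapping[str, Any]

conds: UnetApplyCondsSketch = {}                # valid: every key may be omitted
conds["transformer_options"] = {"patches": {}}

print("control" in conds)                       # False: key absent, not set to None
print(conds["transformer_options"])             # {'patches': {}}
```

With `total=False`, absence is expressed by leaving the key out, so call sites check membership (or use `.get`) rather than comparing against `None`.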
**87-105: Generalize unpatchify; `new_height=1` breaks for other shapes.** Apply this diff:

```diff
-        # 4 unpatchify
-        new_height, new_width = 1, hidden_states.size(1)
+        # Unpatchify: infer grid from token count and provided width (in tokens)
+        new_width = max(1, width // self.patch_size[1])
+        assert hidden_states.size(1) % new_width == 0, "Token count must be divisible by computed width tokens."
+        new_height = hidden_states.size(1) // new_width
         hidden_states = hidden_states.reshape(
-            shape=(hidden_states.shape[0], new_height, new_width, self.patch_size[0], self.patch_size[1], self.out_channels)
+            hidden_states.shape[0], new_height, new_width, self.patch_size[0], self.patch_size[1], self.out_channels
         ).contiguous()
         hidden_states = torch.einsum("nhwpqc->nchpwq", hidden_states)
-        output = hidden_states.reshape(
-            shape=(hidden_states.shape[0], self.out_channels, new_height * self.patch_size[0], new_width * self.patch_size[1])
-        ).contiguous()
-        if width > new_width:
-            output = torch.nn.functional.pad(output, (0, width - new_width, 0, 0), 'constant', 0)
-        elif width < new_width:
-            output = output[:, :, :, :width]
+        output = hidden_states.reshape(
+            hidden_states.shape[0], self.out_channels, new_height * self.patch_size[0], new_width * self.patch_size[1]
+        ).contiguous()
+        # Adjust to requested output width in pixels
+        req_w = width
+        cur_w = new_width * self.patch_size[1]
+        if req_w > cur_w:
+            output = torch.nn.functional.pad(output, (0, req_w - cur_w, 0, 0), 'constant', 0)
+        elif req_w < cur_w:
+            output = output[:, :, :, :req_w]
```
**325-328: ControlNet support: accept `List[Tensor]` and align sequence length.** Apply this diff:

```diff
-        if block_controlnet_hidden_states is not None:
-            control_condi = cross_norm(hidden_states, block_controlnet_hidden_states)
-            hidden_states = hidden_states + control_condi * controlnet_scale
+        if block_controlnet_hidden_states is not None:
+            control = block_controlnet_hidden_states
+            if isinstance(control, (list, tuple)):
+                control = torch.stack(control, dim=0).sum(dim=0)
+            if control.size(1) != hidden_states.size(1):
+                seq = hidden_states.size(1)
+                if control.size(1) > seq:
+                    control = control[:, :seq, :]
+                else:
+                    pad = seq - control.size(1)
+                    control = torch.nn.functional.pad(control, (0, 0, 0, pad), value=0.0)
+            control_condi = cross_norm(hidden_states, control)
+            hidden_states = hidden_states + control_condi * controlnet_scale
```
**300-305: Mask construction: handle None and dtype alignment.** Apply this diff:

```diff
-        encoder_hidden_mask = None
-        if text_attention_mask is not None:
-            speaker_mask = torch.ones(bs, 1, device=device)
-            encoder_hidden_mask = torch.cat([speaker_mask, text_attention_mask, lyric_mask], dim=1)
+        encoder_hidden_mask = None
+        if text_attention_mask is not None:
+            mask_dtype = text_attention_mask.dtype
+            speaker_mask = torch.ones(bs, 1, dtype=mask_dtype, device=device)
+            if lyric_mask is None:
+                raise ValueError("lyric_mask must be provided when text_attention_mask is provided.")
+            encoder_hidden_mask = torch.cat(
+                [speaker_mask, text_attention_mask, lyric_mask.to(mask_dtype)],
+                dim=1
+            )
```
**317-318: Mutable default for `transformer_options`.** Apply this diff:

```diff
-        transformer_options={},
+        transformer_options=None,
@@
-        embedded_timestep = self.timestep_embedder(self.time_proj(timestep).to(dtype=hidden_states.dtype))
+        transformer_options = transformer_options or {}
+        embedded_timestep = self.timestep_embedder(self.time_proj(timestep).to(dtype=hidden_states.dtype))
```

Replicate None→{} normalization in callers if needed.
comfy/comfy/ldm/ace/lyric_encoder.py (6)
**12-20: Guard or fallback for `operations` dependency.** Several ctors require `operations` but don't guard None; this will crash at import/use. Fail fast with a clear error or fallback to torch.nn.

```diff
-    def __init__(..., operations=None):
+    def __init__(..., operations=None):
+        if operations is None:
+            raise ValueError("`operations` must be provided (Comfy ops module).")
```

Apply consistently across the listed constructors.

Also applies to: 149-156, 191-197, 617-627, 836-843, 921-947
**533-556: Docstring mismatches (parameter names and return values).**

- `add_optional_chunk_mask` docs refer to "mask" but the arg is "masks".
- `LinearEmbed.forward` docs claim three returns; the code returns two.

Update docstrings to match actual signatures/returns.

Also applies to: 862-872
**1018-1066: None-safety and typing for `pad_mask`.** `forward()` treats `pad_mask` as optional but the type hints require Tensor. Make it Optional and default to all-True when None.

```diff
     def forward(
         self,
         xs: torch.Tensor,
-        pad_mask: torch.Tensor,
+        pad_mask: Optional[torch.Tensor],
@@
-        masks = None
-        if pad_mask is not None:
-            masks = pad_mask.to(torch.bool).unsqueeze(1)  # (B, 1, T)
+        if pad_mask is None:
+            masks = torch.ones(xs.size(0), 1, xs.size(1), dtype=torch.bool, device=xs.device)
+        else:
+            masks = pad_mask.to(torch.bool).unsqueeze(1)  # (B, 1, T)
```
**1-1: Add upstream license header and attribution.** The file references the upstream URL but omits the upstream license header. Include the original license notice verbatim at the top to maintain compliance.
**260-271: Prevent NaNs when all positions are masked in attention.** Softmax over all −inf yields NaNs. Sanitize after softmax.

```diff
-            scores = scores.masked_fill(mask, -float('inf'))
-            attn = torch.softmax(scores, dim=-1).masked_fill(
-                mask, 0.0)
+            scores = scores.masked_fill(mask, -float('inf'))
+            attn = torch.softmax(scores, dim=-1)
+            attn = attn.masked_fill(mask, 0.0)
+            attn = torch.nan_to_num(attn, nan=0.0)
```
**352-355: Initialize learnable relative-position biases.** `pos_bias_u`/`pos_bias_v` are created with empty tensors and never initialized.

```diff
         self.pos_bias_u = nn.Parameter(torch.empty(self.h, self.d_k, dtype=dtype, device=device))
         self.pos_bias_v = nn.Parameter(torch.empty(self.h, self.d_k, dtype=dtype, device=device))
-# torch.nn.init.xavier_uniform_(self.pos_bias_u)
-# torch.nn.init.xavier_uniform_(self.pos_bias_v)
+        torch.nn.init.xavier_uniform_(self.pos_bias_u)
+        torch.nn.init.xavier_uniform_(self.pos_bias_v)
```

comfy/comfy/ldm/cascade/common.py (1)
**35-35: Fix mutable default argument: `transformer_options={}`.** Defaulting to a shared dict is a Python anti‑pattern and was flagged previously. Use None and assign inside.

```diff
-    def forward(self, q, k, v, transformer_options={}):
+    def forward(self, q, k, v, transformer_options=None):
+        if transformer_options is None:
+            transformer_options = {}
@@
-    def forward(self, x, kv, self_attn=False, transformer_options={}):
+    def forward(self, x, kv, self_attn=False, transformer_options=None):
+        if transformer_options is None:
+            transformer_options = {}
@@
-    def forward(self, x, kv, transformer_options={}):
+    def forward(self, x, kv, transformer_options=None):
+        if transformer_options is None:
+            transformer_options = {}
```

Also applies to: 50-50, 117-117
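The shared-state hazard behind `transformer_options={}` can be demonstrated in a few lines: the default object is created once at function definition, so any mutation by one call leaks into every later call.

```python
def append_bad(item, bucket=[]):
    # The default list is created once and shared across calls
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    if bucket is None:
        bucket = []  # fresh list per call
    bucket.append(item)
    return bucket

print(append_bad(1), append_bad(2))    # [1, 2] [1, 2] -- same shared object
print(append_good(1), append_good(2))  # [1] [2]
```

The `transformer_options=None` pattern in the diff above is the standard fix: the sentinel forces a fresh dict per invocation.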
comfy/comfy/ldm/audio/dit.py (2)
**231-235: Critical: `einops.rearrange` used inside `nn.Sequential` (init-time crash).** Use the layer form `Rearrange(...)` instead of calling `rearrange(...)`.

```diff
+from einops.layers.torch import Rearrange
@@
-        linear_in = nn.Sequential(
-            rearrange('b n d -> b d n') if use_conv else nn.Identity(),
+        linear_in = nn.Sequential(
+            Rearrange('b n d -> b d n') if use_conv else nn.Identity(),
             operations.Linear(dim, inner_dim, bias = not no_bias, dtype=dtype, device=device) if not use_conv else operations.Conv1d(dim, inner_dim, conv_kernel_size, padding = (conv_kernel_size // 2), bias = not no_bias, dtype=dtype, device=device),
-            rearrange('b n d -> b d n') if use_conv else nn.Identity(),
+            Rearrange('b d n -> b n d') if use_conv else nn.Identity(),
             activation
         )
@@
         self.ff = nn.Sequential(
             linear_in,
-            rearrange('b d n -> b n d') if use_conv else nn.Identity(),
+            Rearrange('b d n -> b n d') if use_conv else nn.Identity(),
             linear_out,
-            rearrange('b n d -> b d n') if use_conv else nn.Identity(),
+            Rearrange('b n d -> b d n') if use_conv else nn.Identity(),
         )
```

Also applies to: 246-251
**614-615: Safer kwargs access to avoid KeyError.** Use `.get` for optional context.

```diff
-        context = kwargs["context"]
+        context = kwargs.get("context", None)
```
comfy/.github/workflows/update-version.yml

```yaml
      - name: Commit changes
        run: |
          git config --local user.name "github-actions"
          git config --local user.email "github-actions@github.com"
          git fetch origin ${{ github.head_ref }}
          git checkout -B ${{ github.head_ref }} origin/${{ github.head_ref }}
          git add comfyui_version.py
          git diff --quiet && git diff --staged --quiet || git commit -m "chore: Update comfyui_version.py to match pyproject.toml"
          git push origin HEAD:${{ github.head_ref }}
```
**Add error handling or early exit for git operations.**

If `git push` (line 58) fails, the workflow will not fail—it will report success to GitHub. Consider adding `set -e` at the start of the script block or explicit error checking after the push.

Apply this diff to ensure the workflow fails if any git operation fails:

```diff
       - name: Commit changes
         run: |
+          set -e
           git config --local user.name "github-actions"
           git config --local user.email "github-actions@github.com"
           git fetch origin ${{ github.head_ref }}
           git checkout -B ${{ github.head_ref }} origin/${{ github.head_ref }}
           git add comfyui_version.py
           git diff --quiet && git diff --staged --quiet || git commit -m "chore: Update comfyui_version.py to match pyproject.toml"
           git push origin HEAD:${{ github.head_ref }}
```
git push origin HEAD:${{ github.head_ref }}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| - name: Commit changes | |
| run: | | |
| git config --local user.name "github-actions" | |
| git config --local user.email "github-actions@github.com" | |
| git fetch origin ${{ github.head_ref }} | |
| git checkout -B ${{ github.head_ref }} origin/${{ github.head_ref }} | |
| git add comfyui_version.py | |
| git diff --quiet && git diff --staged --quiet || git commit -m "chore: Update comfyui_version.py to match pyproject.toml" | |
| git push origin HEAD:${{ github.head_ref }} | |
| - name: Commit changes | |
| run: | | |
| set -e | |
| git config --local user.name "github-actions" | |
| git config --local user.email "github-actions@github.com" | |
| git fetch origin ${{ github.head_ref }} | |
| git checkout -B ${{ github.head_ref }} origin/${{ github.head_ref }} | |
| git add comfyui_version.py | |
| git diff --quiet && git diff --staged --quiet || git commit -m "chore: Update comfyui_version.py to match pyproject.toml" | |
| git push origin HEAD:${{ github.head_ref }} |
comfy/app/app_settings.py

```python
        try:
            file = self.user_manager.get_request_user_filepath(
                request,
                "comfy.settings.json"
            )
        except KeyError as e:
            logging.error("User settings not found.")
            raise web.HTTPUnauthorized() from e
```
**Improve error handling and provide more informative error messages.**

Verification reveals two confirmed issues:

1. **HTTP status code semantics:** The `KeyError` from `get_request_user_filepath` indicates an "Unknown user" when the client's `comfy-user` header references a user not in the system. This is a client request error (invalid user identifier), not an authentication failure. Consider returning 400 Bad Request instead of 401, or distinguish the error type in your response.
2. **Generic error message:** The exception context is lost. The actual error is `"Unknown user: {user}"` or `"Unknown filepath type: {type}"`, but the log only shows `"User settings not found."` Replace with `logging.error(f"Failed to retrieve user settings: {str(e)}")` to preserve debugging information.

Additionally, note that `save_settings()` (line 32) calls the same method without catching `KeyError`—consider consistent error handling across both methods.
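The second point, preserving the underlying error in the log, can be sketched with stdlib `logging`; the user table and message format here are illustrative:

```python
import logging

users = {"default": "/data/users/default"}  # illustrative user -> path table

def get_user_dir(user_id: str) -> str:
    try:
        return users[user_id]
    except KeyError as e:
        # Log the cause (the unknown user id) instead of a generic "not found"
        logging.error("Failed to retrieve user settings: unknown user %s", e)
        raise  # let the caller map this to an HTTP status

print(get_user_dir("default"))  # /data/users/default
```

Re-raising keeps the decision about the HTTP status (400 vs. 401) at the route handler, while the log line retains the offending identifier for debugging.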
🤖 Prompt for AI Agents
In comfy/app/app_settings.py around lines 12–19, the current KeyError handling
treats an invalid user as HTTPUnauthorized and logs a generic message; change
this to return a 400 Bad Request (or otherwise differentiate client input errors
from auth failures) when get_request_user_filepath raises KeyError, and replace
the generic log with one that includes the exception message (e.g.,
logging.error(f"Failed to retrieve user settings: {e}")); also apply the same
error handling pattern to save_settings() (around line 32) so both methods
consistently catch KeyError, return the correct status, and log the underlying
error details.
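A framework-agnostic sketch of the suggested split between client errors and auth failures. The dict-based user store, `UnknownUserError`, and the function names are hypothetical stand-ins, not the ComfyUI API:

```python
import logging

class UnknownUserError(KeyError):
    """Raised when a request header names a user not in the system."""

def get_user_filepath(users: dict, user_id: str) -> str:
    # Hypothetical stand-in for get_request_user_filepath.
    if user_id not in users:
        raise UnknownUserError(f"Unknown user: {user_id}")
    return users[user_id]

def get_settings(users: dict, user_id: str) -> tuple[int, str]:
    """Return (http_status, body): 400 for an invalid user id, not 401."""
    try:
        path = get_user_filepath(users, user_id)
        return 200, f"settings loaded from {path}"
    except UnknownUserError as e:
        # Preserve the specific reason in the log instead of a generic message.
        logging.error(f"Failed to retrieve user settings: {e}")
        return 400, "invalid user id"
```

The same `try`/`except` shape could wrap `save_settings()` for the consistency the review asks for.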
```python
try:
    with open(file) as f:
        return json.load(f)
except:
    logging.error(f"The user settings file is corrupted: {file}")
    return {}
```
Replace bare `except:` with specific exception types.
The bare `except:` clause on line 24 is a critical anti-pattern that catches all exceptions, including `SystemExit` and `KeyboardInterrupt`, which should never be suppressed.
Apply this diff to catch only JSON-related errors:

```diff
-        try:
-            with open(file) as f:
-                return json.load(f)
-        except:
-            logging.error(f"The user settings file is corrupted: {file}")
-            return {}
+        try:
+            with open(file) as f:
+                return json.load(f)
+        except (json.JSONDecodeError, OSError) as e:
+            logging.error(f"Failed to load user settings file: {e}")
+            return {}
```

Additional concern: Logging the full file path may expose sensitive directory structure information. Consider logging only the filename or using a sanitized path representation.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
In comfy/app/app_settings.py around lines 21 to 26, replace the bare except with
specific exceptions: catch json.JSONDecodeError (or ValueError) for malformed
JSON and OSError (including FileNotFoundError) for file I/O errors, capture the
exception as e, and log a sanitized path (e.g., os.path.basename(file) or
another sanitized representation) plus the exception message instead of the full
file path; ensure you import json and os if not already imported.
```python
if not os.path.exists(locales_dir):
    continue

for lang_dir in glob.glob(os.path.join(locales_dir, "*/")):
```
Sort language directories for deterministic traversal.
Line 64 correctly sorts custom node directories for deterministic ordering, but line 69 leaves language directories unsorted. This inconsistency can produce non-deterministic translation merge order when multiple custom nodes define overlapping translation keys.
Apply this diff to sort language directories:
```diff
-        for lang_dir in glob.glob(os.path.join(locales_dir, "*/")):
+        for lang_dir in sorted(glob.glob(os.path.join(locales_dir, "*/"))):
```
🤖 Prompt for AI Agents
In comfy/app/custom_node_manager.py around line 69, the loop over language
directories uses glob.glob(os.path.join(locales_dir, "*/")) without sorting,
causing non-deterministic traversal and merge order; change the iteration to use
sorted(glob.glob(os.path.join(locales_dir, "*/"))) so language directories are
processed in a deterministic, lexicographic order (matching the earlier sorted
custom node directories).
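A small runnable sketch of the fix; `collect_lang_dirs` is a hypothetical helper, but it shows why wrapping `glob.glob` in `sorted()` makes any later translation-dict merge deterministic:

```python
import glob
import os

def collect_lang_dirs(locales_dir: str) -> list[str]:
    """glob.glob returns paths in an arbitrary, filesystem-dependent
    order, so sort them to make traversal (and hence "last writer wins"
    when translation keys overlap) deterministic."""
    return sorted(glob.glob(os.path.join(locales_dir, "*/")))
```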
```python
def parse_version(version: str) -> tuple[int, int, int]:
    return tuple(map(int, version.split(".")))
```
Add error handling for invalid version strings.
The `parse_version` function will raise a `ValueError` if any version component is non-numeric. While `is_valid_version` provides pre-validation, `parse_version` is also called on validated strings from other sources (e.g., line 73). Consider adding a try-except block or documenting that callers must validate input first.

```diff
 def parse_version(version: str) -> tuple[int, int, int]:
+    try:
         return tuple(map(int, version.split(".")))
+    except (ValueError, AttributeError) as e:
+        raise ValueError(f"Invalid version format: {version}") from e
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In comfy/app/frontend_management.py around lines 32-33, parse_version currently
does tuple(map(int, version.split("."))) which will raise a raw ValueError on
malformed input; wrap the conversion in a try/except catching ValueError and
raise a clearer ValueError (or a custom InvalidVersionError) that includes the
offending version string (use "from" to preserve traceback), so callers get a
descriptive error; alternatively document in the function docstring that callers
must validate input first, but preferred fix is to add the try/except and
re-raise with a meaningful message.
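A runnable sketch of the hardened function under the review's assumptions; the three-component length check is an extra guard not in the suggested diff, added here only to match the declared return type:

```python
def parse_version(version: str) -> tuple[int, int, int]:
    """Re-raise malformed input as a descriptive ValueError,
    chaining the original error with `from` to keep the traceback."""
    try:
        parts = tuple(map(int, version.split(".")))
    except (ValueError, AttributeError) as e:
        raise ValueError(f"Invalid version format: {version!r}") from e
    if len(parts) != 3:
        raise ValueError(f"Expected MAJOR.MINOR.PATCH, got: {version!r}")
    return parts
```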
```python
def forward(
    self,
    x: torch.Tensor,
    context: Optional[torch.Tensor] = None,
    crossattn_mask: Optional[torch.Tensor] = None,
    rope_emb_L_1_1_D: Optional[torch.Tensor] = None,
    transformer_options: Optional[dict] = {},
) -> torch.Tensor:
    """
    Forward pass for video attention.

    Args:
        x (Tensor): Input tensor of shape (B, T, H, W, D) or (T, H, W, B, D) representing batches of video data.
        context (Tensor): Context tensor of shape (B, M, D) or (M, B, D),
            where M is the sequence length of the context.
        crossattn_mask (Optional[Tensor]): An optional mask for cross-attention mechanisms.
        rope_emb_L_1_1_D (Optional[Tensor]):
            Rotary positional embedding tensor of shape (L, 1, 1, D). L == THW for current video training.

    Returns:
        Tensor: The output tensor with applied attention, maintaining the input shape.
    """

    x_T_H_W_B_D = x
    context_M_B_D = context
    T, H, W, B, D = x_T_H_W_B_D.shape
    x_THW_B_D = rearrange(x_T_H_W_B_D, "t h w b d -> (t h w) b d")
    x_THW_B_D = self.attn(
        x_THW_B_D,
        context_M_B_D,
        crossattn_mask,
        rope_emb=rope_emb_L_1_1_D,
        transformer_options=transformer_options,
    )
    x_T_H_W_B_D = rearrange(x_THW_B_D, "(t h w) b d -> t h w b d", h=H, w=W)
    return x_T_H_W_B_D
```
Fix mutable default argument `transformer_options={}`.
Apply this diff:

```diff
     def forward(
         self,
         x: torch.Tensor,
         context: Optional[torch.Tensor] = None,
         crossattn_mask: Optional[torch.Tensor] = None,
         rope_emb_L_1_1_D: Optional[torch.Tensor] = None,
-        transformer_options: Optional[dict] = {},
+        transformer_options: Optional[dict] = None,
     ) -> torch.Tensor:
         """
         Forward pass for video attention.
         Args:
             x (Tensor): Input tensor of shape (B, T, H, W, D) or (T, H, W, B, D) representing batches of video data.
             context (Tensor): Context tensor of shape (B, M, D) or (M, B, D),
                 where M is the sequence length of the context.
             crossattn_mask (Optional[Tensor]): An optional mask for cross-attention mechanisms.
             rope_emb_L_1_1_D (Optional[Tensor]):
                 Rotary positional embedding tensor of shape (L, 1, 1, D). L == THW for current video training.
         Returns:
             Tensor: The output tensor with applied attention, maintaining the input shape.
         """
+        if transformer_options is None:
+            transformer_options = {}
         x_T_H_W_B_D = x
```
🤖 Prompt for AI Agents
In comfy/comfy/ldm/cosmos/blocks.py around lines 544 to 579, the forward method
uses a mutable default argument transformer_options={}, which can lead to
unexpected shared state between calls; change the signature to use
transformer_options: Optional[dict] = None and inside the function set
transformer_options = {} if transformer_options is None (preserving type and
behavior), then use that local dict for the call to self.attn.
```python
def __init__(
    self,
    block_type: str,
    x_dim: int,
    context_dim: Optional[int],
    num_heads: int,
    mlp_ratio: float = 4.0,
    bias: bool = False,
    mlp_dropout: float = 0.0,
    qkv_norm_mode: str = "per_head",
    x_format: str = "BTHWD",
    use_adaln_lora: bool = False,
    adaln_lora_dim: int = 256,
    weight_args={},
    operations=None
) -> None:
    block_type = block_type.lower()

    super().__init__()
    self.x_format = x_format
    if block_type in ["cross_attn", "ca"]:
        self.block = VideoAttn(
            x_dim,
            context_dim,
            num_heads,
            bias=bias,
            qkv_norm_mode=qkv_norm_mode,
            x_format=self.x_format,
            weight_args=weight_args,
            operations=operations,
        )
    elif block_type in ["full_attn", "fa"]:
        self.block = VideoAttn(
            x_dim, None, num_heads, bias=bias, qkv_norm_mode=qkv_norm_mode, x_format=self.x_format, weight_args=weight_args, operations=operations
        )
    elif block_type in ["mlp", "ff"]:
        self.block = GPT2FeedForward(x_dim, int(x_dim * mlp_ratio), dropout=mlp_dropout, bias=bias, weight_args=weight_args, operations=operations)
    else:
        raise ValueError(f"Unknown block type: {block_type}")

    self.block_type = block_type
    self.use_adaln_lora = use_adaln_lora

    self.norm_state = nn.LayerNorm(x_dim, elementwise_affine=False, eps=1e-6)
    self.n_adaln_chunks = 3
    if use_adaln_lora:
        self.adaLN_modulation = nn.Sequential(
            nn.SiLU(),
            operations.Linear(x_dim, adaln_lora_dim, bias=False, **weight_args),
            operations.Linear(adaln_lora_dim, self.n_adaln_chunks * x_dim, bias=False, **weight_args),
        )
    else:
        self.adaLN_modulation = nn.Sequential(nn.SiLU(), operations.Linear(x_dim, self.n_adaln_chunks * x_dim, bias=False, **weight_args))
```
Fix mutable default argument `weight_args={}`.
Apply this diff:

```diff
     def __init__(
         self,
         block_type: str,
         x_dim: int,
         context_dim: Optional[int],
         num_heads: int,
         mlp_ratio: float = 4.0,
         bias: bool = False,
         mlp_dropout: float = 0.0,
         qkv_norm_mode: str = "per_head",
         x_format: str = "BTHWD",
         use_adaln_lora: bool = False,
         adaln_lora_dim: int = 256,
-        weight_args={},
+        weight_args=None,
         operations=None
     ) -> None:
         block_type = block_type.lower()

         super().__init__()
+        if weight_args is None:
+            weight_args = {}
         self.x_format = x_format
```
🤖 Prompt for AI Agents
In comfy/comfy/ldm/cosmos/blocks.py around lines 609 to 661, the __init__ uses a
mutable default argument weight_args={} which can cause shared-state bugs;
change the signature to default weight_args: Optional[dict]=None and inside the
method set weight_args = {} if weight_args is None, leaving all subsequent uses
unchanged so callers and internal assignments behave the same without sharing a
single dict across instances.
```python
def __init__(
    self,
    x_dim: int,
    context_dim: int,
    num_heads: int,
    block_config: str,
    mlp_ratio: float = 4.0,
    x_format: str = "BTHWD",
    use_adaln_lora: bool = False,
    adaln_lora_dim: int = 256,
    weight_args={},
    operations=None
):
    super().__init__()
    self.blocks = nn.ModuleList()
    self.x_format = x_format
    for block_type in block_config.split("-"):
        self.blocks.append(
            DITBuildingBlock(
                block_type,
                x_dim,
                context_dim,
                num_heads,
                mlp_ratio,
                x_format=self.x_format,
                use_adaln_lora=use_adaln_lora,
                adaln_lora_dim=adaln_lora_dim,
                weight_args=weight_args,
                operations=operations,
            )
        )
```
Fix mutable default argument `weight_args={}`.
Apply this diff:

```diff
     def __init__(
         self,
         x_dim: int,
         context_dim: int,
         num_heads: int,
         block_config: str,
         mlp_ratio: float = 4.0,
         x_format: str = "BTHWD",
         use_adaln_lora: bool = False,
         adaln_lora_dim: int = 256,
-        weight_args={},
+        weight_args=None,
         operations=None
     ):
         super().__init__()
+        if weight_args is None:
+            weight_args = {}
         self.blocks = nn.ModuleList()
```
🤖 Prompt for AI Agents
In comfy/comfy/ldm/cosmos/blocks.py around lines 753 to 783, the __init__ uses a
mutable default argument weight_args={} which can cause shared-state bugs;
change the signature to weight_args: Optional[dict] = None (or weight_args=None)
and inside the constructor set weight_args = {} if weight_args is None before it
is used, and update any type hints/imports as needed.
```python
def forward(
    self,
    x: torch.Tensor,
    emb_B_D: torch.Tensor,
    crossattn_emb: torch.Tensor,
    crossattn_mask: Optional[torch.Tensor] = None,
    rope_emb_L_1_1_D: Optional[torch.Tensor] = None,
    adaln_lora_B_3D: Optional[torch.Tensor] = None,
    transformer_options: Optional[dict] = {},
) -> torch.Tensor:
    for block in self.blocks:
        x = block(
            x,
            emb_B_D,
            crossattn_emb,
            crossattn_mask,
            rope_emb_L_1_1_D=rope_emb_L_1_1_D,
            adaln_lora_B_3D=adaln_lora_B_3D,
            transformer_options=transformer_options,
        )
    return x
```
Fix mutable default argument `transformer_options={}`.
Apply this diff:

```diff
     def forward(
         self,
         x: torch.Tensor,
         emb_B_D: torch.Tensor,
         crossattn_emb: torch.Tensor,
         crossattn_mask: Optional[torch.Tensor] = None,
         rope_emb_L_1_1_D: Optional[torch.Tensor] = None,
         adaln_lora_B_3D: Optional[torch.Tensor] = None,
-        transformer_options: Optional[dict] = {},
+        transformer_options: Optional[dict] = None,
     ) -> torch.Tensor:
+        if transformer_options is None:
+            transformer_options = {}
         for block in self.blocks:
```
🤖 Prompt for AI Agents
In comfy/comfy/ldm/cosmos/blocks.py around lines 785 to 805, the forward method
uses a mutable default argument transformer_options={} which can lead to
unintended shared state; change the signature to use transformer_options:
Optional[dict] = None and at the start of the method set transformer_options =
{} if transformer_options is None, then proceed to pass transformer_options to
each block call unchanged; update the type hint to Optional[dict] to match the
new default.
### AMD GPUs (Experimental: Windows and Linux), RDNA 3, 3.5 and 4 only.

These have less hardware support than the builds above but they work on windows. You also need to install the pytorch version specific to your hardware.
Capitalize "windows" operating system name.
Line 224 uses lowercase "windows"; Microsoft's operating system should be capitalized to "Windows" for consistency and correctness.
```diff
-These have less hardware support than the builds above but they work on windows. You also need to install the pytorch version specific to your hardware.
+These have less hardware support than the builds above but they work on Windows. You also need to install the pytorch version specific to your hardware.
```
🧰 Tools
🪛 LanguageTool
[uncategorized] ~224-~224: “Windows” (operating system by Microsoft) is a proper noun and needs to be capitalized.
Context: ... than the builds above but they work on windows. You also need to install the pytorch v...
(A_WINDOWS)
🤖 Prompt for AI Agents
In comfy/README.md around line 224, change the lowercase "windows" to the
correctly capitalized "Windows" in the sentence so the OS name is consistent and
correct; update the word only and keep the rest of the sentence unchanged.
Update comfy to the latest official GitHub repo version. The workflow was edited to stay compatible with the new version. All features are functional (including the other merge requests).
Summary by CodeRabbit
Release Notes
New Features
Improvements
Infrastructure