Description
Hi,
First of all, thanks for the ComfyUI integration for Wan/MultiTalk – it’s awesome and super useful.
I’m trying to reproduce the specific “cowboy singing with a guitar” example shown on the MeiGen-MultiTalk demo page using your ComfyUI workflow.
Would you be willing to share the exact ComfyUI workflow JSON and settings you used for that clip? Specifically:

1. The ComfyUI workflow JSON file for that cowboy-with-guitar example (either attached directly or via a Gist / link).
2. The core generation settings (see the placeholder sketch after this list), including:
   - Model / checkpoint(s) used
   - Sampler + steps
   - CFG / guidance scales (text + audio)
   - Resolution, FPS, and clip length (frames/seconds)
   - Seed / any fixed seeds you used
3. Any LoRA / VAE / extra models or special nodes you used (e.g. pre-processing nodes, motion / face refinement, post-processing, etc.).
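To make it concrete, here’s a rough sketch of the kind of settings summary I’m hoping for. All values and field names below are placeholders I made up, not your actual configuration and not the real ComfyUI workflow schema:

```json
{
  "_note": "Placeholder template only - every value is a made-up example, not the real settings",
  "model_checkpoint": "e.g. Wan I2V base + MultiTalk weights",
  "sampler": "e.g. euler / unipc",
  "steps": 30,
  "cfg_text": 5.0,
  "cfg_audio": 4.0,
  "resolution": "e.g. 480x832",
  "fps": 25,
  "num_frames": 81,
  "seed": 123456,
  "extra_models": ["e.g. LoRA / VAE / refinement or post-processing nodes, if any"]
}
```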
My goal is to understand your exact configuration and replicate that specific result, not just get something “similar”. I’m happy to credit you and link back to the repo if I ever share results or a write-up.
If it’s easier, a single exported .json ComfyUI workflow file with a short note like “this is the cowboy singing workflow from the demo” would be perfect.
Thanks a lot for your time and for maintaining this wrapper!