SageAttention on ComfyUI #11
Hi @blepping I already installed it like you said and got the node working, but so far the speed is still the same. I tested both SDXL and Flux at 1024x1024, and I'm not sure whether the node is working or not because the speed is unchanged. I get this notice when the image finishes generating: Can you give me some example JSON files to check whether this node works or not? Thank you
edit: Note, this post applies to the old gist implementation. I don't recommend using that. Please use the version in my bleh node pack: https://github.com/blepping/ComfyUI-bleh @wardensc2 thanks for giving it a try. i don't think there's really a way to do it wrong in the workflow. attention improvements seem to make the most difference on large images. i didn't test with Flux (not sure if it uses the same kind of attention or has compatible sizes). for my tests with SDXL at 4096x4096 resolution on a 4060Ti, i got 8.94s/it with PyTorch attention and 6.71s/it with SageAttention (about a 25% speed increase). the difference might not be big enough to see at small resolutions like 1024x1024. (think i might have been testing with smooth_k disabled - it didn't seem necessary with SDXL and should be a bit faster.)
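For reference, the roughly 25% figure quoted above follows directly from the two timings (this is just arithmetic on the numbers in the comment, not a new benchmark):

```python
# Per-iteration timings quoted above: SDXL at 4096x4096 on a 4060Ti.
pytorch_s_per_it = 8.94  # PyTorch attention
sage_s_per_it = 6.71     # SageAttention

# Fractional reduction in time per iteration.
speedup = (pytorch_s_per_it - sage_s_per_it) / pytorch_s_per_it
print(f"{speedup:.1%}")  # → 24.9%, i.e. about a 25% speed increase
```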
edit: ComfyUI users that want to use SageAttention can use the version in my bleh node pack: https://github.com/blepping/ComfyUI-bleh
See the BlehSageAttentionSampler node (there is also a global one, but I recommend using the sampler version whenever possible). ComfyUI now also has some built-in support for SageAttention; however, it will fail for models with unsupported head sizes (SD 1.5 has some, for example). I think my version has a number of improvements, like letting you pass parameters to SageAttention and set a fallback attention type, but I might be biased!
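The fallback behavior described above can be sketched as a simple dispatcher that checks the head dimension before choosing an attention backend. This is a minimal illustration, not the node pack's actual implementation; the function names and the supported-head-size set are assumptions for the example:

```python
# Hypothetical supported head dimensions -- check SageAttention's docs
# for the real set; this is only for illustration.
SUPPORTED_HEAD_DIMS = {64, 96, 128}

def make_attention(sage_fn, fallback_fn, supported=SUPPORTED_HEAD_DIMS):
    """Build an attention callable that dispatches to sage_fn when the
    head dimension is supported, and to fallback_fn otherwise."""
    def attention(q, k, v, head_dim):
        if head_dim in supported:
            return sage_fn(q, k, v)
        return fallback_fn(q, k, v)
    return attention
```

With a dispatcher like this, a model such as SD 1.5 that mixes supported and unsupported head sizes can still run: supported layers get the faster path, the rest silently fall back to (say) PyTorch attention instead of erroring out.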