Replies: 4 comments 2 replies
-
Implementation here: https://git.mmaker.moe/mmaker/sd-webui-tome
-
This is probably important enough that it should be a built-in feature rather than an external extension. But of course that should only happen after @dbolya has published it as a pip-installable package.
-
Hi, original author here. Some quick suggestions:
Note that the ToMe patch can be applied right before generation just fine; it doesn't need to happen on model load. Applying the patch is free (it just sets some class variables), and it can be applied any number of times to the same model without adverse effects (e.g., to change the parameters). The way I implemented it for testing was in the txt2img processing function, right before sampling. I think the best way to implement it would be to add one of those boxes under the generation parameters, like the ControlNet extension.
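To make the "applying the patch is free" point concrete, here is a minimal self-contained mock of that pattern (this is *not* the real tomesd code; the real entry point is `tomesd.apply_patch(model, ratio=...)`, but the body below is a toy stand-in just to show why re-application is cheap and safe):

```python
# Toy illustration of a patch that only sets class variables.
# No weights are copied or modified, so applying it costs nothing,
# and re-applying simply overwrites the parameters.

class Block:
    """Stand-in for a transformer block inside the diffusion model."""
    _tome_info = None  # class variable the "patch" sets

def apply_patch(block_cls, ratio=0.5):
    # "Applying the patch is free": only a class attribute changes.
    block_cls._tome_info = {"ratio": ratio}
    return block_cls

apply_patch(Block, ratio=0.5)
apply_patch(Block, ratio=0.3)     # safe to re-apply to change parameters
print(Block._tome_info["ratio"])  # 0.3
```

Because nothing but class attributes change, calling this per-generation (e.g., from the txt2img processing function) adds no measurable overhead.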
-
Not ready for Macs on Apple Silicon?
-
"Token Merging (ToMe) speeds up transformers by merging redundant tokens, which means the transformer has to do less work. We apply this to the underlying transformer blocks in Stable Diffusion in a clever way that minimizes quality loss while keeping most of the speed-up and memory benefits. ToMe for SD doesn't require training and should work out of the box for any Stable Diffusion model."
paper: https://arxiv.org/abs/2303.17604
github: https://github.com/dbolya/tomesd
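The "merging redundant tokens" idea in the quote above can be sketched in a few lines. This is only a toy (the real tomesd uses a much faster bipartite soft matching, not an all-pairs search): find the most similar pair of token vectors and average them, so the transformer has one fewer token to process.

```python
# Toy token merging: repeatedly average the most similar pair of tokens.
# Not the actual tomesd algorithm -- just the underlying intuition.
import math

def cos(a, b):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def merge_once(tokens):
    """Merge the single most redundant (most similar) pair of tokens."""
    i, j = max(
        ((i, j) for i in range(len(tokens)) for j in range(i + 1, len(tokens))),
        key=lambda ij: cos(tokens[ij[0]], tokens[ij[1]]),
    )
    merged = [(x + y) / 2 for x, y in zip(tokens[i], tokens[j])]
    return [t for k, t in enumerate(tokens) if k not in (i, j)] + [merged]

# Two near-duplicate tokens and one distinct token:
tokens = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
print(len(merge_once(tokens)))  # 2
```

Fewer tokens means less attention/MLP work per block, which is where the speed-up and memory savings come from; the quality cost stays small as long as only genuinely redundant tokens get merged.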