How to Contribute Code
Alex "mcmonkey" Goodwin edited this page Jul 19, 2024
·
8 revisions
For general improvements and bug fixes, just make a pull request.
- Before doing anything, make sure your change is wanted: check that there's an open Feature Request or Bug Report on the issues page.
- Try to make a single pull request for each change to make reviewing easier, as opposed to large/bulky PRs.
- First-time contributors especially should focus on very simple and small tasks, and take on tougher ones after a PR or two have been successfully merged.
- Avoid adding "sensitive" code, e.g. eval(...), unless absolutely unavoidable.
- When you submit a pull request, please make sure you write a clear title and good description text.
- Description text should be detailed but concise. What issue are you addressing, how does this PR address it, what have you done to test the change, what potential concerns or side effects may apply?
- No new frontend features should be pushed to the ComfyUI/web folder. Instead, they should be submitted to https://github.com/Comfy-Org/ComfyUI_frontend
- Bug fixes are still welcome in the main repo.
Checklist of requirements for a PR that adds support for a new model architecture:
- Have a minimal implementation of the model code that depends only on PyTorch, under a license compatible with the GPL license that ComfyUI uses.
- Provide a reference image with sampling settings/seed/etc. so that we can make sure the ComfyUI implementation matches the reference one.
- Replace all attention functions with the ComfyUI optimized_attention function (a usage sketch follows the SDPA example below).
- If you are part of the team that authored the model architecture itself:
  - Please release your primary models in .safetensors file format, not legacy .ckpt pickle files.
  - Please include proper identifying metadata in the header of the file (a sketch of writing such metadata follows this checklist).
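For the metadata requirement above, a minimal sketch of saving a checkpoint as .safetensors with header metadata using the safetensors library is shown below. The state dict contents and metadata keys are illustrative assumptions, not a required schema.

import torch
from safetensors.torch import save_file

# Illustrative state dict; in practice this comes from your trained model.
state_dict = {"model.diffusion_model.example_weight": torch.zeros(8, 8)}

# Header metadata must map strings to strings; these keys are examples only.
metadata = {
    "modelspec.architecture": "my-model-arch-v1",
    "modelspec.title": "My Model 1.0",
}

save_file(state_dict, "my_model.safetensors", metadata=metadata)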
Example of the SDPA implementation:
import torch

def optimized_attention(q, k, v, heads, mask=None, attn_precision=None, skip_reshape=False):
    if skip_reshape:
        # q, k, v already have shape (batch, heads, tokens, dim_head)
        b, _, _, dim_head = q.shape
    else:
        # q, k, v have shape (batch, tokens, heads * dim_head); split out the heads
        b, _, dim_head = q.shape
        dim_head //= heads
        q, k, v = map(
            lambda t: t.view(b, -1, heads, dim_head).transpose(1, 2),
            (q, k, v),
        )

    out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)
    out = (
        out.transpose(1, 2).reshape(b, -1, heads * dim_head)
    )
    return out
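As a usage sketch for the checklist item about replacing attention functions: a ported model's attention block would typically call optimized_attention instead of its own implementation. The MyAttention module below is hypothetical, and it assumes optimized_attention is importable from comfy.ldm.modules.attention; treat both as illustrative rather than the exact code a given port needs.

import torch.nn as nn

from comfy.ldm.modules.attention import optimized_attention  # assumed import path

class MyAttention(nn.Module):  # hypothetical attention block from a ported model
    def __init__(self, dim, heads):
        super().__init__()
        self.heads = heads
        self.to_qkv = nn.Linear(dim, dim * 3)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, x, mask=None):
        # x: (batch, tokens, dim); q/k/v stay in the flat layout, since
        # optimized_attention handles the per-head reshape internally (skip_reshape=False).
        q, k, v = self.to_qkv(x).chunk(3, dim=-1)
        out = optimized_attention(q, k, v, self.heads, mask=mask)
        return self.to_out(out)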
Examples in the Code:
After that you can just ping comfyanonymous on Matrix or @comfyanon on Discord and he will take a look.