ComfyUI to Gradio/Spaces blog #2553
base: main
Conversation
Also cc @abidlabs, @asomoza and @Vaibhavs10 for viz
For that, a minimal Gradio app would be:

```py
if __name__ == "__main__":
    # Comment out the main() call
```
It took me a bit of time to understand this comment line because we do not see the ComfyUI-exported Python code that contains this `main()` function definition.
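
For context, a hedged sketch of the entry point that the ComfyUI-to-Python export typically ends with (the exported file itself is not shown in this excerpt, so names and body are illustrative):

```py
# Illustrative sketch of the tail of an exported workflow script (not the exact blog code).
def main():
    # Runs every node of the exported workflow with the values hard-coded in ComfyUI.
    pass

if __name__ == "__main__":
    main()  # this is the call that gets commented out before launching the Gradio app
```
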
```py
)
app.launch(share=True)
```
I would have added a screenshot of the generated Gradio UI that we just wrote (I know it's not a Gradio tutorial though)
+1, I think this can be nice and add some clarity
```
+ def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
```

And inside the function, we need to find the hard-coded values of the nodes we want and replace them with the variables we would like to control, such as:
Maybe presenting the inputs + output(s) (a Markdown list for instance) before the Gradio code could help (just so we're prepared to make the association between the next code block and the previous one)
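
Since the excerpt cuts off right before the example, here is a hedged sketch of what such a replacement could look like (the node call is illustrative; the real objects and keyword names come from the exported script):

```py
# Illustrative sketch only (node objects come from the exported ComfyUI script):
#
#   Before (as exported, hard-coded in the ComfyUI graph):
#       cliptextencode.encode(text="a prompt typed in ComfyUI", clip=clip)
#
#   After (driven by the Gradio inputs):
#       cliptextencode.encode(text=prompt, clip=clip)
#
# wrapped inside the new function signature:
def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
    ...
```
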
valid_models = [
    getattr(loader[0], 'patcher', loader[0])  # use the underlying .patcher when the loader exposes one
    for loader in model_loaders
    if not isinstance(loader[0], dict) and not isinstance(getattr(loader[0], 'patcher', None), dict)
]
Nice that you found a way to automate this
Looks good to me regarding ZeroGPU
One solution is to take a much simpler (meaning only one or two model-loading nodes) exported workflow as an example (either for the whole blog, or only for this part)
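
For readers reconstructing this part, a hedged sketch of how the snippet above could be wired up (the loader variables are placeholders, and `load_models_gpu` is assumed from ComfyUI's `model_management` module):

```py
# Sketch under assumptions: `comfy.model_management` is importable because the app runs
# inside a ComfyUI checkout, and `model_loaders` holds the outputs of the exported
# loader nodes, instantiated once at import time (outside the GPU-decorated function).
from comfy import model_management

def preload_models(model_loaders):
    # Keep the underlying .patcher when a loader exposes one, and skip plain dicts
    # (e.g. raw state dicts) that cannot be pushed to the GPU this way.
    valid_models = [
        getattr(loader[0], 'patcher', loader[0])
        for loader in model_loaders
        if not isinstance(loader[0], dict)
        and not isinstance(getattr(loader[0], 'patcher', None), dict)
    ]
    model_management.load_models_gpu(valid_models)
```
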
@@ -5227,3 +5227,15 @@
- community
- research
- open-source-collab

- local: run-comfyui-workflows-on-spaces
  title: "Run ComfyUI workflows for free with Gradio on Spaces"
consider shorter title

Suggested change:
title: "Run ComfyUI workflows for free on Spaces"
## Intro
In this tutorial I will present a step-by-step guide on how I have converted a complex ComfyUI workflow to a simple Gradio application, and how I have deployed this application on Hugging Face Spaces ZeroGPU serverless structure, which allows for it to be deployed and ran for free on a serverless manner. In this tutorial, we are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.

Suggested change:
In this tutorial I will present a step-by-step guide on how to convert a complex ComfyUI workflow into a simple Gradio application, and how to deploy it on Hugging Face Spaces' ZeroGPU serverless infrastructure, which allows it to be deployed and run for free. We are going to work with [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68), but you can follow the tutorial with any workflow that you would like.
# Add a title
gr.Markdown("# FLUX Style Shaping")

with gr.Row():
Wouldn't `gr.Interface` be easier to use here?
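
For comparison, a rough sketch of a `gr.Interface` version (labels, ranges and defaults are assumptions, not the blog's actual values):

```py
import gradio as gr

def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
    # Placeholder for the function that wraps the exported ComfyUI workflow.
    return None

app = gr.Interface(
    fn=generate_image,
    inputs=[
        gr.Textbox(label="Prompt"),
        gr.Image(label="Structure Image", type="filepath"),
        gr.Image(label="Style Image", type="filepath"),
        gr.Slider(0, 50, value=15, label="Depth Strength"),
        gr.Slider(0, 1, value=0.5, label="Style Strength"),
    ],
    outputs=gr.Image(label="Output Image"),
    title="FLUX Style Shaping",
)

if __name__ == "__main__":
    app.launch(share=True)
```

A `gr.Blocks` layout gives finer control over rows and columns, while `gr.Interface` trades that flexibility for less code; either works for this demo.
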
## 3. Preparing it to run on Hugging Face Spaces
Now with our Gradio demo working, we may feel tempted to just hit an export button and get it working on Hugging Face Spaces, however, as we have all models loaded locally, if we just exported all our folder to Spaces, we would upload dozens of GB of models on Hugging Face, which is not supported, specially as all this models should have a mirror on Hugging Face.

Suggested change:
Now with our Gradio demo working, we may feel tempted to just upload everything to Hugging Face Spaces. However, this would require uploading dozens of GB of models to Hugging Face, which is not only slow but also unnecessary, as all of these models already exist on Hugging Face!
So, we need to first install `pip install huggingface_hub` if we don't have it already, and then we need to do the following on the top of our `app.py` file:

Suggested change:
Instead, we will first install `huggingface_hub` with `pip install huggingface_hub` if we don't have it already, and then do the following at the top of our `app.py` file:
hf_hub_download(repo_id="comfyanonymous/flux_text_encoders", filename="t5xxl_fp16.safetensors", local_dir="models/text_encoders/t5")
```
This will map all local models on ComfyUI to a Hugging Face version of them. Unfortunately, currently there is no way to automate this process, you gotta find the models of your workflow on Hugging Face and map it to the same ComfyUI folders that.

Suggested change:
This will map all local models on ComfyUI to their Hugging Face versions. Unfortunately, there is currently no way to automate this process; you need to find the models of your workflow on Hugging Face and map them to the same ComfyUI folders.
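
For completeness, a sketch of what the top of `app.py` could look like (only the first call is taken from the hunk above; the second is a placeholder showing the pattern):

```py
from huggingface_hub import hf_hub_download

# Map each local ComfyUI model to its Hugging Face mirror, downloading into the
# same folder structure ComfyUI expects.
hf_hub_download(repo_id="comfyanonymous/flux_text_encoders",
                filename="t5xxl_fp16.safetensors",
                local_dir="models/text_encoders/t5")
# Placeholder: replace repo_id/filename/local_dir with the models your workflow uses.
hf_hub_download(repo_id="your-username/your-model-repo",
                filename="your_model.safetensors",
                local_dir="models/checkpoints")
```
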
If you are running models that are not on Hugging Face, you need to find a way to programmatically download them to the correct folder via Python code. This will run only once when the Hugging Face Space starts.
Now, we will do one last modification to the `app.py` file, which is to include the function decoration for ZeroGPU

Suggested change:
Now, we will make one last modification to the `app.py` file: include the function decoration for ZeroGPU, which will let us do inference for free!
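
A hedged sketch of that decoration, assuming the `spaces` package available on ZeroGPU Spaces and the `generate_image` function from earlier:

```py
import spaces

@spaces.GPU(duration=60)  # duration in seconds is optional; 60 is an illustrative budget
def generate_image(prompt, structure_image, style_image, depth_strength, style_strength):
    # The wrapped ComfyUI workflow runs here on a GPU allocated on demand by ZeroGPU.
    ...
```
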
Very nice tutorial @apolinario! It's a bit of work, but nice to see there's a repeatable path that users can follow
very cool! 🎉
1. Export your ComfyUI workflow using [`ComfyUI-to-Python-Extension`](https://github.com/pydn/ComfyUI-to-Python-Extension);
2. Create a Gradio app for the exported Python;
3. Deploy it on Hugging Face Spaces with ZeroGPU;
4. Soon we'll automate this entire process;

Suggested change:
4. Soon: this entire process will be automated;
- Knowing how to run ComfyUI: this tutorial requires you to be able to grab a ComfyUI workflow and run it on your machine, installing missing nodes and finding the missing models (we do plan to automate this step soon though);
- Getting the workflow you would like to export up and running (if you want to learn without a workflow in mind, feel free to get [Nathan Shipley's Flux[dev] Redux + Flux[dev] Depth ComfyUI workflow](https://gist.github.com/nathanshipley/7a9ac1901adde76feebe58d558026f68) up and running);
- A little bit of coding knowledge: but I would encourage beginners to attempt to follow it, as it can be a really nice introduction to Python, Gradio and Spaces without too much prior coding knowledge needed.

Suggested change:
- A little bit of coding knowledge: but I would encourage beginners to attempt to follow it, as it can be a really nice introduction to Python, Gradio and Spaces without too much prior programming knowledge needed.
## 1. Exporting your ComfyUI workflow to run on pure Python
ComfyUI is awesome, but as the name indicates, it contains a UI. But Comfy is way more than a UI, it contains it's own backend that runs on Python. As we don't want to use Comfy's node-based UI for the purposes of this tutorial, we want to export the code to be ran on pure python.

Suggested change:
ComfyUI is awesome, and as the name indicates, it contains a UI. But Comfy is way more than a UI: it contains its own backend that runs on Python. As we don't want to use Comfy's node-based UI for the purposes of this tutorial, we need to export the code to be run in pure Python.
Thankfully, [Peyton DeNiro](https://github.com/pydn) has created this incredible [ComfyUI-to-Python-Extension](https://github.com/pydn/ComfyUI-to-Python-Extension) for ComfyUI that will export any Comfy workflow to a python script that can run any workflow of ComfyUI with Python, not firing up the UI.

Suggested change:
Thankfully, [Peyton DeNiro](https://github.com/pydn) has created this incredible [ComfyUI-to-Python-Extension](https://github.com/pydn/ComfyUI-to-Python-Extension) that exports any Comfy workflow to a Python script, enabling you to run a workflow without firing up the UI.
![comfy-to-gradio](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/comfyu-to-gradio/export_as_python_steps.png)
The easiest way to install the extension is to (1) search for `ComfyUI to Python Extension` in the Custom Nodes Manager Menu of the ComfyUI Manager extension and (2) install it, then, for the option to appear, you have to go on the (3) settings on the bottom right of the UI, (4) disable the new menu and hit (5) `Save as Script`. With that, you will end up with a Python script.

Suggested change:
The easiest way to install the extension is to (1) search for `ComfyUI to Python Extension` in the Custom Nodes Manager Menu of the ComfyUI Manager extension and (2) install it. Then, for the option to appear, you have to (3) go to the settings on the bottom right of the UI, (4) disable the new menu and (5) hit `Save as Script`. With that, you will end up with a Python script.
## 4. Exporting to Spaces and running on ZeroGPU
Now that you have your code ready for Hugging Face Spaces, it's time to export your demo to run there.

Suggested change:
The code is ready - it's time to export our demo to run on Hugging Face Spaces.
### Fix requirements
Firstly, you need to modify your `requirements.txt` to include the requirements in the `custom_nodes` folder, to add append the requirements of the nodes you want to work for this workflow to the `requirements.txt` on the root folder, as Hugging Face Spaces can only deal with a single `requirements.txt` file.

Suggested change:
Firstly, you need to modify your `requirements.txt` to include the requirements in the `custom_nodes` folder. As Hugging Face Spaces can only handle a single `requirements.txt` file, make sure to add the requirements of the nodes used in this workflow to the `requirements.txt` in the root folder.
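
A small helper sketch (not from the blog) that merges every custom node's requirements into the root file, assuming the standard ComfyUI folder layout:

```py
# Merge each custom node's requirements.txt into the root requirements.txt,
# since a Space installs dependencies only from the single file at the repo root.
from pathlib import Path

root = Path("requirements.txt")
lines = set(root.read_text().splitlines()) if root.exists() else set()
for req in Path("custom_nodes").glob("*/requirements.txt"):
    lines.update(req.read_text().splitlines())
root.write_text("\n".join(sorted(line for line in lines if line.strip())) + "\n")
```
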
You can see the illustration below. You need to do the same process for all `custom_nodes`:

Suggested change:
See the illustration below; the same process needs to be repeated for all `custom_nodes`:
### If you are not a PRO subscriber (skip this step if you are)
If are not a Hugging Face PRO subscriber, you need to apply for a ZeroGPU grant, visit the Settings page of your Space and apply for a grant. Request ZeroGPU. I will grant everybody that requests a ZeroGPU grant for ComfyUI backends.

Suggested change:
In case you aren't a Hugging Face PRO subscriber, you need to apply for a ZeroGPU grant. You can do so easily by going to the Settings page of your Space and submitting a grant request for ZeroGPU. All ZeroGPU grant requests for Spaces with ComfyUI backends will be granted 🎉.
Preparing the Article

… `md` file. You can also specify `guest` or `org` for the authors.

Getting a Review
A review would be nice: @cbensimon, @pcuenca and @linoytsaban; @cbensimon, I'm unsure about the last part with respect to moving the models outside of the function, do you think there is a more elegant way to convey this? Maybe just the link to the diff without hardcoding it?