Setting Up ComfyUI for Flux and Other Models #46

Open
wants to merge 4 commits into
base: main
Choose a base branch
from
Open
Show file tree
Hide file tree
Changes from all commits
Commits
File filter

Filter by extension

Filter by extension

Conversations
Failed to load comments.
Loading
Jump to
Jump to file
Failed to load files.
Loading
Diff view
Diff view
98 changes: 98 additions & 0 deletions advanced/get_started_with_Flux.mdx
@@ -0,0 +1,98 @@
---
title: "Setting Up ComfyUI for Flux and Other Models"
description: "How to set up ComfyUI for Flux and other models"
version: "English"
---

## Setting Up ComfyUI for Flux and Other Models

### Model Placement

First, let's discuss where to place different types of models in your ComfyUI folder structure:

1. **Checkpoints**: Place in `ComfyUI/models/checkpoints/`
- Examples: Flux1-dev-fp8, Flux1-schnell-fp8

2. **UNET Models**: Place in `ComfyUI/models/unet/`
- Examples: flux1-dev.safetensors, flux1-schnell.safetensors

3. **VAE Models**: Place in `ComfyUI/models/vae/`
- Example: flux_vae.safetensors

4. **CLIP Models**: Place in `ComfyUI/models/clip/`
- Examples: t5xxl_fp16.safetensors, clip_l.safetensors

5. **ControlNet Models**: Place in `ComfyUI/models/controlnet/`
- Examples: instantx_flux_canny.safetensors, flux_depth.safetensors

6. **LoRA Models**: Place in `ComfyUI/models/loras/`

7. **Upscale Models**: Place in `ComfyUI/models/upscale_models/`
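
If you prefer to prepare these folders from a script, the following minimal Python sketch (not an official ComfyUI utility) creates the sub-folders listed above so downloads have a known destination. The `COMFYUI_ROOT` path is an assumption — point it at your own installation.

```python
from pathlib import Path

COMFYUI_ROOT = Path("ComfyUI")  # assumption: ComfyUI is cloned into ./ComfyUI

MODEL_DIRS = [
    "models/checkpoints",      # Flux1-dev-fp8, Flux1-schnell-fp8
    "models/unet",             # flux1-dev.safetensors, flux1-schnell.safetensors
    "models/vae",              # flux_vae.safetensors
    "models/clip",             # t5xxl_fp16.safetensors, clip_l.safetensors
    "models/controlnet",       # instantx_flux_canny.safetensors, flux_depth.safetensors
    "models/loras",
    "models/upscale_models",
]

for rel in MODEL_DIRS:
    path = COMFYUI_ROOT / rel
    path.mkdir(parents=True, exist_ok=True)  # create the folder if it is missing
    print(f"ready: {path}")
```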

## Getting Started with Flux Models

### Flux Dev Model

1. Download the Flux Dev model:
- For UNET: `flux1-dev.safetensors` (23.8GB)
- For FP8 Checkpoint: `flux1-dev-fp8` (17.2GB)

2. Place the model in the appropriate folder as mentioned above.

3. Use the following workflow example (a minimal API-format sketch follows these steps):
- For UNET version: Load the model using the "Load Diffusion Model" node.
- For FP8 Checkpoint version: Use the "Load Checkpoint" node and set CFG to 1.0.

4. Set your prompt and negative prompt in the respective nodes.

5. Connect the nodes and click "Queue Prompt" to generate an image.
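
If you drive ComfyUI through its HTTP API instead of the graph UI, the steps above translate roughly into the sketch below. This is a hedged example, not an official workflow: the checkpoint filename (`flux1-dev-fp8.safetensors`), the prompt text, and the server address (`http://127.0.0.1:8188`) are assumptions you should adjust to your setup.

```python
import json
import urllib.request

# API-format graph: node id -> {"class_type": ..., "inputs": ...}.
# Links are written as ["source_node_id", output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",                 # "Load Checkpoint"
          "inputs": {"ckpt_name": "flux1-dev-fp8.safetensors"}},  # assumed filename
    "2": {"class_type": "CLIPTextEncode",                         # positive prompt
          "inputs": {"text": "a cat holding a sign that says hello", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",                         # negative prompt (empty works well with CFG 1.0)
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
                     "latent_image": ["4", 0], "seed": 42, "steps": 20,
                     "cfg": 1.0,                                   # CFG 1.0 as noted above
                     "sampler_name": "euler", "scheduler": "simple", "denoise": 1.0}},
    "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage", "inputs": {"images": ["6", 0], "filename_prefix": "flux_dev"}},
}

# Queue the prompt on a locally running ComfyUI server.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```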

### Flux Schnell Model

1. Download the Flux Schnell model:
- For UNET: `flux1-schnell.safetensors` (23.8GB)
- For FP8 Checkpoint: `flux1-schnell-fp8` (17.2GB)

2. Place the model in the appropriate folder.

3. Use a similar workflow to the Flux Dev model, but adjust for the Schnell version:
- Schnell is a distilled 4-step model, so you may need to adjust your sampling settings accordingly.
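
Assuming the same API-format workflow as the Flux Dev sketch above, the Schnell adjustments are small — roughly 4 sampling steps and CFG 1.0 for the distilled model. The checkpoint filename is again an assumption.

```python
# Swap in the Schnell checkpoint and shorten the sampling schedule.
workflow["1"]["inputs"]["ckpt_name"] = "flux1-schnell-fp8.safetensors"  # assumed filename
workflow["5"]["inputs"].update({"steps": 4, "cfg": 1.0})
```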

## Using ControlNet with Flux

1. Download ControlNet models for Flux, such as:
- InstantX Canny: `instantx_flux_canny.safetensors`
- Depth ControlNet: `flux_depth.safetensors`

2. Place these files in the `ComfyUI/models/controlnet/` directory.

3. In your workflow, add a "ControlNet" node and connect it to your Flux model node.

4. Load your input image and apply the appropriate preprocessing (e.g., Canny edge detection).

5. Connect the preprocessed image to the ControlNet node and adjust settings as needed.
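
In API terms, the ControlNet hookup described above extends the earlier Flux Dev sketch roughly as follows. The node names are ComfyUI's built-in Canny, ControlNet loader, and ControlNet apply nodes; the reference image, strength, and thresholds are assumptions, and some Flux ControlNets may additionally expect a VAE input on the apply node.

```python
workflow.update({
    "10": {"class_type": "LoadImage",                              # input image from ComfyUI/input/
           "inputs": {"image": "reference.png"}},                  # assumed filename
    "11": {"class_type": "Canny",                                  # preprocessing: Canny edge detection
           "inputs": {"image": ["10", 0], "low_threshold": 0.4, "high_threshold": 0.8}},
    "12": {"class_type": "ControlNetLoader",
           "inputs": {"control_net_name": "instantx_flux_canny.safetensors"}},
    "13": {"class_type": "ControlNetApplyAdvanced",                # conditions both prompts on the edges
           "inputs": {"positive": ["2", 0], "negative": ["3", 0],
                      "control_net": ["12", 0], "image": ["11", 0],
                      "strength": 0.7, "start_percent": 0.0, "end_percent": 1.0}},
})

# Route the sampler through the ControlNet-conditioned prompts.
workflow["5"]["inputs"]["positive"] = ["13", 0]
workflow["5"]["inputs"]["negative"] = ["13", 1]
```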

## Tips for Using Flux and Other Recent Models

1. **Memory Management**: For lower memory usage, use the FP8 versions of models or set the `weight_dtype` to FP8 in the "Load Diffusion Model" node (see the sketch after this list)[1][5].

2. **Prompt Engineering**: Flux models excel in visual quality and image detail, particularly for text generation and complex compositions. Experiment with detailed prompts to leverage these strengths[2].

3. **Workflow Optimization**: Use the ComfyUI Manager to easily install and manage custom nodes, which can enhance your Flux workflows[3].

4. **Upscaling**: Use the Ultimate SD Upscale node for high-quality upscaling of Flux outputs[3].

5. **Face Detailing**: For SDXL workflows, consider using the Face Detailer (DDetailer) node from the Impact Pack to improve facial details in generated images[3].
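
As a hedged illustration of tip 1, the "Load Diffusion Model" (`UNETLoader`) route lets you request FP8 weights at load time. The CLIP and VAE filenames below follow the placement section above and are assumptions — match them to the files you actually downloaded.

```python
# Loader nodes for the UNET route; these would replace the checkpoint
# loader's MODEL/CLIP/VAE outputs in the earlier workflow, e.g.
#   workflow["5"]["inputs"]["model"] = ["20", 0]
low_memory_loaders = {
    "20": {"class_type": "UNETLoader",
           "inputs": {"unet_name": "flux1-dev.safetensors",
                      "weight_dtype": "fp8_e4m3fn"}},          # FP8 weights to reduce VRAM use
    "21": {"class_type": "DualCLIPLoader",
           "inputs": {"clip_name1": "t5xxl_fp16.safetensors",
                      "clip_name2": "clip_l.safetensors",
                      "type": "flux"}},
    "22": {"class_type": "VAELoader",
           "inputs": {"vae_name": "flux_vae.safetensors"}},
}
```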

By following these guidelines and experimenting with different workflows, you'll be able to harness the power of Flux and other recent models in ComfyUI effectively. Remember to always check for the latest updates and community-shared workflows to stay current with the rapidly evolving field of AI image generation.

Citations:
[1] https://www.youtube.com/watch?v=cjWuPcRZ1j0
[2] https://comfyui-wiki.com/en/tutorial/advanced/flux1-comfyui-guide-workflow-and-examples
[3] https://stable-diffusion-art.com/comfyui/
[4] https://comfyui-wiki.com/en/interface/files
[5] https://comfyanonymous.github.io/ComfyUI_examples/flux/
[6] https://stable-diffusion-art.com/flux-comfyui/
[7] https://stablediffusion3.net/blog-how-to-use-flux-comfyui-tutorial-46551
[8] https://www.youtube.com/watch?v=RMI8WviOjIk
[9] https://www.youtube.com/watch?v=5sF5Dn5Rul8
4 changes: 4 additions & 0 deletions get_started/gettingstarted.mdx
@@ -21,3 +21,7 @@ style={{ width: "100%", borderRadius: "0.5rem" }}></iframe>
1. In the `Load Checkpoint` node, select the checkpoint file you just downloaded.

1. Click `Queue Prompt` and watch your image being generated. Play around with the prompts to generate different images.


## [Setting Up ComfyUI for Flux and Other Models](../advanced/get_started_with_Flux.mdx)