Stable Diffusion WebUI Forge is a platform on top of Stable Diffusion WebUI (based on Gradio) to make development easier, optimize resource management, speed up inference, and study experimental features.
The code is forked from lllyasviel's Forge; you can find more details there. It is tweaked based on stable-diffusion-webui-directml, which natively supports ZLUDA on AMD.
If you want to know what changed between them: the only files touched are cmd_args.py, shared_init.py, and launch_utils.py, plus the added zluda.py. All of those changes came from lshqqytiger (Seunghoon Lee), so credit goes to lshqqytiger and lllyasviel.
Update
lshqqytiger has started a new fork of Forge, available at https://github.com/lshqqytiger/stable-diffusion-webui-amdgpu-forge. This fork provides better support and is the recommended one to use.
Initially, the purpose of this fork was to make Forge compatible with AMD, using Lee's code, before lshqqytiger's new fork existed. Now that Lee's fork is available, it is highly recommended to use his version for both HIP SDK 5.7 and 6.1.2.
A list of compatible GPUs can be found here. If your GPU is not on the list, you may need to build your own rocBLAS library to use ZLUDA, or use a library built by others (e.g., this one; note: to learn how to build rocBLAS, follow this guide).
A list of prebuilt rocBLAS libraries is also available here (it covers most GPUs).
Note: If you have an integrated GPU (iGPU), you may need to disable it or use the HIP_VISIBLE_DEVICES environment variable. Or, if your iGPU is specifically the AMD 780M APU, download this file.
For example, place rocblas.dll into C:\Program Files\AMD\ROCm\5.7\bin (this folder will appear after installing the HIP SDK in the next step; change the path if you are using a different ROCm version, e.g. C:\Program Files\AMD\ROCm\6.1\bin for 6.1.2), replacing the original one, and replace the library files within rocblas\library. The original library can be renamed to something else, e.g. "origlibrary", in case it is needed later.
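A minimal sketch of these steps from an administrator command prompt, assuming the HIP SDK 5.7 default install path (the extract location of the downloaded files is a placeholder):

```bat
rem Back up the originals, then copy in the replacement rocBLAS files.
cd "C:\Program Files\AMD\ROCm\5.7\bin"
ren rocblas.dll rocblas.dll.orig
ren rocblas\library origlibrary
copy "path\to\downloaded\rocblas.dll" .
xcopy /E /I "path\to\downloaded\library" rocblas\library

rem If an iGPU interferes, expose only the discrete GPU to HIP.
rem The device index is an assumption; it may differ on your system.
set HIP_VISIBLE_DEVICES=1
```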
Download HIP SDK 5.7/6.1.2
Add the HIP SDK's 5.7\bin or 6.1\bin folder and %HIP_PATH%bin to your PATH.
Skip the usual ZLUDA setup steps (Lee's new ZLUDA files have been merged, so this is fully automatic now). Download ZLUDA from here and add the ZLUDA folder and %HIP_PATH%bin to your PATH. (Note: you don't need to rename the ZLUDA files cublas.dll to cublas64_11.dll and cusparse.dll to cusparse64_11.dll and replace the ones in the venv folder, as other tutorials describe, because ZLUDA is already detected and patched in the launch script.)
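For example, for the current session (permanent entries are better added via System Properties > Environment Variables; C:\ZLUDA is an assumed extract location for the ZLUDA download):

```bat
rem Session-only PATH setup; the HIP SDK installer normally sets HIP_PATH itself.
set HIP_PATH=C:\Program Files\AMD\ROCm\5.7\
set PATH=%PATH%;%HIP_PATH%bin;C:\ZLUDA
```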
git clone https://github.com/likelovewant/stable-diffusion-webui-forge-on-amd.git
Then start by running:
webui.bat --use-zluda
or simply double-click webui-user.bat.
You may also try adding the extra flags
--cuda-stream --pin-shared-memory
to test the speed, as in the sketch below.
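For instance, the flags can go on the COMMANDLINE_ARGS line of webui-user.bat (a standard WebUI mechanism; whether the extra flags help depends on your hardware):

```bat
rem webui-user.bat excerpt: ZLUDA plus the optional speed flags.
set COMMANDLINE_ARGS=--use-zluda --cuda-stream --pin-shared-memory
```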
To update, run:
git pull
If you have edited any of the files, try the following instead:
git stash
git pull origin main
git stash pop
Then apply the changes listed by git stash pop in your code editor (e.g. VS Code).
Close the command prompt and run webui-user.bat again.
To try the SD3.5 branch, run:
git checkout sd35
then click webui-user.bat to start the program. If you want to switch back to the main branch, run:
git checkout main
Tested to work on a limited set of SD3.5 large fp8 models with Euler and SGM Uniform, e.g. SD3.5_fp8_SS_ALL.marduk191; the speed is almost the same as Flux. If you hit a VRAM or RAM overload after a recent update, run
git checkout 2a855b3d9554fa0a2ca7bd59820890063bd9bde9
to switch back to the last working commit. Make sure to switch back to main before updating.
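For example, to return to the main branch before updating:

```bat
git checkout main
git pull
```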
Note: some old cards, e.g. the RX 580 (gfx803), need PyTorch downgraded to 2.2.1. Run:
python -m venv venv
.\venv\Scripts\activate
pip install torch==2.2.1 torchvision --index-url https://download.pytorch.org/whl/cu118
then click webui-user.bat to start.
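To confirm the downgrade took effect inside the venv, a quick check:

```bat
.\venv\Scripts\activate
python -c "import torch; print(torch.__version__)"
```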
Tip
Flux on ZLUDA: tested to work with the Flux-dev-fp8.safetensors and t5xxl_fp8_e4m3fn.safetensors combination. flux1-dev-Q4_K_S.gguf and flux1-dev-Q6_K.gguf work, and others like Q2, Q3, and Q5 should also work, though possibly slower than the non-quantized .safetensors files.
A list of quantized models in GGUF format is available at FLUX.1-dev-gguf.
A list of quantized T5 v1.1 XXL encoder models in GGUF format is available at t5-v1_1-xxl-encoder-gguf.
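Assuming the usual WebUI model folder layout (an assumption; check your install), the files would be placed like this:

```
models\Stable-diffusion\flux1-dev-Q4_K_S.gguf
models\text_encoder\t5xxl_fp8_e4m3fn.safetensors
```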
To use Flux LoRAs, set Automatic (fp16 LoRA) under Diffusion in Low Bits. This now works for many LoRAs, although some still fail.
If there is not enough VRAM, you may increase your virtual memory or use quantized models like Q4 or smaller.
If you need to build rocBLAS, please get support from this guide.
Forge is currently based on SD-WebUI 1.10.1 at this commit. (Because the original SD-WebUI is almost static now, Forge will sync with it every 90 days, or whenever there are important fixes.)
News are moved to this link: Click here to see the News section
Gradio 4 UI Must Read (TLDR: You need to use RIGHT MOUSE BUTTON to move canvas!)
Flux Tutorial (BitsandBytes Models, NF4, "GPU Weight", "Offload Location", "Offload Method", etc)
Forge Extension List and Extension Replacement List (Temporary)
How to solve "Connection errored out" / "Press anykey to continue ..." / etc
(Save Flux BitsandBytes UNet/Checkpoint)
LayerDiffuse Transparent Image Editing
Tell us what is missing in ControlNet Integrated
(Policy) Soft Advertisement Removal Policy
(Flux BNB NF4 / GGUF Q8_0/Q5_0/Q5_1/Q4_0/Q4_1 are all natively supported with the GPU weight slider, the Queue/Async swap toggle, and the swap location toggle. All Flux BNB NF4 / GGUF Q8_0/Q5_0/Q4_0 have LoRA support.)
Just use this one-click installation package (with git and python included).
>>> Click Here to Download One-Click Package (CUDA 12.1 + Pytorch 2.3.1) <<<
Some other CUDA/Torch Versions:
Forge with CUDA 12.1 + Pytorch 2.3.1 <- Recommended
Forge with CUDA 12.4 + Pytorch 2.4 <- Fastest, but MSVC may be broken, xformers may not work
Forge with CUDA 12.1 + Pytorch 2.1 <- the previously used old environments
After you download, uncompress it, use update.bat to update, and use run.bat to run.
Note that running update.bat is important; otherwise you may be using a previous version with potentially unfixed bugs.
If you are proficient in Git and you want to install Forge as another branch of SD-WebUI, please see here. In this way, you can reuse all SD checkpoints and all extensions you installed previously in your OG SD-WebUI, but you should know what you are doing.
If you know what you are doing, you can also install Forge using the same method as SD-WebUI. (Install Git and Python, git clone the Forge repo https://github.com/lllyasviel/stable-diffusion-webui-forge.git, and then run webui-user.bat.)
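Concretely, the manual route (assuming Git and Python are already on PATH) is just:

```bat
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
cd stable-diffusion-webui-forge
webui-user.bat
```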
You can download previous versions here.
Based on manual test one-by-one:
Component | Status | Last Test |
---|---|---|
Basic Diffusion | Normal | 2024 Aug 26 |
GPU Memory Management System | Normal | 2024 Aug 26 |
LoRAs | Normal | 2024 Aug 26 |
All Preprocessors | Normal | 2024 Aug 26 |
All ControlNets | Normal | 2024 Aug 26 |
All IP-Adapters | Normal | 2024 Aug 26 |
All Instant-IDs | Normal | 2024 July 27 |
All Reference-only Methods | Normal | 2024 July 27 |
All Integrated Extensions | Normal | 2024 July 27 |
Popular Extensions (Adetailer, etc) | Normal | 2024 July 27 |
Gradio 4 UIs | Normal | 2024 July 27 |
Gradio 4 Forge Canvas | Normal | 2024 Aug 26 |
LoRA/Checkpoint Selection UI for Gradio 4 | Normal | 2024 July 27 |
Photopea/OpenposeEditor/etc for ControlNet | Normal | 2024 July 27 |
Wacom 128 level touch pressure support for Canvas | Normal | 2024 July 15 |
Microsoft Surface touch pressure support for Canvas | Broken, pending fix | 2024 July 29 |
ControlNets (Union) | Not implemented yet, pending implementation | 2024 Aug 26 |
ControlNets (Flux) | Not implemented yet, pending implementation | 2024 Aug 26 |
API endpoints (txt2img, img2img, etc) | Normal, but pending improved Flux support | 2024 Aug 29 |
OFT LoRAs | Broken, pending fix | 2024 Sep 9 |
Feel free to open an issue if anything is broken; I will take a look every several days. If I do not update this "Forge Status", it means I cannot reproduce the problem. In that case, a fresh re-install should help in most cases.
Below is a self-contained, single-file implementation of FreeU V2. See also extensions-builtin/sd_forge_freeu/scripts/forge_freeu.py:
```python
import torch
import gradio as gr

from modules import scripts


def Fourier_filter(x, threshold, scale):
    # FFT
    x_freq = torch.fft.fftn(x.float(), dim=(-2, -1))
    x_freq = torch.fft.fftshift(x_freq, dim=(-2, -1))

    B, C, H, W = x_freq.shape
    mask = torch.ones((B, C, H, W), device=x.device)

    crow, ccol = H // 2, W // 2
    mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
    x_freq = x_freq * mask

    # IFFT
    x_freq = torch.fft.ifftshift(x_freq, dim=(-2, -1))
    x_filtered = torch.fft.ifftn(x_freq, dim=(-2, -1)).real

    return x_filtered.to(x.dtype)


def patch_freeu_v2(unet_patcher, b1, b2, s1, s2):
    model_channels = unet_patcher.model.diffusion_model.config["model_channels"]
    scale_dict = {model_channels * 4: (b1, s1), model_channels * 2: (b2, s2)}
    on_cpu_devices = {}

    def output_block_patch(h, hsp, transformer_options):
        scale = scale_dict.get(h.shape[1], None)
        if scale is not None:
            hidden_mean = h.mean(1).unsqueeze(1)
            B = hidden_mean.shape[0]
            hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)

            h[:, :h.shape[1] // 2] = h[:, :h.shape[1] // 2] * ((scale[0] - 1) * hidden_mean + 1)

            if hsp.device not in on_cpu_devices:
                try:
                    hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])
                except:
                    print("Device", hsp.device, "does not support the torch.fft.")
                    on_cpu_devices[hsp.device] = True
                    hsp = Fourier_filter(hsp.cpu(), threshold=1, scale=scale[1]).to(hsp.device)
            else:
                hsp = Fourier_filter(hsp.cpu(), threshold=1, scale=scale[1]).to(hsp.device)

        return h, hsp

    m = unet_patcher.clone()
    m.set_model_output_block_patch(output_block_patch)
    return m


class FreeUForForge(scripts.Script):
    sorting_priority = 12  # It will be the 12th item on UI.

    def title(self):
        return "FreeU Integrated"

    def show(self, is_img2img):
        # make this extension visible in both txt2img and img2img tab.
        return scripts.AlwaysVisible

    def ui(self, *args, **kwargs):
        with gr.Accordion(open=False, label=self.title()):
            freeu_enabled = gr.Checkbox(label='Enabled', value=False)
            freeu_b1 = gr.Slider(label='B1', minimum=0, maximum=2, step=0.01, value=1.01)
            freeu_b2 = gr.Slider(label='B2', minimum=0, maximum=2, step=0.01, value=1.02)
            freeu_s1 = gr.Slider(label='S1', minimum=0, maximum=4, step=0.01, value=0.99)
            freeu_s2 = gr.Slider(label='S2', minimum=0, maximum=4, step=0.01, value=0.95)

        return freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2

    def process_before_every_sampling(self, p, *script_args, **kwargs):
        # This will be called before every sampling.
        # If you use highres fix, this will be called twice.

        freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2 = script_args

        if not freeu_enabled:
            return

        unet = p.sd_model.forge_objects.unet
        unet = patch_freeu_v2(unet, freeu_b1, freeu_b2, freeu_s1, freeu_s2)
        p.sd_model.forge_objects.unet = unet

        # Below codes will add some logs to the texts below the image outputs on UI.
        # The extra_generation_params does not influence results.
        p.extra_generation_params.update(dict(
            freeu_enabled=freeu_enabled,
            freeu_b1=freeu_b1,
            freeu_b2=freeu_b2,
            freeu_s1=freeu_s1,
            freeu_s2=freeu_s2,
        ))

        return
```
See also Forge's Unet Implementation.
WebUI Forge is currently under construction, and docs / UI / functionality may change with updates.