Install and Run on AMD GPUs
Windows+AMD support has not officially been made for webui, but you can install lshqqytiger's fork of webui that uses DirectML.
Training currently doesn't work, but a variety of features/extensions do, such as LoRA and ControlNet. Report issues at https://github.com/lshqqytiger/stable-diffusion-webui-directml/issues
- Install Python 3.10.6 (ticking "Add to PATH") and git
- Paste this line into cmd/terminal:
git clone https://github.com/lshqqytiger/stable-diffusion-webui-directml && cd stable-diffusion-webui-directml && git submodule init && git submodule update
(You can move the program folder somewhere else afterwards.)
- Double-click webui-user.bat
- If installation or startup looks stuck, press Enter in the terminal and it should continue.
- In webui-user.bat, set:
COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check
- You can add --autolaunch to automatically open the URL for you.
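For reference, these arguments go on the COMMANDLINE_ARGS line of webui-user.bat in the program folder; a minimal sketch of that file (combining the flags above with --autolaunch is an assumption, pick the flags that fit your card):

```bat
@echo off

set PYTHON=
set GIT=
set VENV_DIR=
set COMMANDLINE_ARGS=--opt-sub-quad-attention --lowvram --disable-nan-check --autolaunch

call webui.bat
```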
(The sections below are installation guides for Linux with ROCm.)
(As of January 15, 2023 you can just run webui-user.sh and PyTorch with ROCm should be installed automatically for you.)
- Install Python 3.10.6
- git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
- Place a Stable Diffusion checkpoint (model.ckpt) in the models/Stable-diffusion directory
- For many AMD GPUs you MUST add --precision full --no-half to COMMANDLINE_ARGS= in webui-user.sh to avoid black squares or crashing.*
- Run webui.sh
*Certain cards like the Radeon RX 6000 Series and the RX 500 Series will function normally without --precision full --no-half, saving plenty of VRAM.
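In webui-user.sh the flags go on the commented-out COMMANDLINE_ARGS line; a minimal sketch of the relevant excerpt (only add these flags if your card needs them):

```shell
# webui-user.sh (excerpt): uncomment and edit the existing line
export COMMANDLINE_ARGS="--precision full --no-half"
```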
Execute the following:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
python -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip wheel
# You may not need --precision full; dropping --no-half, however, crashes my drivers
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' python launch.py --precision full --no-half
On subsequent runs you will only need to execute:
cd stable-diffusion-webui
# Optional: "git pull" to update the repository
source venv/bin/activate
# You may not need --precision full; dropping --no-half, however, crashes my drivers
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' python launch.py --precision full --no-half
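launch.py reads the TORCH_COMMAND environment variable to decide how to install torch, and the VAR=value prefix sets that variable only for the one command that follows it. A minimal sketch of the pattern (the pip command here is illustrative):

```shell
# The VAR=value prefix exports TORCH_COMMAND only into the child
# process's environment; the surrounding shell is unaffected.
TORCH_COMMAND='pip install torch torchvision' \
  python3 -c 'import os; print(os.environ["TORCH_COMMAND"])'
# prints: pip install torch torchvision
```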
The first generation after starting the WebUI might take very long, and you might see a message similar to this:
MIOpen(HIP): Warning [SQLiteBase] Missing system database file: gfx1030_40.kdb Performance may degrade. Please follow instructions to install: https://github.com/ROCmSoftwarePlatform/MIOpen#installing-miopen-kernels-package
The next generations should work with regular performance. You can follow the link in the message, and if you happen to use the same operating system, follow the steps there to fix this issue. If there is no clear way to compile or install the MIOpen kernels for your operating system, consider following the "Running inside Docker" guide below.
Pull the latest rocm/pytorch Docker image, start it, and attach to the container (taken from the rocm/pytorch documentation):
docker run -it --network=host --device=/dev/kfd --device=/dev/dri --group-add=video --ipc=host --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $HOME/dockerx:/dockerx rocm/pytorch
Execute the following inside the container:
cd /dockerx
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
cd stable-diffusion-webui
python -m venv venv
source venv/bin/activate
python -m pip install --upgrade pip wheel
# You may not need --precision full; dropping --no-half, however, crashes my drivers
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' REQS_FILE='requirements.txt' python launch.py --precision full --no-half
Following runs will only require you to restart the container, attach to it again, and execute the following inside the container.
Find the container name from this listing: docker container ls --all
Select the one matching the rocm/pytorch image and restart it: docker container restart <container-id>
Then attach to it: docker exec -it <container-id> bash
cd /dockerx/stable-diffusion-webui
# Optional: "git pull" to update the repository
source venv/bin/activate
# You may not need --precision full; dropping --no-half, however, crashes my drivers
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' REQS_FILE='requirements.txt' python launch.py --precision full --no-half
The /dockerx folder inside the container should be accessible in your home directory under the same name.
If the web UI becomes incompatible with the pre-installed Python 3.7 version inside the Docker image, here are instructions on how to update it (assuming you have successfully followed "Running inside Docker"):
Execute the following inside the container:
apt install python3.9-full # Confirm every prompt
update-alternatives --install /usr/local/bin/python python /usr/bin/python3.9 1
echo 'PATH=/usr/local/bin:$PATH' >> ~/.bashrc
Then restart the container and attach again. If you check python --version it should now say Python 3.9.5 or newer.
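The PATH line above works because the shell resolves command names by searching PATH entries left to right, so a directory prepended to PATH wins the lookup. A small sketch of that mechanism (the mypython name and /tmp path are purely illustrative):

```shell
# Create a throwaway bin directory with a stand-in command.
mkdir -p /tmp/demo_bin
printf '#!/bin/sh\necho new-python\n' > /tmp/demo_bin/mypython
chmod +x /tmp/demo_bin/mypython
# Prepending the directory to PATH makes its entry win the search.
env PATH="/tmp/demo_bin:$PATH" mypython
# prints: new-python
```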
Run rm -rf /dockerx/stable-diffusion-webui/venv inside the container and then follow the steps in "Running inside Docker" again, skipping the git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui and using the modified launch command below instead:
TORCH_COMMAND='pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/rocm5.1.1' python launch.py --precision full --no-half
You may not need --precision full; dropping --no-half, however, may not work for everyone. Certain cards like the Radeon RX 6000 Series and the RX 500 Series will function normally without --precision full --no-half, saving plenty of VRAM.
Always use this new launch command from now on, including when restarting the web UI in later runs.
Install webui on Arch Linux with Arch-specific packages
and possibly other Arch-based Linux distributions (tested February 22, 2023)
- Start with required dependencies and install pip:
sudo pacman -S python-pip
- Install pytorch with ROCm backend
The Arch [Community] repository offers two pytorch packages, python-pytorch-rocm and python-pytorch-opt-rocm. For CPUs with AVX2 instruction set support, that is, CPU microarchitectures beyond Haswell (Intel, 2013) or Excavator (AMD, 2015), install python-pytorch-opt-rocm to benefit from performance optimizations. Otherwise install python-pytorch-rocm:
# Install either one:
sudo pacman -S python-pytorch-rocm
sudo pacman -S python-pytorch-opt-rocm # AVX2 CPUs only
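If you are unsure whether your CPU supports AVX2, the flag is listed in /proc/cpuinfo on Linux; a quick check that prints which package to pick:

```shell
# Pick the pytorch package based on the CPU's avx2 feature flag.
if grep -qm1 avx2 /proc/cpuinfo; then
  echo "python-pytorch-opt-rocm"
else
  echo "python-pytorch-rocm"
fi
```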
- Install torchvision with ROCm backend
The python-torchvision-rocm package is located in the AUR. Clone the git repository and compile the package on your machine:
git clone https://aur.archlinux.org/python-torchvision-rocm.git
cd python-torchvision-rocm
makepkg -si
Confirm all steps until Pacman finishes installing python-torchvision-rocm.
Alternatively, install the python-torchvision-rocm package with an AUR helper.
- Manually create a venv environment with system site-packages (this allows access to the system pytorch and torchvision). Install the remaining Python dependencies:
python -m venv venv --system-site-packages
source venv/bin/activate
pip install -r requirements.txt
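To confirm the venv was created with system site-packages enabled (so the system pytorch and torchvision are visible inside it), you can inspect the pyvenv.cfg the venv module writes; a quick sketch using a throwaway /tmp path:

```shell
# Create a venv with system site-packages and verify the setting stuck.
python3 -m venv /tmp/demo-venv --system-site-packages
grep include-system-site-packages /tmp/demo-venv/pyvenv.cfg
# prints: include-system-site-packages = true
```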
- Create webui launch script
The Python launcher for webui needs to be run directly. In the project folder, create a new file called webui-py.sh
and paste the following code:
#!/bin/bash
python launch.py # add arguments here
Depending on the GPU model, you may need to add certain Command Line Arguments and Optimizations for webui to run properly. Also refer to the Automatic Installation section for AMD GPUs.
- Make the script executable:
chmod +x ./webui-py.sh
Then run the following inside the project root to start webui (the first start may take a bit longer):
source venv/bin/activate
./webui-py.sh
- GPU model has to be supported by Arch dependencies
See if your GPU is listed as a build architecture in the PYTORCH_ROCM_ARCH variable for Torchvision and PyTorch. If not, consider building both packages locally or use another installation method.
- Arch dependencies (pytorch, torchvision) are kept up to date by full system updates (pacman -Syu) and recompiling, which may not be desirable when fixed dependency versions are wanted.
This guide has been tested on an AMD Radeon RX 6800 with Python 3.10.9, ROCm 5.4.3, PyTorch 1.13.1, and Torchvision 0.14.1.