Add xformers for ROCm support #16727

Open · Looong01 wants to merge 1 commit into dev

Conversation

Looong01

Description

Add xformers for ROCm support

Checklist:

w-e-w changed the base branch from master to dev · December 17, 2024 07:44
@@ -399,7 +399,14 @@ def prepare_environment():
         startup_timer.record("install open_clip")
 
     if (not is_installed("xformers") or args.reinstall_xformers) and args.xformers:
-        run_pip(f"install -U -I --no-deps {xformers_package}", "xformers")
+        try:
w-e-w (Collaborator) commented Dec 17, 2024

I'm not familiar with ROCm or AMD GPUs, so this comment is based on knowledge of the code base rather than ROCm specifics.


Similar to how TORCH_COMMAND is handled in webui.sh:

gpu_info=$(lspci 2>/dev/null | grep -E "VGA|Display")
case "$gpu_info" in
    *"Navi 1"*)
        export HSA_OVERRIDE_GFX_VERSION=10.3.0
        if [[ -z "${TORCH_COMMAND}" ]]
        then
            pyv="$(${python_cmd} -c 'import sys; print(f"{sys.version_info[0]}.{sys.version_info[1]:02d}")')"
            # Using an old nightly compiled against rocm 5.2 for Navi1, see https://github.com/pytorch/pytorch/issues/106728#issuecomment-1749511711
            if [[ $pyv == "3.8" ]]
            then
                export TORCH_COMMAND="pip install https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp38-cp38-linux_x86_64.whl https://download.pytorch.org/whl/nightly/rocm5.2/torchvision-0.15.0.dev20230209%2Brocm5.2-cp38-cp38-linux_x86_64.whl"
            elif [[ $pyv == "3.9" ]]
            then
                export TORCH_COMMAND="pip install https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp39-cp39-linux_x86_64.whl https://download.pytorch.org/whl/nightly/rocm5.2/torchvision-0.15.0.dev20230209%2Brocm5.2-cp39-cp39-linux_x86_64.whl"
            elif [[ $pyv == "3.10" ]]
            then
                export TORCH_COMMAND="pip install https://download.pytorch.org/whl/nightly/rocm5.2/torch-2.0.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl https://download.pytorch.org/whl/nightly/rocm5.2/torchvision-0.15.0.dev20230209%2Brocm5.2-cp310-cp310-linux_x86_64.whl"
            else
                printf "\e[1m\e[31mERROR: RX 5000 series GPUs python version must be between 3.8 and 3.10, aborting...\e[0m"
                exit 1
            fi
        fi
        ;;
    *"Navi 2"*) export HSA_OVERRIDE_GFX_VERSION=10.3.0
        ;;
    *"Navi 3"*) [[ -z "${TORCH_COMMAND}" ]] && \
        export TORCH_COMMAND="pip install torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.7"
        ;;
    *"Renoir"*) export HSA_OVERRIDE_GFX_VERSION=9.0.0
        printf "\n%s\n" "${delimiter}"
        printf "Experimental support for Renoir: make sure to have at least 4GB of VRAM and 10GB of RAM or enable cpu mode: --use-cpu all --no-half"
        printf "\n%s\n" "${delimiter}"
        ;;
    *)
        ;;
esac
if ! echo "$gpu_info" | grep -q "NVIDIA";
then
    if echo "$gpu_info" | grep -q "AMD" && [[ -z "${TORCH_COMMAND}" ]]
    then
        export TORCH_COMMAND="pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7"
    elif npu-smi info 2>/dev/null
    then
        export TORCH_COMMAND="pip install torch==2.1.0 torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu; pip install torch_npu==2.1.0"
    fi
fi

It feels like this should be done in webui.sh via the XFORMERS_PACKAGE environment variable: when it is not already configured, try running rocminfo and set the package accordingly.

A potential issue with the current approach is that it might break for people with multiple GPUs.
People can have an Nvidia and an AMD GPU at the same time.
They could have configured environment variables to use a certain GPU, which may not be the one you expect, so the rocminfo test could pass even though they're actually running on the Nvidia card.
Also, if people have already configured XFORMERS_PACKAGE, your code would ignore their configuration whenever the rocminfo check passes.
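
For illustration, a minimal sketch of that webui.sh approach which also addresses the concerns above, assuming $gpu_info is populated as in the script quoted earlier; the AMD/NVIDIA gating and the xformers==0.0.28 spec are illustrative assumptions, not the PR's actual logic:

# Hedged sketch only: pick a ROCm xformers build in webui.sh, but respect an
# existing XFORMERS_PACKAGE and only act when the detected GPU is AMD rather
# than whenever rocminfo happens to succeed.
if [[ -z "${XFORMERS_PACKAGE}" ]] \
   && echo "$gpu_info" | grep -q "AMD" \
   && ! echo "$gpu_info" | grep -q "NVIDIA" \
   && rocminfo >/dev/null 2>&1
then
    # Placeholder spec: substitute whatever ROCm xformers wheel the PR targets.
    export XFORMERS_PACKAGE="xformers==0.0.28"
fi

Since launch_utils.py already reads XFORMERS_PACKAGE from the environment, a guard like this would presumably leave user configuration untouched and avoid selecting ROCm on machines where an Nvidia GPU is the active device.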

Soulreaver90 commented Dec 22, 2024

So I went ahead and tried it and can see xformers loaded, but I don't see any speed improvement at all. Is there anything else that needs to be done, or is ROCm support still being worked on? I am on the latest ROCm and PyTorch.

Edit: I downgraded my PyTorch to the ROCm 6.1 build, since that is what xformers 0.0.28 was built against. I realized that no matter what I did, even after adding --xformers, my cross-attention would still default to Doggettx. I added --force-enable-xformers and now see xformers in the cross-attention dropdown. However, when selecting it and generating anything, I get errors thrown left and right.
Has anyone confirmed that xformers for ROCm works with A1111? Is it an xformers issue or an A1111 one?
