🙏Please Add AMD Support🙏 #1117
11 comments · 37 replies
-
AMD is supported but to have the best experience you have to use it on Linux: https://github.com/comfyanonymous/ComfyUI#amd-gpus-linux-only
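For context, the Linux route behind that link amounts to installing a ROCm build of PyTorch and then launching ComfyUI as usual. A minimal sketch; the wheel index URL and ROCm version below are examples from the ROCm 5.x era, so check the readme for whatever it currently recommends:

```shell
# install a ROCm build of PyTorch (index URL/version are examples;
# use the one the ComfyUI readme currently lists)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6

# then launch ComfyUI normally; ROCm devices are exposed
# through PyTorch's CUDA API, so no extra flag is needed
python main.py
```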
-
I'm only getting 2s/it on my 7900 XTX...
-
I picked up a 4090 and have happily been doing outputs at higher resolutions that would OOM on the DirectML backend. It's currently about 30x faster than the 7900 XTX on DirectML was at the same resolution.

One other thing that's really not helping: about a week ago, over 9 months after the card was released, support for the wavefront matrix acceleration instructions on RDNA3 was finally added to the DirectX Agility SDK. I'm reasonably sure DirectML requires that the base DirectX 12 installation (not the Agility SDK) supports something in order to use it, so who knows how long it will be before DirectML can make any kind of use of this acceleration, which is fairly necessary to keep speed from tanking on large matrix sizes.

It's a race of the tortoises: Microsoft, who ported DirectML to PyTorch as a sort of afterthought since it's meant to be a native API, versus the combination of AMD getting MIOpen done plus PyTorch integrating Windows ROCm into builds. MIOpen's Windows support merge thread has disappeared, so I'm hoping that's a good sign and they're in testing right now; I'd like to keep the 7900 XTX around and use both cards for this sort of thing, and supposedly ROCm can pass things off to CUDA when needed.

Someone at AMD responded on the Windows port thread for MIOpen and said the issue with the old MIOpen.dll for their renderer is that it uses the OpenCL backend for ROCm, which is much slower than HIP in current versions. That makes sense, since OpenCL C code can't properly represent complex types like tensors, and the compiler needs that information from the frontend to generate optimal GPU machine code without doing painfully expensive analysis that can still fail. They were probably just outputting it as a series of vector operations that weren't necessarily padded optimally. For the renderer that doesn't matter as much; it isn't calling the library's tensor routines as heavily as torch does.

If I were them I would have released it anyway, since it still contains the functionality, to let the PyTorch team do a port in tandem with their own work on it (a DLL swap would then be enough to gain the speed advantages), but whatever.
-
I have a system with both an AMD and an RTX GPU. ComfyUI is already installed and running fine for my RTX card; I'm wondering what steps I need to take to get my AMD card running. After following the directions in the ComfyUI readme I get this:

Prestartup times for custom nodes:
Traceback (most recent call last):
D:\ComfyUI_windows_portable>pause

Step by step please.
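For reference, the Windows route for AMD cards in the ComfyUI readme at the time was DirectML: install the torch-directml package into the portable build's embedded Python and launch with the --directml flag. A minimal sketch, assuming the standard portable layout with its python_embeded directory:

```shell
# from inside the ComfyUI_windows_portable directory,
# install the DirectML backend into the embedded Python
python_embeded\python.exe -m pip install torch-directml

# then launch ComfyUI on the DirectML device
python_embeded\python.exe -s ComfyUI\main.py --directml
```

If the embedded pip step fails, that failure is usually what shows up in the traceback right after "Prestartup times for custom nodes", so it's worth posting the full error text.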
-
I don't understand why PyTorch doesn't support ROCm on Windows; they only support ROCm on Linux. AMD's ROCm for Windows is up to 5.7 now... I don't get it, because I even have it installed. I'm just lost for words, really. I'm not a programmer or a dev, so I don't understand, but people say AMD needs to pull their finger out. From what I see, they have; it's PyTorch that hasn't.
-
I have ROCm installed on Linux with Krita AI Diffusion and ComfyUI, but I only get a drop-down option for "run on CPU" or "run on NVIDIA" with an AMD 7900 XTX. AMD support would be nice, thanks. Or please add detailed instructions on how to point the local server or ComfyUI at the files from the ROCm SDK.

Linux link for ROCm: https://github.com/RadeonOpenCompute/ROCm
Windows link for ROCm (HIP SDK): https://www.amd.com/en/developer/resources/rocm-hub/hip-sdk.html
-
OMG OMG this is actually happening!!! https://ryzenai.docs.amd.com/en/latest/index.html Aaaand it requires Python 3.9...
-
Well, I had to reinstall Windows, and I reinstalled Python 3.11 (which is 100% what I had before, because the install directory was still on my D:)... However, I did see this appear: pytorch-triton-rocm... and it's installable on Windows. What is this? Are my hopes too high?
-
stable-diffusion.cpp has a hipBLAS build using ROCm; is this an option? https://github.com/leejet/stable-diffusion.cpp/blob/master/docs%2FhipBLAS_on_Windows.md
-
It seems like EXPERT programmers really CAN'T make a .exe with 4-5 installation options. NO, after 10k hours of coding, they can't make an installer. It's 2000s technology, but somehow they can't... ALWAYS the same thing. If you have AMD, pray to God for everything to work, because there is no explanation. Simply ONE LINE of instructions: "run this"... Okay, where? How? After and before what? If you all made cars, you'd need an engineering degree to be able to drive them. Doing this, you shrink your potential audience from 30% to 0.01% of the planet.
-
ComfyUI developers, please make ComfyUI run smoothly on AMD graphics cards. To be honest, NVIDIA is too expensive for the performance it offers, while AMD offers high performance at affordable prices. So please don't cater only to NVIDIA graphics cards; make it work well for AMD too.
-
Hi, I've been using Stable Diffusion for a while now and have always enjoyed making artwork and images. A while back I got into training AI models when DreamBooth first came out as an extension to Auto 1111, but my GPU never had enough VRAM to do many of the things I wanted, so I decided to upgrade my computer with a Radeon 7900. It has a massive 20GB of VRAM and was an amazing switch for all my gaming and LLM activities. But it couldn't have been a worse mistake when it came to AI image generation: many features such as training are disabled on Auto 1111, and ComfyUI fully lacks support for it. I understand that many people in the AI image generation world have an NVIDIA GPU or use a cloud service such as Clipdrop, but if there is any way to add AMD support to your todo list, it would be greatly appreciated.