I'm trying to install the new model, but I always get this error:
*** Error reading webui git info from C:\AI\stable-diffusion
Traceback (most recent call last):
File "C:\AI\stable-diffusion\modules\config_states.py", line 53, in get_webui_config
webui_repo = git.Repo(script_path)
File "C:\AI\stable-diffusion\venv\lib\site-packages\git\repo\base.py", line 265, in __init__
raise InvalidGitRepositoryError(epath)
git.exc.InvalidGitRepositoryError: C:\AI\stable-diffusion
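For reference, the failing check is essentially asking whether the install folder is a git clone. A minimal stdlib sketch of that idea (the helper name `looks_like_git_repo` is mine, not webui code):

```python
from pathlib import Path

def looks_like_git_repo(path: str) -> bool:
    # GitPython raises InvalidGitRepositoryError when it cannot find a
    # .git entry (a directory for normal clones, a file for worktrees).
    git_entry = Path(path) / ".git"
    return git_entry.is_dir() or git_entry.is_file()
```

So the error above likely means C:\AI\stable-diffusion was unpacked from an archive rather than cloned with git; the UI can still run, it just cannot report or update its version.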
Afterwards, when creating the model, I get the following one:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 24.00 MiB. GPU 0 has a total capacty of 6.00 GiB of which 0 bytes is free. Of the allocated memory 16.55 GiB is allocated by PyTorch, and 75.75 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
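The message itself points at PYTORCH_CUDA_ALLOC_CONF. As a hedged sketch (the value 128 is an example, not a webui default), the variable has to be in the environment before torch initializes CUDA:

```python
import os

# Must be set before torch touches CUDA; 128 MB is an assumed example value.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")
```

On a 6 GB card, though, reducing batch size or image resolution, or launching the webui with --medvram or --lowvram, is usually the more effective fix than tuning the allocator.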
Also, at startup, it throws these messages:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
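Those lines are informational rather than fatal: xformers is an optional package. The guard producing them presumably looks something like this sketch (not the actual webui source):

```python
try:
    import xformers  # optional memory-efficient attention backend
    XFORMERS_AVAILABLE = True
except ImportError:
    XFORMERS_AVAILABLE = False
    print("No module 'xformers'. Proceeding without it.")
```

With this webui, xformers can typically be enabled by adding --xformers to COMMANDLINE_ARGS in webui-user.bat, which also tends to reduce VRAM use.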
Regards