
Issue downloading Stable Diffusion 3.5 Medium #366

Open
mihok opened this issue Oct 30, 2024 · 4 comments

Comments


mihok commented Oct 30, 2024

This model was released yesterday, so I imagine it may have something to do with that?

Running the following command:

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --chunk-unet --convert-text-encoder --convert-vae-encoder --convert-vae-decoder --convert-safety-checker --model-version stabilityai/stable-diffusion-3.5-medium -o models/

After downloading all of the necessary model files, the conversion fails with the following error:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/mihok/Development/src/github.com/apple/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1738, in <module>
    main(args)
  File "/Users/mihok/Development/src/github.com/apple/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1485, in main
    pipe = get_pipeline(args)
           ^^^^^^^^^^^^^^^^^^
  File "/Users/mihok/Development/src/github.com/apple/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1470, in get_pipeline
    pipe = DiffusionPipeline.from_pretrained(model_version,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
    loaded_sub_model = load_sub_model(
                       ^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/models/modeling_utils.py", line 747, in from_pretrained
    unexpected_keys = load_model_dict_into_meta(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/models/model_loading_utils.py", line 154, in load_model_dict_into_meta
    raise ValueError(
ValueError: Cannot load /Users/mihok/.cache/huggingface/hub/models--stabilityai--stable-diffusion-3.5-medium/snapshots/4ab6c3331a7591f128a21e617f0d9d3fc7e06e42/transformer because transformer_blocks.0.norm1.linear.bias expected shape tensor(..., device='meta', size=(9216,)), but got torch.Size([13824]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
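A hedged aside on the two sizes in the error (assumptions: the SD 3.5 medium transformer uses hidden size 1536, which both 9216/6 and 13824/9 suggest, and diffusers' AdaLayerNormZero emits shift/scale/gate chunks per sub-layer): an older diffusers build allocates 6 modulation chunks per block, while the SD 3.5 checkpoint's dual-attention blocks store 9, hence the mismatch:

```python
# Sketch of where 9216 vs 13824 comes from (assumption: hidden size 1536;
# chunk counts follow diffusers' shift/scale/gate modulation convention).
hidden = 1536

# Older diffusers: 6 modulation chunks (shift/scale/gate for attn and mlp).
old_bias = 6 * hidden   # 9216 -> the shape the library expects

# SD 3.5 dual-attention blocks: 3 extra chunks for the second attention path.
new_bias = 9 * hidden   # 13824 -> the shape stored in the checkpoint

print(old_bias, new_bias)
```

If this reading is right, the `ignore_mismatched_sizes=True` workaround in the error message would only replace the mismatched weights with random ones; a diffusers version that knows the new config keys is the actual fix.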
mihok (Author) commented Oct 30, 2024

Full console output:

Torch version 2.5.0 has not been tested with coremltools. You may run into unexpected errors. Torch 2.4.0 is the most recent version that has been tested.
Fail to import BlobReader from libmilstoragepython. No module named 'coremltools.libmilstoragepython'
Failed to load _MLModelProxy: No module named 'coremltools.libcoremlpython'
Fail to import BlobWriter from libmilstoragepython. No module named 'coremltools.libmilstoragepython'
INFO:__main__:Initializing DiffusionPipeline with stabilityai/stable-diffusion-3.5-medium..

A mixture of fp16 and non-fp16 filenames will be loaded.
Loaded fp16 filenames:
[text_encoder_2/model.fp16.safetensors, text_encoder_3/model.fp16-00002-of-00002.safetensors, text_encoder_3/model.fp16-00001-of-00002.safetensors, text_encoder/model.fp16.safetensors]
Loaded non-fp16 filenames:
[vae copy/diffusion_pytorch_model.safetensors, vae/diffusion_pytorch_model.safetensors, transformer/diffusion_pytorch_model.safetensors]
If this behavior is not expected, please check your folder structure.
Keyword arguments {'use_auth_token': True} are not expected by StableDiffusion3Pipeline and will be ignored.
Loading pipeline components...:   0%|                                                            | 0/9 [00:00<?, ?it/s]The config attributes {'dual_attention_layers': [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12], 'qk_norm': 'rms_norm'} were passed to SD3Transformer2DModel, but are not expected and will be ignored. Please verify your config.json configuration file.
Loading pipeline components...:   0%|                                                            | 0/9 [00:05<?, ?it/s]
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/Users/mihok/Development/src/github.com/apple/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1738, in <module>
    main(args)
  File "/Users/mihok/Development/src/github.com/apple/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1485, in main
    pipe = get_pipeline(args)
           ^^^^^^^^^^^^^^^^^^
  File "/Users/mihok/Development/src/github.com/apple/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1470, in get_pipeline
    pipe = DiffusionPipeline.from_pretrained(model_version,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/pipelines/pipeline_utils.py", line 876, in from_pretrained
    loaded_sub_model = load_sub_model(
                       ^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/pipelines/pipeline_loading_utils.py", line 700, in load_sub_model
    loaded_sub_model = load_method(os.path.join(cached_folder, name), **loading_kwargs)
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/models/modeling_utils.py", line 747, in from_pretrained
    unexpected_keys = load_model_dict_into_meta(
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/miniconda3/lib/python3.12/site-packages/diffusers/models/model_loading_utils.py", line 154, in load_model_dict_into_meta
    raise ValueError(
ValueError: Cannot load /Users/mihok/.cache/huggingface/hub/models--stabilityai--stable-diffusion-3.5-medium/snapshots/4ab6c3331a7591f128a21e617f0d9d3fc7e06e42/transformer because transformer_blocks.0.norm1.linear.bias expected shape tensor(..., device='meta', size=(9216,)), but got torch.Size([13824]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.


aidand-canva commented Nov 2, 2024

Yeah, I'm getting the same error, but calling diffusers directly (on Amazon Linux boxes):

import torch
from diffusers import StableDiffusion3Pipeline

pipe = StableDiffusion3Pipeline.from_pretrained("stabilityai/stable-diffusion-3.5-medium", torch_dtype=torch.bfloat16)
pipe = pipe.to("cuda")

ValueError: Cannot load /home/coder/.cache/huggingface/hub/models--stabilityai--stable-diffusion-3.5-medium/snapshots/b940f670f0eda2d07fbb75229e779da1ad11eb80/transformer because transformer_blocks.0.norm1.linear.bias expected shape tensor(..., device='meta', size=(9216,)), but got torch.Size([13824]). If you want to instead overwrite randomly initialized weights, please make sure to pass both `low_cpu_mem_usage=False` and `ignore_mismatched_sizes=True`. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.

Might not be an issue specific to this repo.

@fibrous-tendencies

I was able to get the quantized version running after updating diffusers to the latest version.


JingZz7 commented Nov 15, 2024

Upgrade to the latest version of the 🧨 diffusers library:

pip install -U diffusers

This solved my problem.
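For anyone hitting this, a small sketch to check whether the installed diffusers predates SD 3.5 support before upgrading (assumption: support landed around diffusers 0.31.0; the exact minimum version is worth confirming against the diffusers release notes):

```python
from importlib.metadata import version, PackageNotFoundError

def predates(ver: str, minimum=(0, 31, 0)) -> bool:
    # Crude numeric compare; good enough for plain x.y.z version strings.
    parts = tuple(int(p) for p in ver.split(".")[:3])
    return parts < minimum

try:
    installed = version("diffusers")
    if predates(installed):
        print(f"diffusers {installed} is likely too old for SD 3.5; "
              "try: pip install -U diffusers")
    else:
        print(f"diffusers {installed} should recognize the SD 3.5 config keys")
except PackageNotFoundError:
    print("diffusers is not installed")
```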
