
Input X contains infinity - Error when converting quantized SD-Turbo #325

Closed
indoflaven opened this issue Apr 8, 2024 · 3 comments
@indoflaven

Hello - I seem to be getting the same or similar error.

python -m python_coreml_stable_diffusion.torch2coreml --convert-unet --convert-text-encoder --convert-vae-decoder --convert-safety-checker --model-version stabilityai/sd-turbo -o output --quantize-nbits 6 --attention-implementation SPLIT_EINSUM_V2

Fails with this error:

INFO:__main__:Converted safety_checker
INFO:__main__:Quantizing weights to 6-bit precision
INFO:__main__:Quantizing text_encoder to 6-bit precision
INFO:__main__:Quantizing text_encoder
Running compression pass palettize_weights:   3%|██                                                                            | 10/373 [00:06<03:43,  1.63 ops/s]
Traceback (most recent call last):
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/Users/michaelhein/Documents/GitHub/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1524, in <module>
    main(args)
  File "/Users/michaelhein/Documents/GitHub/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 1369, in main
    quantize_weights(args)
  File "/Users/michaelhein/Documents/GitHub/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 147, in quantize_weights
    _quantize_weights(
  File "/Users/michaelhein/Documents/GitHub/ml-stable-diffusion/python_coreml_stable_diffusion/torch2coreml.py", line 183, in _quantize_weights
    model = ct.optimize.coreml.palettize_weights(mlmodel, config=config).save(out_path)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_post_training_quantization.py", line 268, in palettize_weights
    return _apply_graph_pass(mlmodel, weight_palettizer)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_post_training_quantization.py", line 72, in _apply_graph_pass
    graph_pass.apply(prog)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_quantization_passes.py", line 117, in apply
    apply_block(f)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/converters/mil/mil/passes/helper.py", line 60, in wrapper
    return func(*args)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_quantization_passes.py", line 114, in apply_block
    self.transform_op(op)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_quantization_passes.py", line 591, in transform_op
    lut_params = self.compress(
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_quantization_passes.py", line 559, in compress
    lut, indices = palettize_weights._get_lut_and_indices(val, mode, nbits, lut_function)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_quantization_passes.py", line 516, in _get_lut_and_indices
    lut, indices = compress_kmeans(val, nbits)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/optimize/coreml/_quantization_passes.py", line 476, in compress_kmeans
    lut, indices = _get_kmeans_lookup_table_and_weight(nbits, val)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/coremltools/models/neural_network/quantization_utils.py", line 424, in _get_kmeans_lookup_table_and_weight
    kmeans = KMeans(
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/base.py", line 1152, in wrapper
    return fit_method(estimator, *args, **kwargs)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/cluster/_kmeans.py", line 1475, in fit
    X = self._validate_data(
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/base.py", line 605, in _validate_data
    out = check_array(X, input_name="X", **check_params)
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/utils/validation.py", line 957, in check_array
    _assert_all_finite(
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/utils/validation.py", line 122, in _assert_all_finite
    _assert_all_finite_element_wise(
  File "/opt/miniconda3/envs/coreml_stable_diffusion/lib/python3.8/site-packages/sklearn/utils/validation.py", line 171, in _assert_all_finite_element_wise
    raise ValueError(msg_err)
ValueError: Input X contains infinity or a value too large for dtype('float64').

Originally posted by @indoflaven in #246 (comment)
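For context on the traceback: the ValueError is raised by scikit-learn's input validation when the weight tensor handed to KMeans during palettization contains non-finite values. A minimal sketch of that check (the `has_nonfinite` helper is hypothetical, not part of the repo; the float16 overflow shown is one plausible way a converted checkpoint ends up with infinite weights):

```python
import numpy as np

def has_nonfinite(weights):
    """Return True if a weight array contains inf or NaN.

    scikit-learn's KMeans rejects such inputs with the
    'Input X contains infinity' ValueError seen above.
    """
    return not np.isfinite(weights).all()

# float16 overflows to inf above ~65504, so a value that is fine
# in float32 can become infinite after a half-precision cast.
w = np.float32(70000.0).astype(np.float16)
assert has_nonfinite(np.array([w]))
```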

@atiorh
Collaborator
atiorh commented Apr 9, 2024

#316 (comment)

@indoflaven
Author

Downgrading to Transformers version 4.34.1 as suggested in the linked thread fixed the issue. Thanks!

@jaycoolslm
@indoflaven when I downgrade Transformers I run into this error:
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts. diffusers 0.29.0 requires huggingface-hub>=0.23.2, but you have huggingface-hub 0.17.3 which is incompatible.

This happens because the diffusers package depends on a newer huggingface-hub than the one the transformers downgrade pulls in.
