ValueError: expected non-negative integer #14986
Unanswered · bulutharbeli asked this question in Q&A
Replies: 1 comment
-
Well, I ran into the same issue. It's caused by the 'DPM++ 2M SDE' sampling method combined with seed = -1. When the seed is set to -1, a random value is generated, which may be negative, positive, or zero, but DPM++ 2M SDE requires a non-negative seed. Setting the seed to a non-negative value solves the problem.
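The failure can be reproduced outside the webui. Per the traceback, torchsde's BrownianInterval ultimately feeds the sampler seed into numpy's SeedSequence as `entropy`, which must be a non-negative integer. A minimal sketch with plain numpy (no webui or torchsde code involved):

```python
import numpy as np

# torchsde's BrownianInterval passes the sampler seed to numpy's
# SeedSequence as `entropy`, which must be a non-negative integer.
try:
    np.random.SeedSequence(entropy=-1)  # a negative seed...
except ValueError as err:
    print(err)  # ...raises: expected non-negative integer

# Any non-negative seed is accepted and gives a deterministic state:
ss = np.random.SeedSequence(entropy=12345)
print(ss.generate_state(1))
```

This is why a fixed non-negative seed avoids the error while -1 (or a negative variation seed) can trigger it.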
-
Hello everyone,
Any help would be appreciated.
I:\AI\stable-diffusion-webui>set PYTHON=C:Python310\python.exe
I:\AI\stable-diffusion-webui>set GIT=
I:\AI\stable-diffusion-webui>set VENV_DIR=
I:\AI\stable-diffusion-webui>set COMMANDLINE_ARGS= --api --port 30000 --cors-allow-origins=https://www.painthua.com --disable-safe-unpickle --allow-code --autolaunch --update-check --theme dark --deepdanbooru --lowvram --precision full --no-half --skip-torch-cuda-test --api --xformers
I:\AI\stable-diffusion-webui>git pull
Already up to date.
I:\AI\stable-diffusion-webui>call webui.bat
venv "I:\AI\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: v1.7.0
Commit hash: cf2772f
CUDA 11.8
You are up to date with the most recent release.
Launching Web UI with arguments: --api --port 30000 --cors-allow-origins=https://www.painthua.com --disable-safe-unpickle --allow-code --autolaunch --update-check --theme dark --deepdanbooru --lowvram --precision full --no-half --skip-torch-cuda-test --api --xformers
Moving sd_xl_base_1.0_0.9vae.safetensors from I:\AI\stable-diffusion-webui\models to I:\AI\stable-diffusion-webui\models\Stable-diffusion.
Moving sd_xl_refiner_1.0_0.9vae.safetensors from I:\AI\stable-diffusion-webui\models to I:\AI\stable-diffusion-webui\models\Stable-diffusion.
[-] ADetailer initialized. version: 24.1.2, num models: 9
[AddNet] Updating model hashes...
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 542.91it/s]
[AddNet] Updating model hashes...
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 17/17 [00:00<00:00, 1700.08it/s]
ControlNet preprocessor location: I:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\annotator\downloads
2024-02-21 15:58:09,672 - ControlNet - INFO - ControlNet v1.1.440
2024-02-21 15:58:11,024 - ControlNet - INFO - ControlNet v1.1.440
15:58:17 - ReActor - STATUS - Running v0.7.0-a2 on Device: CUDA
Loading weights [f99f3dec38] from I:\AI\stable-diffusion-webui\models\Stable-diffusion\realisticStockPhoto_v20.safetensors
2024-02-21 15:58:18,482 - AnimateDiff - INFO - Injecting LCM to UI.
2024-02-21 15:58:21,866 - AnimateDiff - INFO - Hacking i2i-batch.
2024-02-21 15:58:21,956 - ControlNet - INFO - ControlNet UI callback registered.
Creating model from config: I:\AI\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:30000
To create a public link, set share=True in launch().
Startup time: 88.4s (prepare environment: 38.0s, import torch: 10.7s, import gradio: 2.0s, setup paths: 4.7s, import ldm: 0.1s, initialize shared: 1.9s, other imports: 5.3s, setup codeformer: 0.7s, setup gfpgan: 0.2s, list SD models: 1.3s, load scripts: 15.3s, initialize extra networks: 0.6s, create ui: 5.4s, gradio launch: 2.1s, add APIs: 0.3s).
Applying attention optimization: xformers... done.
Model loaded in 96.6s (load weights from disk: 4.9s, create model: 1.4s, apply weights to model: 79.8s, apply float(): 3.2s, move model to device: 1.3s, load textual inversion embeddings: 0.5s, calculate empty prompt: 5.3s).
*** Error completing request
*** Arguments: ('task(o04volb860mlq86)', 'cinematic film still phone photo of A young Cambodian woman standing and looking at the viewer with sad eyes, wearing a black dress with messy hair. Abandoned, destroyed old house at the background, realistic, low angle, high detail, dark lighting, volumetric, godrays, beautiful, trending on artstation,, shallow depth of field, vignette, highly detailed, high budget Hollywood film, cinemascope, moody, epic, gorgeous, lora:InstantPhotoX3:1 lora:add-detail-xl:1', 'ark, night, monochromatic, washed out, cropped, worst quality, low quality, poorly drawn, low resolution, painting, (abstract:1.3), camera, anime, cartoon, sketch, 3d, render, illustration, surrealism, ugly, gritty, simple, plain, unrealistic, impressionistic, simplistic, minimalism, bright, sunny, light, vibrant, colorful, anime, cartoon, graphic, text, painting, crayon, graphite, abstract, glitch, blur, bokeh', [], 20, 'DPM++ 2M SDE', 1, 1, 7, 1024, 1024, True, 0.64, 2, '4x_NMKD-Superscale-SP_178000_G', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', ['Model hash: f99f3dec38'], <gradio.routes.Request object at 0x000001F4B5D79960>, 0, False, '', 0.8, -3, False, -1, 0, 0, 0, True, False, False, False, 'base', False, False, {'ad_model': 'face_yolov8n.pt', 'ad_prompt': 'perfect eyes', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 
'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 32, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Euler a', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, False, False, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, 'LoRA', 'None', 0, 0, None, 'Refresh models', <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001F4B5B34F40>, UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, 
resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=512, threshold_a=64, threshold_b=64, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), None, False, '0', '0', 'inswapper_128.onnx', 'CodeFormer', 1, True, 'None', 1, 1, False, True, 1, 0, 0, False, 0.5, True, False, 'CUDA', False, 0, 'None', '', None, False, False, 0.5, 0, 'from modules.processing import process_images\n\np.width = 768\np.height = 768\np.batch_size = 2\np.steps = 10\n\nreturn process_images(p)', 2, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "I:\AI\stable-diffusion-webui\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "I:\AI\stable-diffusion-webui\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "I:\AI\stable-diffusion-webui\modules\txt2img.py", line 55, in txt2img
processed = processing.process_images(p)
File "I:\AI\stable-diffusion-webui\modules\processing.py", line 734, in process_images
res = process_images_inner(p)
File "I:\AI\stable-diffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 41, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "I:\AI\stable-diffusion-webui\modules\processing.py", line 868, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "I:\AI\stable-diffusion-webui\modules\processing.py", line 1142, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "I:\AI\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 220, in sample
noise_sampler = self.create_noise_sampler(x, sigmas, p)
File "I:\AI\stable-diffusion-webui\modules\sd_samplers_common.py", line 331, in create_noise_sampler
return BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=current_iter_seeds)
File "I:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 110, in __init__
self.tree = BatchedBrownianTree(x, t0, t1, seed)
File "I:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 80, in __init__
self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
File "I:\AI\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 80, in <listcomp>
self.trees = [torchsde.BrownianTree(t0, w0, t1, entropy=s, **kwargs) for s in seed]
File "I:\AI\stable-diffusion-webui\venv\lib\site-packages\torchsde\_brownian\derived.py", line 155, in __init__
self._interval = brownian_interval.BrownianInterval(t0=t0,
File "I:\AI\stable-diffusion-webui\venv\lib\site-packages\torchsde\_brownian\brownian_interval.py", line 551, in __init__
generator = np.random.SeedSequence(entropy=entropy, pool_size=pool_size)
File "bit_generator.pyx", line 307, in numpy.random.bit_generator.SeedSequence.__init__
File "bit_generator.pyx", line 381, in numpy.random.bit_generator.SeedSequence._get_assembled_entropy
File "bit_generator.pyx", line 138, in numpy.random.bit_generator._coerce_to_uint32_array
File "bit_generator.pyx", line 68, in numpy.random.bit_generator._int_to_uint32_array
ValueError: expected non-negative integer