
[Bug]: SDXL models do img2img failed for the first time #16099

Open
4 of 6 tasks
WooXinyi opened this issue Jun 27, 2024 · 0 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments


Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

img2img with an SDXL model fails the first time it is run after the server starts, but the error no longer occurs if the SDXL model runs any txt2img generation before img2img.

Steps to reproduce the problem

  1. Start the webui server.
  2. Use an SDXL model to run img2img, without running txt2img first.

What should have happened?

img2img should complete successfully on the first run. Instead, the following error is raised:

A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
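The two mitigations named in the message map to webui launch flags. Assuming a standard AUTOMATIC1111 stable-diffusion-webui install launched via webui.sh, they would be passed like this (a sketch; the "Upcast cross attention layer to float32" toggle lives in the UI under Settings > Stable Diffusion rather than on the command line):

```shell
# Run the model in full precision so fp16 overflow cannot produce NaNs
./webui.sh --no-half

# Or only suppress the NaN check (hides the symptom rather than fixing it)
./webui.sh --disable-nan-check
```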

What browsers do you use to access the UI ?

Google Chrome

Sysinfo

none

Console logs

Traceback (most recent call last):
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/img2img.py", line 232, in img2img
    processed = process_images(p)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/processing.py", line 845, in process_images
    res = process_images_inner(p)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 41, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/processing.py", line 981, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/processing.py", line 1741, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 172, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 172, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/venv_a100/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/venv_a100/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 269, in forward
    devices.test_for_nans(x_out, "unet")
  File "/cmb-ai-data/wuxinyi/stable-diffusion-webui/modules/devices.py", line 255, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
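For reference, the check that raises here (modules/devices.py, test_for_nans) fires only when the entire UNet output tensor is NaN, which points at a precision/overflow problem rather than a few bad values. A minimal standalone sketch of that all-NaN condition (pure Python, not webui's actual implementation):

```python
import math

def is_all_nans(values):
    # Mirrors the idea behind the webui NaN check (simplified sketch):
    # trip only when *every* element is NaN, as happens when an fp16
    # overflow propagates through the whole denoising step.
    return len(values) > 0 and all(math.isnan(v) for v in values)

print(is_all_nans([float("nan")] * 4))   # → True: whole tensor NaN, check fires
print(is_all_nans([0.5, float("nan")]))  # → False: partial NaN does not trip it
```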

Additional information

No response

WooXinyi added the bug-report (Report of a bug, yet to be confirmed) label on Jun 27, 2024