Using the stableSR extension zluda mode will not work #441

Open
4 of 6 tasks
pinea00 opened this issue Apr 15, 2024 · 2 comments
Assignees: lshqqytiger
Labels: bug (Something isn't working) · from extension · zluda (About ZLUDA)

Comments

pinea00 commented Apr 15, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Using A1111 v1.7 with DirectML works fine, but A1111 v1.8 with --use-zluda only produces a completely black picture.
How can I fix this? Thank you.

Steps to reproduce the problem

1. Install the StableSR extension for A1111 (https://github.com/pkuliyi2015/sd-webui-stablesr).
2. Start A1111 v1.8 and follow the guide steps to generate an image.
3. I tried disabling FP8 mode, but it didn't change anything.

What should have happened?

A normal upscaled image should be generated; instead it only produces a completely black picture.

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2024-04-15-09-34.json

Console logs

venv "venv\Scripts\Python.exe"
WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
fatal: No names found, cannot describe anything.
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr  5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: 1.8.0-RC
Commit hash: 5a423d8a59fffc6d5cb2f50149ac6b09a1aaf482
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
S:\stable-diffusion-webui-zluda\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --opt-sdp-attention --disable-nan-check --theme dark --api --autolaunch
ONNX: selected=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
ControlNet preprocessor location: S:\stable-diffusion-webui-zluda\extensions\sd-webui-controlnet\annotator\downloads
2024-04-15 17:04:48,926 - ControlNet - INFO - ControlNet v1.1.443
2024-04-15 17:04:49,091 - ControlNet - INFO - ControlNet v1.1.443
watermark logo: S:\stable-diffusion-webui-zluda\extensions\sd-webui-facefusion\watermark.png
[-] sd-webui-facefusion initialized. FaceFusion 2.1.2
Loading weights [dde3b17c05] from S:\stable-diffusion-webui-directml\models\Stable-diffusion\aZovyaPhotoreal_v2FP16.safetensors
Creating model from config: S:\stable-diffusion-webui-zluda\configs\v1-inference.yaml
Applying attention optimization: sdp... done.
Model loaded in 5.9s (load weights from disk: 0.3s, create model: 0.3s, apply weights to model: 4.0s, apply fp8: 0.6s, load textual inversion embeddings: 0.2s, calculate empty prompt: 0.5s).
2024-04-15 17:05:02,194 - ControlNet - INFO - ControlNet UI callback registered.
S:\stable-diffusion-webui-zluda\extensions\infinite-zoom-automatic1111-webui\iz_helpers\ui.py:253: GradioDeprecationWarning: The `style` method is deprecated. Please set these arguments in the constructor instead.
  output_video = gr.Video(label="Output").style(width=512, height=512)
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 24.1s (prepare environment: 15.9s, initialize shared: 2.9s, load scripts: 8.0s, create ui: 6.6s, gradio launch: 0.2s, add APIs: 0.4s).
Reusing loaded model aZovyaPhotoreal_v2FP16.safetensors [dde3b17c05] to load chilloutmix_NiPrunedFp16Fix.safetensors [59ffe2243a]
Applying attention optimization: sdp... done.
Reusing loaded model dreamshaperXL_v21TurboDPMSDE.safetensors [4496b36d48] to load v2-1_768-ema-pruned.ckpt [ad2a33c361]Loading weights [ad2a33c361] from S:\stable-diffusion-webui-directml\models\Stable-diffusion\v2-1_768-ema-pruned.ckpt
Creating model from config: S:\stable-diffusion-webui-zluda\repositories\stable-diffusion-stability-ai\configs\stable-diffusion\v2-inference-v.yaml
Applying attention optimization: sdp... done.
Model loaded in 7.7s (find config: 3.6s, create model: 0.2s, apply weights to model: 2.9s, apply fp8: 0.7s, calculate empty prompt: 0.1s).
Loading VAE weights specified in settings: S:\stable-diffusion-webui-directml\models\VAE\vqgan_cfw_00011_vae_only.ckpt
Applying attention optimization: sdp... done.
VAE weights loaded.
[StableSR] Target image size: 768x768
[Tiled Diffusion] ControlNet found, support is enabled.
[Tiled Diffusion] StableSR found, support is enabled.
[Tiled Diffusion] ControlNet found, support is enabled.
[Tiled Diffusion] StableSR found, support is enabled.
[Demo Fusion] ControlNet found, support is enabled.
[Demo Fusion] StableSR found, support is enabled.
Initial prompt:1 girl, wearing a pink bridal outfit, standing under the cherry blossom tree, NIKON RAW
Translated prompt:1 girl, wearing a pink bridal outfit, standing under the cherry blossom tree, NIKON RAW, spend time:0.0
Initial negative prompt:Blurry, low resolution, unrealistic
MultiDiffusion Sampling:   0%|                                                                                                       | 0/20 [00:00<?, ?it/s]MultiDiffusion hooked into 'Euler a' sampler, Tile size: 64x64, Tile count: 4, Batch size: 4, Tile batches: 1 (ext: ContrlNet, StableSR)
 (ext: ContrlNet, StableSR)
[Tiled VAE]: the input size is tiny and unnecessary to tile.
### Encoding Real Image ###
### Phase 1 Denoising ###
Tile size: 96, Tile count: 1, Batch size: 1, Tile batches: 1, Global batch size: 1, Global batches: 1
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:27<00:00,  5.51s/it]
### Phase 2 Denoising ###█████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:27<00:00,  5.48s/it]
Tile size: 96, Tile count: 16, Batch size: 4, Tile batches: 4, Global batch size: 4, Global batches: 1

Additional information

The webui for this report was launched with --use-zluda.

@lshqqytiger lshqqytiger added bug Something isn't working zluda About ZLUDA from extension labels Apr 15, 2024
@lshqqytiger lshqqytiger self-assigned this Apr 25, 2024

CS1o commented Apr 26, 2024

Hey, I've tested this.
It works on stable-diffusion-webui-directml version 1.9.3-amd with ZLUDA 3.7.

The issue you have is that you added --disable-nan-check to webui-user.bat (this makes output images black).
You may have added it because loading the suggested 2.1-ema-pruned checkpoint raises a NaN-exception error that suggests adding --no-half or --disable-nan-check.
When adding --no-half, the 2.1 model loads successfully, but it consumes much more VRAM and generation can be very slow depending on the input image resolution.

Solution:
Remove --disable-nan-check
Add: --no-half
Reduce the MultiDiffusion batch size to 1 or 2

Also, a side note: --opt-sdp-attention doesn't help performance with ZLUDA right now; instead it makes upscaling awfully slow.
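Putting the suggested flags together, a webui-user.bat along these lines should avoid the black output (a sketch, not the official template; keep whatever other arguments you already use, such as --theme or --api):

```bat
@echo off
set PYTHON=
set GIT=
set VENV_DIR=
rem --no-half keeps the SD2.1 UNet in fp32 so activations cannot overflow to NaN.
rem --disable-nan-check is deliberately NOT set, so any real NaN surfaces as an
rem error instead of silently becoming a black image.
set COMMANDLINE_ARGS=--use-zluda --no-half --autolaunch
call webui.bat
```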


pinea00 commented Apr 28, 2024

Thanks, I have solved it. I found that the SD-Turbo version works normally even with --disable-nan-check
(it does not use SD2.1 for StableSR). A1111 v1.9.0.

On an RX6600 with --use-zluda, without --disable-nan-check the error message says:
Traceback (most recent call last):
  File "S:\stablediffusion-webui\modules\call_queue.py", line 57, in f
    res = list(func(*args, **kwargs))
  File "S:\stablediffusion-webui\modules\call_queue.py", line 36, in f
    res = func(*args, **kwargs)
  File "S:\stablediffusion-webui\modules\img2img.py", line 230, in img2img
    processed = modules.scripts.scripts_img2img.run(p, *args)
  File "S:\stablediffusion-webui\modules\scripts.py", line 773, in run
    processed = script.run(p, *script_args)
  File "S:\stablediffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 248, in run
    result: Processed = processing.process_images(p)
  File "S:\stablediffusion-webui\modules\processing.py", line 847, in process_images
    res = process_images_inner(p)
  File "S:\stablediffusion-webui\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 59, in processing_process_images_hijack
    return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
  File "S:\stablediffusion-webui\modules\processing.py", line 1075, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "S:\stablediffusion-webui\extensions\sd-webui-stablesr\scripts\stablesr.py", line 223, in sample_custom
    samples = sampler.sample(p, x, conditioning, unconditional_conditioning, image_conditioning=p.image_conditioning)
  File "S:\stablediffusion-webui\modules\sd_samplers_kdiffusion.py", line 222, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "S:\stablediffusion-webui\modules\sd_samplers_common.py", line 272, in launch_sampling
    return func()
  File "S:\stablediffusion-webui\modules\sd_samplers_kdiffusion.py", line 222, in
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "S:\stablediffusion-webui\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "S:\stablediffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "S:\stablediffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "S:\stablediffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "S:\stablediffusion-webui\modules\sd_samplers_cfg_denoiser.py", line 269, in forward
    devices.test_for_nans(x_out, "unet")
  File "S:\stablediffusion-webui\modules\devices.py", line 271, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
