
How to use ZLUDA with stable-diffusion-webui-directml #396

Open

yacinesh opened this issue Feb 25, 2024 · 23 comments
Labels
enhancement New feature or request question Further information is requested zluda About ZLUDA

Comments

@yacinesh

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

I tried to use ZLUDA with this web UI, but it didn't work.

Steps to reproduce the problem

/

What should have happened?

/

What browsers do you use to access the UI ?

No response

Sysinfo

/

Console logs

/

Additional information

No response

@yacinesh
Author

@lshqqytiger can you help me here, please?

@lshqqytiger
Owner

"Didn't work" is not enough. Please describe your problem and attach the full log.

@yacinesh
Author

I'm getting this error when I run with COMMANDLINE_ARGS= --use-zluda --debug --autolaunch
(I've removed the venv folder):
Installing torch and torchvision
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting torch==2.2.0
Downloading https://download.pytorch.org/whl/cu121/torch-2.2.0%2Bcu121-cp310-cp310-win_amd64.whl (2454.8 MB)
---------------------------------------- 2.5/2.5 GB 853.5 kB/s eta 0:00:00
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ProtocolError('Connection aborted.', RemoteDisconnected('Remote end closed connection without response'))': /simple/torchvision/
Collecting torchvision==0.17.0
Downloading https://download.pytorch.org/whl/cu121/torchvision-0.17.0%2Bcu121-cp310-cp310-win_amd64.whl (5.7 MB)
---------------------------------------- 5.7/5.7 MB 2.2 MB/s eta 0:00:00
Collecting filelock
Using cached filelock-3.13.1-py3-none-any.whl (11 kB)
Collecting typing-extensions>=4.8.0
Downloading typing_extensions-4.10.0-py3-none-any.whl (33 kB)
Collecting networkx
Downloading https://download.pytorch.org/whl/networkx-3.2.1-py3-none-any.whl (1.6 MB)
---------------------------------------- 1.6/1.6 MB 2.2 MB/s eta 0:00:00
Collecting jinja2
Downloading Jinja2-3.1.3-py3-none-any.whl (133 kB)
---------------------------------------- 133.2/133.2 kB 1.3 MB/s eta 0:00:00
Collecting fsspec
Downloading fsspec-2024.2.0-py3-none-any.whl (170 kB)
---------------------------------------- 170.9/170.9 kB 2.6 MB/s eta 0:00:00
Collecting sympy
Using cached https://download.pytorch.org/whl/sympy-1.12-py3-none-any.whl (5.7 MB)
Collecting pillow!=8.3.*,>=5.3.0
Downloading https://download.pytorch.org/whl/pillow-10.2.0-cp310-cp310-win_amd64.whl (2.6 MB)
---------------------------------------- 2.6/2.6 MB 2.2 MB/s eta 0:00:00
Collecting numpy
Downloading numpy-1.26.4-cp310-cp310-win_amd64.whl (15.8 MB)
---------------------------------------- 15.8/15.8 MB 1.6 MB/s eta 0:00:00
Collecting requests
Using cached requests-2.31.0-py3-none-any.whl (62 kB)
Collecting MarkupSafe>=2.0
Downloading MarkupSafe-2.1.5-cp310-cp310-win_amd64.whl (17 kB)
Collecting certifi>=2017.4.17
Downloading certifi-2024.2.2-py3-none-any.whl (163 kB)
---------------------------------------- 163.8/163.8 kB 1.6 MB/s eta 0:00:00
Collecting urllib3<3,>=1.21.1
Downloading urllib3-2.2.1-py3-none-any.whl (121 kB)
---------------------------------------- 121.1/121.1 kB 1.8 MB/s eta 0:00:00
Collecting charset-normalizer<4,>=2
Using cached charset_normalizer-3.3.2-cp310-cp310-win_amd64.whl (100 kB)
Collecting idna<4,>=2.5
Using cached idna-3.6-py3-none-any.whl (61 kB)
Collecting mpmath>=0.19
Using cached https://download.pytorch.org/whl/mpmath-1.3.0-py3-none-any.whl (536 kB)
Installing collected packages: mpmath, urllib3, typing-extensions, sympy, pillow, numpy, networkx, MarkupSafe, idna, fsspec, filelock, charset-normalizer, certifi, requests, jinja2, torch, torchvision
Successfully installed MarkupSafe-2.1.5 certifi-2024.2.2 charset-normalizer-3.3.2 filelock-3.13.1 fsspec-2024.2.0 idna-3.6 jinja2-3.1.3 mpmath-1.3.0 networkx-3.2.1 numpy-1.26.4 pillow-10.2.0 requests-2.31.0 sympy-1.12 torch-2.2.0+cu121 torchvision-0.17.0+cu121 typing-extensions-4.10.0 urllib3-2.2.1
WARNING: There was an error checking the latest version of pip.
Installing clip
Installing open_clip
Installing requirements for CodeFormer
Installing requirements
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
usage: launch.py [-h] [--update-all-extensions] [--skip-python-version-check] [--skip-torch-cuda-test]
[--reinstall-xformers] [--reinstall-torch] [--update-check] [--test-server] [--log-startup]
[--skip-prepare-environment] [--skip-install] [--skip-ort] [--dump-sysinfo] [--loglevel LOGLEVEL]
[--do-not-download-clip] [--data-dir DATA_DIR] [--config CONFIG] [--ckpt CKPT] [--ckpt-dir CKPT_DIR]
[--vae-dir VAE_DIR] [--gfpgan-dir GFPGAN_DIR] [--gfpgan-model GFPGAN_MODEL] [--no-half]
[--no-half-vae] [--no-progressbar-hiding] [--max-batch-count MAX_BATCH_COUNT]
[--embeddings-dir EMBEDDINGS_DIR] [--textual-inversion-templates-dir TEXTUAL_INVERSION_TEMPLATES_DIR]
[--hypernetwork-dir HYPERNETWORK_DIR] [--localizations-dir LOCALIZATIONS_DIR] [--allow-code]
[--medvram] [--medvram-sdxl] [--lowvram] [--lowram] [--always-batch-cond-uncond] [--unload-gfpgan]
[--precision {full,autocast}] [--upcast-sampling] [--share] [--ngrok NGROK]
[--ngrok-region NGROK_REGION] [--ngrok-options NGROK_OPTIONS] [--enable-insecure-extension-access]
[--codeformer-models-path CODEFORMER_MODELS_PATH] [--gfpgan-models-path GFPGAN_MODELS_PATH]
[--esrgan-models-path ESRGAN_MODELS_PATH] [--bsrgan-models-path BSRGAN_MODELS_PATH]
[--realesrgan-models-path REALESRGAN_MODELS_PATH] [--clip-models-path CLIP_MODELS_PATH] [--xformers]
[--force-enable-xformers] [--xformers-flash-attention] [--deepdanbooru] [--opt-split-attention]
[--opt-sub-quad-attention] [--sub-quad-q-chunk-size SUB_QUAD_Q_CHUNK_SIZE]
[--sub-quad-kv-chunk-size SUB_QUAD_KV_CHUNK_SIZE]
[--sub-quad-chunk-threshold SUB_QUAD_CHUNK_THRESHOLD] [--opt-split-attention-invokeai]
[--opt-split-attention-v1] [--opt-sdp-attention] [--opt-sdp-no-mem-attention]
[--disable-opt-split-attention] [--disable-nan-check] [--use-cpu USE_CPU [USE_CPU ...]]
[--use-cpu-torch] [--use-directml] [--use-ipex] [--disable-model-loading-ram-optimization] [--listen]
[--port PORT] [--show-negative-prompt] [--ui-config-file UI_CONFIG_FILE] [--hide-ui-dir-config]
[--freeze-settings] [--ui-settings-file UI_SETTINGS_FILE] [--gradio-debug]
[--gradio-auth GRADIO_AUTH] [--gradio-auth-path GRADIO_AUTH_PATH]
[--gradio-img2img-tool GRADIO_IMG2IMG_TOOL] [--gradio-inpaint-tool GRADIO_INPAINT_TOOL]
[--gradio-allowed-path GRADIO_ALLOWED_PATH] [--opt-channelslast] [--styles-file STYLES_FILE]
[--autolaunch] [--theme THEME] [--use-textbox-seed] [--disable-console-progressbars]
[--enable-console-prompts] [--vae-path VAE_PATH] [--disable-safe-unpickle] [--api]
[--api-auth API_AUTH] [--api-log] [--nowebui] [--ui-debug-mode] [--device-id DEVICE_ID]
[--administrator] [--cors-allow-origins CORS_ALLOW_ORIGINS]
[--cors-allow-origins-regex CORS_ALLOW_ORIGINS_REGEX] [--tls-keyfile TLS_KEYFILE]
[--tls-certfile TLS_CERTFILE] [--disable-tls-verify] [--server-name SERVER_NAME] [--gradio-queue]
[--no-gradio-queue] [--skip-version-check] [--no-hashing] [--no-download-sd-model]
[--subpath SUBPATH] [--add-stop-route] [--api-server-stop] [--timeout-keep-alive TIMEOUT_KEEP_ALIVE]
[--disable-all-extensions] [--disable-extra-extensions] [--skip-load-model-at-start]
[--controlnet-dir CONTROLNET_DIR]
[--controlnet-annotator-models-path CONTROLNET_ANNOTATOR_MODELS_PATH] [--no-half-controlnet]
[--controlnet-preprocessor-cache-size CONTROLNET_PREPROCESSOR_CACHE_SIZE]
[--controlnet-loglevel {DEBUG,INFO,WARNING,ERROR,CRITICAL}] [--controlnet-tracemalloc]
[--ldsr-models-path LDSR_MODELS_PATH] [--lora-dir LORA_DIR]
[--lyco-dir-backcompat LYCO_DIR_BACKCOMPAT] [--scunet-models-path SCUNET_MODELS_PATH]
[--swinir-models-path SWINIR_MODELS_PATH]
launch.py: error: unrecognized arguments: --use-zluda --debug

@yacinesh
Author

I've tried a fresh install, but I get this error:

no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Launching Web UI with arguments:
Style database not found: C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\styles.csv
Traceback (most recent call last):
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\launch.py", line 48, in <module>
main()
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\launch.py", line 44, in main
start()
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\modules\launch_utils.py", line 677, in start
import webui
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\webui.py", line 13, in <module>
initialize.imports()
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\modules\initialize.py", line 34, in imports
shared_init.initialize()
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\modules\shared_init.py", line 58, in initialize
initialize_onnx()
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\modules\onnx_impl\__init__.py", line 233, in initialize
from .pipelines.onnx_stable_diffusion_img2img_pipeline import OnnxStableDiffusionImg2ImgPipeline
File "C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\modules\onnx_impl\pipelines\onnx_stable_diffusion_img2img_pipeline.py", line 8, in <module>
from diffusers.image_processor import VaeImageProcessor, PipelineImageInput
ImportError: cannot import name 'PipelineImageInput' from 'diffusers.image_processor' (C:\a1111\Nouveau dossier (2)\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\image_processor.py)
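(Editor's note: the ImportError above usually means the diffusers package left in the venv is older than what the webui's ONNX pipelines expect, since PipelineImageInput only exists in newer diffusers releases. A minimal sketch for checking the installed version before reinstalling; the helper name and the "0.21.0" minimum are illustrative assumptions, not values from this thread.)

```python
# Check whether the diffusers copy in this venv is new enough for the webui's
# ONNX pipelines. An old copy triggers the PipelineImageInput ImportError above.
from importlib.metadata import version, PackageNotFoundError

def meets_minimum(installed: str, minimum: str) -> bool:
    """Naive version compare: split on dots, compare the first three fields numerically."""
    to_tuple = lambda v: tuple(int(p) for p in v.split(".")[:3])
    return to_tuple(installed) >= to_tuple(minimum)

try:
    installed = version("diffusers")
    # "0.21.0" is a placeholder minimum for illustration only.
    print("diffusers", installed, "ok:", meets_minimum(installed, "0.21.0"))
except PackageNotFoundError:
    print("diffusers is not installed in this venv")
```

If the check fails, removing the venv (as suggested later in this thread) forces the launcher to reinstall a compatible set of requirements.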
Appuyez sur une touche pour continuer... ("Press any key to continue...")

@lshqqytiger
Owner

lshqqytiger commented Feb 26, 2024

I recommend using SD.Next instead of stable-diffusion-webui(-directml) with ZLUDA.
But if you want, follow SD.Next's ZLUDA installation guide where applicable and run stable-diffusion-webui after disabling the PyTorch cuDNN backend.
You should also replace the DLL files in venv/Lib/site-packages/torch/lib with ZLUDA's DLL files, because the launcher/installer of stable-diffusion-webui does not officially support ZLUDA.
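(Editor's note: the DLL replacement described above can be sketched as follows. This is a hypothetical illustration only: the directory paths and the source-to-target file name mapping are assumptions, and the correct target names depend on which CUDA version the installed torch wheel was built against.)

```python
# Hypothetical sketch of the DLL swap described above: copy ZLUDA's DLLs over
# the CUDA DLLs shipped inside the torch wheel. The paths and the mapping below
# are assumptions for illustration, not an exact recipe.
import shutil
from pathlib import Path

ZLUDA_DIR = Path(r"C:\zluda")                        # where ZLUDA was unzipped (assumed)
TORCH_LIB = Path(r"venv\Lib\site-packages\torch\lib")

# source name in the ZLUDA release -> target name expected by torch (assumed)
DLL_MAP = {
    "cublas.dll": "cublas64_11.dll",
    "cusparse.dll": "cusparse64_11.dll",
    "nvrtc.dll": "nvrtc64_112_0.dll",
}

def swap_dlls(zluda_dir: Path, torch_lib: Path, mapping: dict) -> list:
    """Copy each ZLUDA DLL over its torch counterpart; return the names replaced."""
    replaced = []
    for src_name, dst_name in mapping.items():
        src, dst = zluda_dir / src_name, torch_lib / dst_name
        if src.exists() and dst.exists():
            shutil.copyfile(src, dst)
            replaced.append(dst_name)
    return replaced
```

Only files that exist on both sides are touched, so re-running after a torch reinstall is safe; a reinstall will of course restore the original CUDA DLLs and the swap must be repeated.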

@yacinesh
Author

@lshqqytiger
how do I disable the PyTorch cuDNN backend?

@lshqqytiger
Owner

Add this line somewhere; shared_init.py would be an appropriate place.

torch.backends.cudnn.enabled = False
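(Editor's note: a minimal sketch of where such a patch could live, wrapped in a helper so the snippet stays harmless on installs where torch is missing; the function name and guard are illustrative, not part of the webui.)

```python
# Illustrative helper: disable the cuDNN backend before any model loads,
# e.g. near the top of modules/shared_init.py. The try/except guard keeps
# the snippet harmless when torch is not installed.
def disable_cudnn() -> bool:
    """Return True if cuDNN was disabled, False if torch is unavailable."""
    try:
        import torch
    except ImportError:
        return False
    torch.backends.cudnn.enabled = False
    return True
```

Placing the call early matters: cuDNN must be off before the first CUDA (ZLUDA) kernel runs, not after a model is already loaded.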

@lshqqytiger
Owner

#191 (comment)

@lshqqytiger lshqqytiger added enhancement New feature or request question Further information is requested zluda About ZLUDA labels Feb 27, 2024
@yacinesh
Author

Wow, it took just 14s using an SD 1.5 model.
@lshqqytiger does that mean I'm using ZLUDA?

Launching Web UI with arguments: --use-zluda
C:\a1111\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
torch.utils._pytree._register_pytree_node(
ONNX: selected=CUDAExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']
Uninstalling modified gradio for canvas-zoom
Checkpoint realvisxlV40Turbo_v30TurboBakedvae.ckpt [380261d390] not found; loading fallback epicphotogasm_z-inpainting.safetensors
Calculating sha256 for C:\a1111\stable-diffusion-webui-directml\models\Stable-diffusion\epicphotogasm_z-inpainting.safetensors: Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 9.9s (prepare environment: 10.7s, initialize shared: 1.6s, load scripts: 4.6s, create ui: 0.6s, gradio launch: 0.3s).
eb35e334316ec93e94619a0dfa81b2d726668adb092042fd0be0800dca8fd838
Loading weights [eb35e33431] from C:\a1111\stable-diffusion-webui-directml\models\Stable-diffusion\epicphotogasm_z-inpainting.safetensors
Creating model from config: C:\a1111\stable-diffusion-webui-directml\configs\v1-inpainting-inference.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 6.9s (calculate hash: 3.2s, load weights from disk: 0.2s, create model: 0.4s, apply weights to model: 1.9s, load textual inversion embeddings: 0.4s, calculate empty prompt: 0.5s).
Reusing loaded model epicphotogasm_z-inpainting.safetensors [eb35e33431] to load realisticVisionV60B1_v60B1VAE.safetensors
Calculating sha256 for C:\a1111\stable-diffusion-webui-directml\models\Stable-diffusion\realisticVisionV60B1_v60B1VAE.safetensors: e590cd1534b8bfd34ef2a665b108d5a0c351e2befee798316cbf688d01991db4
Loading weights [e590cd1534] from C:\a1111\stable-diffusion-webui-directml\models\Stable-diffusion\realisticVisionV60B1_v60B1VAE.safetensors
Creating model from config: C:\a1111\stable-diffusion-webui-directml\configs\v1-inference.yaml
Applying attention optimization: Doggettx... done.
Model loaded in 3.8s (create model: 0.3s, apply weights to model: 3.2s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:14<00:00, 1.39it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00, 1.46it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 20/20 [00:13<00:00, 1.50it/s]

@lshqqytiger
Owner

Maybe? You can use the system info extension to be sure.

@yacinesh
Author

@lshqqytiger how do I do that?

@lshqqytiger
Owner

https://github.com/vladmandic/sd-extension-system-info
You can see the device name using the system info extension. You are on ZLUDA if the device name ends with [ZLUDA].
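(Editor's note: that check can be expressed as a tiny helper. ZLUDA reports the GPU through the CUDA APIs with "[ZLUDA]" appended to the device name, so a suffix check on the name returned by torch.cuda.get_device_name(0) tells you which backend is active; the helper name is illustrative.)

```python
# Suffix check on the reported CUDA device name: ZLUDA appends "[ZLUDA]".
def is_zluda(device_name: str) -> bool:
    """True if the reported device name indicates the ZLUDA backend."""
    return device_name.strip().endswith("[ZLUDA]")

# Typical usage inside a CUDA-enabled install (not run here):
#   import torch
#   print(is_zluda(torch.cuda.get_device_name(0)))
```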

@yacinesh
Author

@lshqqytiger Thank you! Is there any argument to add besides --use-zluda?
[screenshot]

@lshqqytiger
Owner

No. Most of the arguments have been moved to the settings.

@yacinesh
Author

@lshqqytiger I hope you keep supporting this project. Will you?

@lshqqytiger
Owner

I will. But it is hard to add advanced features to the A1111 webui. I recommend SD.Next for anyone who wants active development, faster generation speed, and more features.

@giangminh

giangminh commented Mar 11, 2024

@lshqqytiger Hello, I've installed it, but I got some errors. Can you help me, please? I'm currently using an AMD GPU.
[screenshot]

@lshqqytiger
Owner

Download and unzip ZLUDA v3.5-win from my fork, and add it to Path.
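(Editor's note: for the "add it to Path" step, a sketch of doing it for the current process only; a permanent change would instead go through Windows' environment variable settings. The ZLUDA directory path shown in the comment is an assumption for illustration.)

```python
# Prepend a directory to the PATH-style variable for the current process.
# Example (assumed location): prepend_to_path(r"C:\zluda")
import os

def prepend_to_path(directory, env=None):
    """Prepend a directory to PATH in env (default: os.environ); return the new value."""
    env = os.environ if env is None else env
    current = env.get("PATH", "")
    env["PATH"] = directory + os.pathsep + current if current else directory
    return env["PATH"]
```

This only affects the running process and its children, which is why launcher scripts usually set PATH before starting Python rather than after.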

@DOUBLEXX666

I'm trying webui-amdgpu with ZLUDA; it launches fine, but it uses my CPU instead of my GPU.

@lshqqytiger
Owner

lshqqytiger commented Jul 12, 2024

I'm trying webui-amdgpu with ZLUDA; it launches fine, but it uses my CPU instead of my GPU.

Need console log.

@DOUBLEXX666

I'm trying webui-amdgpu with ZLUDA; it launches fine, but it uses my CPU instead of my GPU.

Need console log.

Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: v1.9.3-amd-32-gcaac83d4d
Commit hash: caac83d4dcba41d907e8ce8dff1423592d594ac4
Using ZLUDA in D:\Stable\stable-diffusion-webui-directml\.zluda
Installing sd-webui-controlnet requirement: changing opencv-python version from 4.10.0.84 to 4.8.0
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
rank_zero_deprecation(
Launching Web UI with arguments: --use-zluda --autolaunch
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\vq_model.py:20: FutureWarning: VQEncoderOutput is deprecated and will be removed in version 0.31. Importing VQEncoderOutput from diffusers.models.vq_model is deprecated and this will be removed in a future version. Please use from diffusers.models.autoencoders.vq_model import VQEncoderOutput, instead.
deprecate("VQEncoderOutput", "0.31", deprecation_message)
D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\models\vq_model.py:25: FutureWarning: VQModel is deprecated and will be removed in version 0.31. Importing VQModel from diffusers.models.vq_model is deprecated and this will be removed in a future version. Please use from diffusers.models.autoencoders.vq_model import VQModel, instead.
deprecate("VQModel", "0.31", deprecation_message)
ONNX: version=1.18.1 provider=DmlExecutionProvider, available=['AzureExecutionProvider', 'CPUExecutionProvider']

@lshqqytiger
Owner

Remove the venv folder and try again.

@DOUBLEXX666

I did that before, and I get this:

Traceback (most recent call last):
File "D:\Stable\stable-diffusion-webui-directml\launch.py", line 48, in <module>
main()
File "D:\Stable\stable-diffusion-webui-directml\launch.py", line 39, in main
prepare_environment()
File "D:\Stable\stable-diffusion-webui-directml\modules\launch_utils.py", line 669, in prepare_environment
from modules import devices
File "D:\Stable\stable-diffusion-webui-directml\modules\devices.py", line 6, in <module>
from modules import errors, shared, npu_specific
File "D:\Stable\stable-diffusion-webui-directml\modules\shared.py", line 4, in <module>
import gradio as gr
File "D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\__init__.py", line 3, in <module>
import gradio.components as components
File "D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\components\__init__.py", line 1, in <module>
from gradio.components.annotated_image import AnnotatedImage
File "D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\gradio\components\annotated_image.py", line 8, in <module>
from gradio_client.documentation import document, set_documentation_group
File "D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\gradio_client\__init__.py", line 1, in <module>
from gradio_client.client import Client
File "D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\gradio_client\client.py", line 24, in <module>
from huggingface_hub import CommitOperationAdd, SpaceHardware, SpaceStage
File "D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\__init__.py", line 503, in __getattr__
submod = importlib.import_module(submod_path)
File "C:\Users\diego\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "D:\Stable\stable-diffusion-webui-directml\venv\lib\site-packages\huggingface_hub\hf_api.py", line 47, in <module>
from tqdm.auto import tqdm as base_tqdm
ModuleNotFoundError: No module named 'tqdm.auto'
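(Editor's note: the traceback above ends in "No module named 'tqdm.auto'", which usually points to a broken or partially installed tqdm in the venv. A small, illustrative helper for checking whether a dotted module path resolves before reinstalling; the function name is an assumption, not part of the webui.)

```python
# Check whether a dotted module path (e.g. "tqdm.auto") can be resolved
# without fully importing it, using the stdlib import machinery.
import importlib.util

def submodule_available(name: str) -> bool:
    """True if the dotted module path resolves to an importable spec."""
    try:
        return importlib.util.find_spec(name) is not None
    except (ImportError, ValueError):
        return False

# Example: submodule_available("tqdm.auto") being False in this venv would
# confirm the broken install behind the traceback above.
```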


4 participants