fix recursion error
lshqqytiger committed Jun 1, 2024
1 parent d23a172 commit 2c29feb
Showing 1 changed file with 4 additions and 3 deletions.
7 changes: 4 additions & 3 deletions modules/zluda.py
@@ -5,6 +5,7 @@
 from torch._prims_common import DeviceLikeType
 import onnxruntime as ort
 from modules import shared, devices
+from modules.onnx_impl.execution_providers import available_execution_providers, ExecutionProvider


PLATFORM = sys.platform
@@ -60,10 +61,10 @@ def initialize_zluda():
     torch.backends.cuda.enable_cudnn_sdp = do_nothing

     # ONNX Runtime is not supported
-    ort.capi._pybind_state.get_available_providers = lambda: [v for v in ort.get_available_providers() if v != 'CUDAExecutionProvider'] # pylint: disable=protected-access
+    ort.capi._pybind_state.get_available_providers = lambda: [v for v in available_execution_providers if v != ExecutionProvider.CUDA] # pylint: disable=protected-access
     ort.get_available_providers = ort.capi._pybind_state.get_available_providers # pylint: disable=protected-access
-    if shared.opts.onnx_execution_provider == 'CUDAExecutionProvider':
-        shared.opts.onnx_execution_provider = 'CPUExecutionProvider'
+    if shared.opts.onnx_execution_provider == ExecutionProvider.CUDA:
+        shared.opts.onnx_execution_provider = ExecutionProvider.CPU

     devices.device_codeformer = devices.cpu

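The recursion the commit title refers to comes from the pre-patch pair of assignments: the lambda called `ort.get_available_providers()`, and the very next line rebound `ort.get_available_providers` to that same lambda, so any later call re-entered itself. The patch filters a static list instead, breaking the self-reference. A minimal sketch, using `types.SimpleNamespace` as a stand-in for the real `onnxruntime` module (an assumption for illustration only):

```python
from types import SimpleNamespace

# Stand-in for onnxruntime's two entry points (hypothetical minimal model).
pybind = SimpleNamespace(
    get_available_providers=lambda: ["CUDAExecutionProvider", "CPUExecutionProvider"]
)
ort = SimpleNamespace(capi=SimpleNamespace(_pybind_state=pybind))
ort.get_available_providers = pybind.get_available_providers

# Pre-patch: the lambda looks up ort.get_available_providers at call time...
ort.capi._pybind_state.get_available_providers = lambda: [
    v for v in ort.get_available_providers() if v != "CUDAExecutionProvider"
]
# ...and that attribute is then rebound to the lambda itself:
ort.get_available_providers = ort.capi._pybind_state.get_available_providers
try:
    ort.get_available_providers()  # each call re-enters the same lambda
except RecursionError:
    print("recursed")

# Post-patch: filter a precomputed list, so there is no self-reference.
available_execution_providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
ort.capi._pybind_state.get_available_providers = lambda: [
    v for v in available_execution_providers if v != "CUDAExecutionProvider"
]
ort.get_available_providers = ort.capi._pybind_state.get_available_providers
print(ort.get_available_providers())  # ['CPUExecutionProvider']
```

The key detail is Python's late binding: the lambda body resolves `ort.get_available_providers` at each call, not at definition time, which is why the original monkey-patch chased its own tail.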

2 comments on commit 2c29feb

@Roninos commented on 2c29feb Jun 1, 2024
Still doesn't work; onnxruntime for CPU is installed.
From https://github.com/lshqqytiger/stable-diffusion-webui-directml

  • branch master -> FETCH_HEAD
    Already up to date.
    venv "C:\AI\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
    WARNING: ZLUDA works best with SD.Next. Please consider migrating to SD.Next.
    fatal: No names found, cannot describe anything.
    Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
    Version: 1.9.4
    Commit hash: 2c29feb
    Using ZLUDA in C:\AI\stable-diffusion-webui-directml.zluda
    no module 'xformers'. Processing without...
    no module 'xformers'. Processing without...
    No module 'xformers'. Proceeding without it.
    C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: pytorch_lightning.utilities.distributed.rank_zero_only has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from pytorch_lightning.utilities instead.
    rank_zero_deprecation(
    Launching Web UI with arguments: --autolaunch --theme dark --use-zluda
    C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
    torch.utils._pytree._register_pytree_node(
    C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\diffusers\utils\outputs.py:63: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
    torch.utils._pytree._register_pytree_node(
    ONNX: version=1.18.0 provider=CPUExecutionProvider, available=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
    ControlNet preprocessor location: C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\annotator\downloads
    2024-06-01 19:42:06,629 - ControlNet - INFO - ControlNet v1.1.449
    Loading weights [1718b5bb2d] from C:\AI\stable-diffusion-webui-directml\models\Stable-diffusion\albedobaseXL_v21.safetensors
    2024-06-01 19:42:07,109 - ControlNet - INFO - ControlNet UI callback registered.
    Creating model from config: C:\AI\stable-diffusion-webui-directml\repositories\generative-models\configs\inference\sd_xl_base.yaml
    Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
Startup time: 22.3s (prepare environment: 19.2s, initialize shared: 6.3s, other imports: 0.1s, load scripts: 1.7s, create ui: 1.4s, gradio launch: 2.7s).
Applying attention optimization: Doggettx... done.
Model loaded in 9.6s (load weights from disk: 0.6s, create model: 1.2s, apply weights to model: 6.0s, move model to device: 0.5s, load textual inversion embeddings: 0.3s, calculate empty prompt: 0.8s).
Reusing loaded model albedobaseXL_v21.safetensors [1718b5bb2d] to load juggernautXL_v8Rundiffusion.safetensors [aeb7e9e689]
Loading weights [aeb7e9e689] from C:\AI\stable-diffusion-webui-directml\models\Stable-diffusion\juggernautXL_v8Rundiffusion.safetensors
Applying attention optimization: Doggettx... done.
Weights loaded in 9.7s (send model to cpu: 3.7s, load weights from disk: 0.3s, apply weights to model: 3.9s, move model to device: 1.7s).
2024-06-01 19:43:22,059 - ControlNet - INFO - unit_separate = False, style_align = False
2024-06-01 19:43:22,291 - ControlNet - INFO - Loading model: ip-adapter_instant_id_sdxl [eb2d3ec0]
2024-06-01 19:43:23,618 - ControlNet - INFO - Loaded state_dict from [C:\AI\stable-diffusion-webui-directml\models\ControlNet\ip-adapter_instant_id_sdxl.bin]
2024-06-01 19:43:26,518 - ControlNet - INFO - ControlNet model ip-adapter_instant_id_sdxl eb2d3ec0 loaded.
2024-06-01 19:43:26,531 - ControlNet - INFO - Using preprocessor: instant_id_face_embedding
2024-06-01 19:43:26,532 - ControlNet - INFO - preprocessor resolution = 512
2024-06-01 19:43:27.0390188 [E:onnxruntime:, inference_session.cc:2045 onnxruntime::InferenceSession::Initialize::<lambda_d4e0caa0782683b2ee97e3859f73dc9c>::operator ()] Exception during initialization: C:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall C:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=R2D1OS-PC ; file=C:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream);

*** Error running process: C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py
Traceback (most recent call last):
File "C:\AI\stable-diffusion-webui-directml\modules\scripts.py", line 825, in process
script.process(p, script_args)
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1222, in process
self.controlnet_hack(p)
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 1207, in controlnet_hack
self.controlnet_main_entry(p)
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 941, in controlnet_main_entry
controls, hr_controls, additional_maps = get_control(
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 290, in get_control
controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in optional_tqdm(input_images)]))
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 290, in <listcomp>
controls, hr_controls = list(zip(*[preprocess_input_image(img) for img in optional_tqdm(input_images)]))
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\controlnet.py", line 242, in preprocess_input_image
result = preprocessor.cached_call(
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 196, in cached_call
result = self._cached_call(input_image, *args, **kwargs)
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 82, in decorated_func
return cached_func(*args, **kwargs)
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\utils.py", line 66, in cached_func
return func(*args, **kwargs)
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\supported_preprocessor.py", line 209, in _cached_call
return self(*args, **kwargs)
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\legacy_preprocessors.py", line 105, in __call__
result, is_image = self.call_function(
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 725, in run_model_instant_id
self.load_model()
File "C:\AI\stable-diffusion-webui-directml\extensions\sd-webui-controlnet\scripts\preprocessor\legacy\processor.py", line 669, in load_model
self.model = FaceAnalysis(
File "C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\insightface\app\face_analysis.py", line 31, in __init__
model = model_zoo.get_model(onnx_file, **kwargs)
File "C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 96, in get_model
model = router.get_model(providers=providers, provider_options=provider_options)
File "C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 40, in get_model
session = PickableInferenceSession(self.onnx_file, **kwargs)
File "C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\insightface\model_zoo\model_zoo.py", line 25, in __init__
super().__init__(model_path, **kwargs)
File "C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "C:\AI\stable-diffusion-webui-directml\venv\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 483, in _create_inference_session
sess.initialize_session(providers, provider_options, disabled_optimizers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: C:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:123 onnxruntime::CudaCall C:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_call.cc:116 onnxruntime::CudaCall CUDNN failure 4: CUDNN_STATUS_INTERNAL_ERROR ; GPU=0 ; hostname=R2D1OS-PC ; file=C:\a_work\1\s\onnxruntime\core\providers\cuda\cuda_execution_provider.cc ; line=182 ; expr=cudnnSetStream(cudnn_handle_, stream);

@Roninos commented on 2c29feb Jun 1, 2024

I restarted the computer and instant-id worked; I didn't check the rest.
