
Getting this error on automatic webui on colab #33

Closed
gilroff opened this issue Aug 6, 2023 · 4 comments · Fixed by #37

Comments

gilroff commented Aug 6, 2023

Describe the bug
I get this error in the webui on Colab and face swapping doesn't work:

85% 17/20 [00:05<00:00, 3.98it/s]
90% 18/20 [00:06<00:00, 4.11it/s]
95% 19/20 [00:06<00:00, 3.96it/s]
100% 20/20 [00:06<00:00, 3.01it/s]
2023-08-06 08:47:15,216 - FaceSwapLab - INFO - Try to use model : /content/sdw/models/faceswaplab/inswapper_128.onnx
2023-08-06 08:47:15,272 - FaceSwapLab - INFO - Load analysis model, will take some time. (> 30s)
Loading analysis model (first time is slow): 100% 1/1 [00:08<00:00, 8.49s/model]
2023-08-06 08:47:23,760 - FaceSwapLab - INFO - ("Applied providers: ['CPUExecutionProvider'], with options: "
"{'CPUExecutionProvider': {}}\n"
'find model: '
'/content/sdw/models/faceswaplab/analysers/models/buffalo_l/1k3d68.onnx '
"landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0\n"
"Applied providers: ['CPUExecutionProvider'], with options: "
"{'CPUExecutionProvider': {}}\n"
'find model: '
'/content/sdw/models/faceswaplab/analysers/models/buffalo_l/2d106det.onnx '
"landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0\n"
"Applied providers: ['CPUExecutionProvider'], with options: "
"{'CPUExecutionProvider': {}}\n"
'find model: '
'/content/sdw/models/faceswaplab/analysers/models/buffalo_l/det_10g.onnx '
"detection [1, 3, '?', '?'] 127.5 128.0\n"
"Applied providers: ['CPUExecutionProvider'], with options: "
"{'CPUExecutionProvider': {}}\n"
'find model: '
'/content/sdw/models/faceswaplab/analysers/models/buffalo_l/genderage.onnx '
"genderage ['None', 3, 96, 96] 0.0 1.0\n"
"Applied providers: ['CPUExecutionProvider'], with options: "
"{'CPUExecutionProvider': {}}\n"
'find model: '
'/content/sdw/models/faceswaplab/analysers/models/buffalo_l/w600k_r50.onnx '
"recognition ['None', 3, 112, 112] 127.5 127.5\n")
2023-08-06 08:47:23,761 - FaceSwapLab - ERROR - Failed to swap face in postprocess method : This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
Traceback (most recent call last):
File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab.py", line 178, in postprocess
swapped_images = swapper.process_images_units(
File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 777, in process_images_units
swapped = process_image_unit(model, units[0], image, info, force_blend)
File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 650, in process_image_unit
faces = get_faces(pil_to_cv2(image))
File "/content/sdw/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 372, in get_faces
face_analyser = copy.deepcopy(getAnalysisModel())
File "/usr/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/usr/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/usr/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/usr/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/lib/python3.10/copy.py", line 271, in _reconstruct
state = deepcopy(state, memo)
File "/usr/lib/python3.10/copy.py", line 146, in deepcopy
y = copier(x, memo)
File "/usr/lib/python3.10/copy.py", line 231, in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
File "/usr/lib/python3.10/copy.py", line 172, in deepcopy
y = _reconstruct(x, memo, *rv)
File "/usr/lib/python3.10/copy.py", line 273, in _reconstruct
y.__setstate__(state)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 33, in __setstate__
self.__init__(model_path)
File "/usr/local/lib/python3.10/dist-packages/insightface/model_zoo/model_zoo.py", line 25, in __init__
super().__init__(model_path, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 347, in __init__
self._create_inference_session(providers, provider_options, disabled_optimizers)
File "/usr/local/lib/python3.10/dist-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 375, in _create_inference_session
raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

Desktop (please complete the following information):

  • Google Colab free tier, Nvidia Tesla T4
glucauze (Owner) commented Aug 6, 2023

Yes, it's probably due to an optimization that isn't working. Version 1.2.1 should fix it.

gilroff (Author) commented Aug 6, 2023

Oh, all good then. Thanks for the extension :)

sujancok23 commented Aug 7, 2023

@glucauze

I have an issue after installing the extension. I tried updating and reinstalling, but the problem is still the same.

I'm using colab script from https://github.com/TheLastBen/fast-stable-diffusion

** Error loading script: faceswaplab.py
Traceback (most recent call last):
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/scripts.py", line 319, in load_scripts
script_module = script_loading.load_module(scriptfile.path)
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/modules/script_loading.py", line 10, in load_module
module_spec.loader.exec_module(module)
File "<frozen importlib._bootstrap_external>", line 883, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab.py", line 9, in <module>
from scripts.faceswaplab_api import faceswaplab_api
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab_api/faceswaplab_api.py", line 12, in <module>
from scripts.faceswaplab_swapping import swapper
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab_swapping/swapper.py", line 31, in <module>
from scripts.faceswaplab_ui.faceswaplab_unit_settings import FaceSwapUnitSettings
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab_ui/faceswaplab_unit_settings.py", line 12, in <module>
from scripts.faceswaplab_utils import face_checkpoints_utils
File "/content/gdrive/MyDrive/sd/stable-diffusion-webui/extensions/sd-webui-faceswaplab/scripts/faceswaplab_utils/face_checkpoints_utils.py", line 17, in <module>
import dill as pickle # will be removed in future versions
ModuleNotFoundError: No module named 'dill'

glucauze (Owner) commented Aug 7, 2023

This shouldn't happen; I think there was an error during installation. You can have a look at the general note on this subject: #36
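The traceback above ends in `ModuleNotFoundError: No module named 'dill'`, which the extension normally installs from its requirements.txt. If the installation note doesn't resolve it, a manual workaround (assuming the requirements step simply never ran in the webui's Python environment, e.g. the Colab cell) is:

```shell
# Install the missing dependency into the same Python environment the webui uses,
# then confirm it imports cleanly.
pip install dill
python -c "import dill; print(dill.__version__)"
```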

@glucauze glucauze linked a pull request Aug 7, 2023 that will close this issue