
[Bug]: Error Expecting value: line 1 column 1 (char 0) #466

Open
1 of 6 tasks
andyhebear opened this issue May 21, 2024 · 3 comments

Comments

@andyhebear

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

In the browser at http://127.0.0.1:7860/, I input "cat" and clicked Generate.
The web page shows the error: Error Expecting value: line 1 column 1 (char 0).
No error appears in the console, and no image can be generated.
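For context, "Expecting value: line 1 column 1 (char 0)" is the message Python's json module produces when asked to parse something that is not JSON, typically an empty response body. A minimal sketch of a plausible origin, assuming the frontend received an empty body from a backend request that died mid-generation:

```python
import json

# The webui frontend expects every backend response to be JSON. If the
# backend crashes mid-request and an empty (or non-JSON) body comes back,
# parsing it fails with exactly the message shown in the browser:
try:
    json.loads("")  # empty response body
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)
```

This would explain why the error surfaces only in the browser: the failure happens while the page parses the response, not in the server console.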

Steps to reproduce the problem

Open http://127.0.0.1:7860/ in the browser and click Generate.
The web page shows the error: Error Expecting value: line 1 column 1 (char 0).

What should have happened?

Clicking Generate at http://127.0.0.1:7860/ should produce an image instead of showing the error Expecting value: line 1 column 1 (char 0).

What browsers do you use to access the UI ?

No response

Sysinfo

Windows 10

Console logs

Found existing installation: torch 2.0.0
Uninstalling torch-2.0.0:
  Successfully uninstalled torch-2.0.0
Found existing installation: torchvision 0.15.1
Uninstalling torchvision-0.15.1:
  Successfully uninstalled torchvision-0.15.1
WARNING: Skipping torchaudio as it is not installed.
WARNING: Skipping torchtext as it is not installed.
WARNING: Skipping functorch as it is not installed.
WARNING: Skipping xformers as it is not installed.
.\venv\Scripts\activate.ps1
Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple/
Requirement already satisfied: torch-directml in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (0.2.0.dev230426)
Collecting torch==2.0.0
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/87/e2/62dbdfc85d3b8f771bc4b1a979ee6a157dbaa8928981dabbf45afc6d13dc/torch-2.0.0-cp310-cp310-win_amd64.whl (172.3 MB)
Collecting torchvision==0.15.1
  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/03/06/6ba7532c66397defffb79f64cac46f812a29b2f87a4ad65a3e95bc164d62/torchvision-0.15.1-cp310-cp310-win_amd64.whl (1.2 MB)
Requirement already satisfied: sympy in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torch==2.0.0->torch-directml) (1.12)
Requirement already satisfied: filelock in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torch==2.0.0->torch-directml) (3.14.0)
Requirement already satisfied: typing-extensions in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torch==2.0.0->torch-directml) (4.11.0)
Requirement already satisfied: jinja2 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torch==2.0.0->torch-directml) (3.1.4)
Requirement already satisfied: networkx in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torch==2.0.0->torch-directml) (3.3)
Requirement already satisfied: pillow!=8.3.*,>=5.3.0 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torchvision==0.15.1->torch-directml) (10.3.0)
Requirement already satisfied: requests in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torchvision==0.15.1->torch-directml) (2.32.1)
Requirement already satisfied: numpy in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from torchvision==0.15.1->torch-directml) (1.26.4)
Requirement already satisfied: MarkupSafe>=2.0 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from jinja2->torch==2.0.0->torch-directml) (2.1.5)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (2.2.1)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (3.3.2)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (2024.2.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from requests->torchvision==0.15.1->torch-directml) (3.7)
Requirement already satisfied: mpmath>=0.19 in c:\users\rains\appdata\local\programs\python\python310\lib\site-packages (from sympy->torch==2.0.0->torch-directml) (1.3.0)
Installing collected packages: torch, torchvision
Successfully installed torch-2.0.0 torchvision-0.15.1

[notice] A new release of pip available: 22.2.1 -> 24.0
[notice] To update, run: python.exe -m pip install --upgrade pip
venv "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug  1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.9.3
Commit hash: <none>
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\pytorch_lightning\utilities\distributed.py:258: LightningDeprecationWarning: `pytorch_lightning.utilities.distributed.rank_zero_only` has been deprecated in v1.8.1 and will be removed in v2.0.0. You can import it from `pytorch_lightning.utilities` instead.
  rank_zero_deprecation(
Launching Web UI with arguments: --skip-torch-cuda-test --use-directml
ONNX: selected=DmlExecutionProvider, available=['DmlExecutionProvider', 'CPUExecutionProvider']
==============================================================================
You are running torch 2.0.0+cpu.
The program is tested to work with torch 2.1.2.
To reinstall the desired version, run with commandline flag --reinstall-torch.
Beware that this will cause a lot of large files to be downloaded, as well as
there are reports of issues with training tab on the latest version.

Use --skip-version-check commandline argument to disable this check.
==============================================================================
Loading weights [7f96a1a9ca] from C:\PhotoShop\AI\SD-Webui-AmdGpu\models\Stable-diffusion\anything-v5-PrtRE.safetensors
Creating model from config: C:\PhotoShop\AI\SD-Webui-AmdGpu\configs\v1-inference.yaml
C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\huggingface_hub\file_download.py:1132: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
Startup time: 4.7s (prepare environment: 6.5s, initialize shared: 1.3s, load scripts: 0.6s, create ui: 0.4s, gradio launch: 0.2s).
config.json: 100%|████████████████████████████████████████████████████████████████████████| 4.52k/4.52k [00:00<?, ?B/s]
Applying attention optimization: InvokeAI... done.
Model loaded in 5.6s (create model: 4.1s, apply weights to model: 1.1s, calculate empty prompt: 0.2s).

Additional information

No response

@andyhebear
Author

I added --no-gradio-queue to webui-user, and clicking Generate still shows an error:
To create a public link, set share=True in launch().
Startup time: 3.8s (prepare environment: 5.3s, initialize shared: 0.8s, load scripts: 0.6s, create ui: 0.4s, gradio launch: 0.2s).
Applying attention optimization: InvokeAI... done.
Model loaded in 2.2s (create model: 0.6s, apply weights to model: 1.4s, calculate empty prompt: 0.1s).
  5%|████▏                                  | 1/20 [00:08<02:34,  8.13s/it]
*** Error completing request
*** Arguments: ('task(t5zuwj0a6wxjabn)', <gradio.routes.Request object at 0x0000017AC5573550>, 'cat', '', [], 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', 'Use same scheduler', '', '', [], 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, False, False, 'positive', 'comma', 0, False, False, 'start', '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\txt2img.py", line 109, in txt2img
processed = processing.process_images(p)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\processing.py", line 847, in process_images
res = process_images_inner(p)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\processing.py", line 1075, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\processing.py", line 1422, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_samplers_kdiffusion.py", line 221, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_samplers_common.py", line 272, in launch_sampling
return func()
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_samplers_kdiffusion.py", line 221, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_samplers_cfg_denoiser.py", line 237, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict(cond_in, image_cond_in))
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_hijack_utils.py", line 18, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_hijack_utils.py", line 32, in __call__
return self.__orig_func(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_unet.py", line 91, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 802, in forward
h = module(h, emb, context)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_hijack_optimizations.py", line 393, in split_cross_attention_forward_invokeAI
r = einsum_op(q, k, v)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_hijack_optimizations.py", line 367, in einsum_op
return einsum_op_dml(q, k, v)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_hijack_optimizations.py", line 354, in einsum_op_dml
return einsum_op_tensor_mem(q, k, v, (mem_reserved - mem_active) if mem_reserved > mem_active else 1)
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_hijack_optimizations.py", line 336, in einsum_op_tensor_mem
return einsum_op_slice_1(q, k, v, max(q.shape[1] // div, 1))
File "C:\PhotoShop\AI\SD-Webui-AmdGpu\modules\sd_hijack_optimizations.py", line 308, in einsum_op_slice_1
r[:, i:end] = einsum_op_compvis(q[:, i:end], k, v)
RuntimeError: Could not allocate tensor with 4915840 bytes. There is not enough GPU video memory available!
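The traceback bottoms out in the webui's sliced-attention fallback (einsum_op_slice_1), which computes attention in chunks along the query axis so each chunk's score matrix fits in memory; here even the 4.9 MB slice could not be allocated. A minimal NumPy sketch of that slicing technique (the function names mirror the traceback, but the real code operates on torch tensors, so treat this as an illustration only):

```python
import numpy as np

def einsum_op_compvis(q, k, v):
    # plain attention: softmax(q @ k^T / sqrt(d)) @ v
    s = np.einsum('bid,bjd->bij', q, k) * (q.shape[-1] ** -0.5)
    s = np.exp(s - s.max(axis=-1, keepdims=True))   # numerically stable softmax
    s = s / s.sum(axis=-1, keepdims=True)
    return np.einsum('bij,bjd->bid', s, v)

def einsum_op_slice_1(q, k, v, slice_size):
    # same result, but computed in chunks along q's token axis (axis 1)
    # so the b x slice x j score matrix bounds peak memory
    r = np.zeros_like(q)
    for i in range(0, q.shape[1], slice_size):
        end = min(i + slice_size, q.shape[1])
        r[:, i:end] = einsum_op_compvis(q[:, i:end], k, v)
    return r
```

Because each query row's softmax is independent of the others, the chunked result is numerically identical to the unsliced one; slicing only trades peak memory for extra kernel launches.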


@andyhebear
Author

call webui.bat --skip-torch-cuda-test --use-directml --no-gradio-queue
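For reference, since the traceback above ends in a DirectML out-of-memory allocation failure, one common mitigation (a suggestion, not something confirmed in this thread) is to add the webui's memory-saving launch flag:

```shell
REM sketch: the same launch command with the webui's low-VRAM flag added
call webui.bat --skip-torch-cuda-test --use-directml --no-gradio-queue --lowvram
```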

@CS1o

CS1o commented May 27, 2024

What's your GPU?
