Checklist

- The issue is caused by an extension, but I believe it is caused by a bug in the webui
- The issue exists in the current version of the webui
- The issue has not been reported before recently
- The issue has been reported before but has not been fixed yet
What happened?
Following the AMD and Arch Linux instructions in the Wiki, I get:
glibc version is 2.39
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.4
Python 3.11.8 (main, Feb 12 2024, 14:50:05) [GCC 13.2.1 20230801]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Launching Web UI with arguments: --skip-torch-cuda-test --no-half
Unable to find TSan function AnnotateHappensAfter.
Unable to find TSan function AnnotateHappensBefore.
Unable to find TSan function AnnotateIgnoreWritesBegin.
Unable to find TSan function AnnotateIgnoreWritesEnd.
Unable to find TSan function AnnotateNewMemory.
Unable to find TSan function __tsan_func_entry.
Unable to find TSan function __tsan_func_exit.
Warning: please export TSAN_OPTIONS='ignore_noninstrumented_modules=1' to avoid false positive reports from the OpenMP runtime!
[atlas:365908:0:365908] Caught signal 11 (Segmentation fault: address not mapped to object at address (nil))
==== backtrace (tid: 365908) ====
0 0x000000000003c770 __sigaction() ???:0
=================================
./webui.sh: line 297: 365908 Segmentation fault (core dumped) "${python_cmd}" -u "${LAUNCH_SCRIPT}" "$@"
With `python launch.py --precision full --no-half --skip-torch-cuda-test` I get:
python launch.py --precision full --no-half --skip-torch-cuda-test
Python 3.11.8 (main, Feb 12 2024, 14:50:05) [GCC 13.2.1 20230801]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Launching Web UI with arguments: --precision full --no-half --skip-torch-cuda-test
python: /usr/src/debug/hip-runtime-amd/clr-rocm-6.0.2/hipamd/src/hip_code_object.cpp:762: hip::FatBinaryInfo** hip::StatCO::addFatBinary(const void*, bool): Assertion `err == hipSuccess' failed.
zsh: IOT instruction (core dumped) python launch.py --precision full --no-half --skip-torch-cuda-test
When the virtual env is created without the --system-site-packages flag, the models load and I am able to access the web UI. When I generate images, CPU usage goes to 56% while the GPU sits idle at 6%. It takes about 50-60 seconds to generate one 512x512 image.
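Whether the venv was created with --system-site-packages determines whether the system-wide ROCm torch build (from python-pytorch-opt-rocm) is visible inside it. A minimal sketch for checking this from inside the venv, assuming a standard `pyvenv.cfg` layout (the helper name is mine, not part of the webui):

```python
from pathlib import Path
import sys

def system_site_packages_enabled():
    """Return True if the active venv was created with --system-site-packages,
    False if it was not, or None when not running inside a venv at all."""
    cfg = Path(sys.prefix) / "pyvenv.cfg"
    if not cfg.exists():
        return None  # this interpreter is not a venv
    for line in cfg.read_text().splitlines():
        key, _, value = line.partition("=")
        if key.strip() == "include-system-site-packages":
            return value.strip().lower() == "true"
    return False

print(system_site_packages_enabled())
```

If this prints False, the venv cannot see the distro's torch package and the webui will install its own wheels instead.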
./webui.sh --skip-torch-cuda-test --no-half
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye), Fedora 34+ and openSUSE Leap 15.4 or newer.
################################################################
Running on salvaje user
################################################################
Repo already cloned, using it as install directory
################################################################
python venv already activate or run without venv: /home/salvaje/.virtualenvs/stable-diffusion-webui-xxxg
################################################################
Launching launch.py...
################################################################
glibc version is 2.39
Check TCMalloc: libtcmalloc_minimal.so.4
libtcmalloc_minimal.so.4 is linked with libc.so,execute LD_PRELOAD=/usr/lib/libtcmalloc_minimal.so.4
Python 3.11.8 (main, Feb 12 2024, 14:50:05) [GCC 13.2.1 20230801]
Version: v1.9.0
Commit hash: adadb4e3c7382bf3e4f7519126cd6c70f4f8557b
Installing clip
Installing requirements
Launching Web UI with arguments: --skip-torch-cuda-test --no-half
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx', memory monitor disabled
Loading weights [6ce0161689] from /home/salvaje/ai/stable-diffusion-webui/models/Stable-diffusion/v1-5-pruned-emaonly.safetensors
Creating model from config: /home/salvaje/ai/stable-diffusion-webui/configs/v1-inference.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 18.2s (prepare environment: 15.3s, import torch: 1.3s, import gradio: 0.3s, setup paths: 0.4s, other imports: 0.2s, load scripts: 0.1s, create ui: 0.3s, gradio launch: 0.2s)
/usr/bin/xdg-open: line 1045: x-www-browser: command not found
Applying attention optimization: InvokeAI... done.
Model loaded in 1.0s (create model: 0.4s, apply weights to model: 0.5s).
100%|███████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:49<00:00, 2.49s/it]
Total progress: 100%|███████████████████████████████████████████████████████████████████████████████| 20/20 [00:49<00:00, 2.50s/it]
Total progress: 100%|███████████████████████████████████████████████████████████████████████████████| 20/20 [00:49<00:00, 2.52s/it]
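The "Found no NVIDIA driver" warning and the idle GPU suggest this torch build cannot see the card at all. A quick diagnostic sketch, run inside the webui virtualenv, that reports which backend torch was actually built for (the helper name is mine; `torch.version.hip` is set only on ROCm builds):

```python
def gpu_backend_report():
    """Summarise which compute backend this torch build can actually use."""
    try:
        import torch
    except ImportError:
        return "torch is not installed in this environment"
    if not torch.cuda.is_available():
        return "no GPU visible to torch - generation will fall back to the CPU"
    hip = getattr(torch.version, "hip", None)
    if hip:
        return f"ROCm {hip} build, device: {torch.cuda.get_device_name(0)}"
    return f"CUDA {torch.version.cuda} build, device: {torch.cuda.get_device_name(0)}"

print(gpu_backend_report())
```

On a working ROCm setup this should name the 7800 XT; the CPU-fallback message would match the slow generation times above.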
Additional information
I looked at issue #15432, which was closed with lshqqytiger#433 marked as providing a solution, but none of the options mentioned there have worked. I am on Linux, so DirectML is not an option.
Thanks
I am no developer, but the 7800 XT has 16GB VRAM, right? I got it running after setting --medvram and --medvram-sdxl on my 6800 XT with 16GB VRAM. Without those flags I cannot run it on my GPU.
I changed my Python version from 3.12 to 3.10 and it mostly solved the issue. ROCm compilation can be a problem, but other than that, it works now. I did not need the --medvram flag.
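Since the interpreter version was the culprit here, a quick sketch for verifying which Python the venv is actually running before launching the webui (the supported range below is an assumption based on the comment above, not a documented bound):

```python
import sys

def python_version_ok(min_minor=10, max_minor=11):
    """Return True if the running interpreter is Python 3.10-3.11,
    the range that reportedly works; 3.12 often breaks dependency wheels."""
    major, minor = sys.version_info[:2]
    return major == 3 and min_minor <= minor <= max_minor

print(sys.version.split()[0], python_version_ok())
```

If this prints False, recreating the venv against an older interpreter (as the commenter did) is worth trying before deeper debugging.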
Steps to reproduce the problem

1. Install python-pytorch-opt-rocm
2. Create the virtual env with the --system-site-packages flag
3. Run ./webui.sh

What should have happened?
The GPU should be doing the work, and image generation should take less time if the hardware is fully utilised.
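For context, the step rate in the console log is consistent with the reported wall time per image; a rough check of the arithmetic (the "well under 1 s/it" GPU figure is a typical ballpark, not a measurement from this system):

```python
# The log reports ~2.49 s/it over 20 sampling steps, which matches the
# observed 50-60 s per 512x512 image. A working GPU run of SD 1.5 at
# this size would typically be well under 1 s/it.
steps = 20
seconds_per_iteration = 2.49
print(round(steps * seconds_per_iteration, 1))  # -> 49.8
```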
What browsers do you use to access the UI?
Mozilla Firefox