[Bug]: use_directml works, same PC, use_zluda run error #418
Comments
You can't skip the CUDA test with ZLUDA, and you also can't reuse the virtual environment that was used with DirectML. Remove or rename the venv folder (if you want a backup) and try again.
Following your suggestion, I renamed venv to venv_directML and ran .\webui-user.bat again. Error: Failed to automatically patch torch with ZLUDA. Could not find ZLUDA from PATH. Currently I have set COMMANDLINE_ARGS=--use-zluda --opt-sub-quad-attention --lowvram --disable-nan-check. The log ends with: Collecting mpmath>=0.19 [notice] A new release of pip available: 22.2.1 -> 24.0
#385 (comment)
Add the HIP SDK and ZLUDA directories to Path: I do not know how to do this step. Could you teach me how? Thanks
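For reference, directories can be appended to the user PATH from a Command Prompt with `setx`. The install locations below are assumptions; adjust them to wherever you installed the HIP SDK and extracted ZLUDA:

```bat
:: Assumed install locations -- adjust to your actual paths.
:: Appends the HIP SDK bin folder and the ZLUDA folder to the user PATH.
setx PATH "%PATH%;C:\Program Files\AMD\ROCm\5.7\bin;C:\ZLUDA"
```

Alternatively, edit Path via System Properties → Environment Variables. Either way, open a new terminal (or reboot) afterwards for the change to take effect.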
The Path was added successfully, and after rebooting the PC the WebUI starts, but now there is another error. Is it related to my AMD 6500 XT? PS D:\GitResource\stable-diffusion-webui-directml> .\webui-user.bat To create a public link, set Stable diffusion model failed to load
The RX 6500 XT is gfx1034, which is not officially supported by the HIP SDK. But you can build the required files yourself, and some people already have: https://github.com/brknsoul/ROCmLibs (it does not contain a gfx1034 build, though). Related: https://www.reddit.com/r/StableDiffusion/comments/1asewch/installing_zluda_for_amd_gpus_in_windows_ie_use/
Can you help me download Tensile-fix-fallback-arch-build.patch? I do not have the rights to get it.
It was running fine previously. Now I am having the same problem as LYC878484.
Checklist
What happened?
Changed `set COMMANDLINE_ARGS=--use-directml --no-half --precision full --skip-torch-cuda-test --opt-sub-quad-attention --lowvram --disable-nan-check` to `set COMMANDLINE_ARGS=--use-zluda --no-half --precision full --skip-torch-cuda-test --opt-sub-quad-attention --lowvram --disable-nan-check`.
Steps to reproduce the problem
1. Change COMMANDLINE_ARGS as above.
2. Run "webui-user.bat" and set the parameters in the WebUI.
3. Generate a picture. Error: RuntimeError: "log_vml_cpu" not implemented for 'Half' Time taken: 0.8 sec.
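The error means the CPU backend has no float16 (Half) kernel registered for the `log` operation in that torch build. Conceptually, PyTorch dispatches each op to a kernel keyed by device and dtype; the toy sketch below illustrates that dispatch idea only (the names and table here are illustrative, not PyTorch's actual API):

```python
import math

# Toy dispatch table: one CPU kernel per (op, dtype) pair, as a stand-in
# for how an op can exist for float32 but be missing for float16.
KERNELS = {
    ("log", "float32"): lambda xs: [math.log(x) for x in xs],
    ("log", "float64"): lambda xs: [math.log(x) for x in xs],
    # Deliberately no ("log", "float16") entry, mirroring the reported
    # 'not implemented for Half' failure on the CPU path.
}

def dispatch(op, dtype, xs):
    """Look up and run the kernel for (op, dtype); fail if unregistered."""
    kernel = KERNELS.get((op, dtype))
    if kernel is None:
        raise NotImplementedError(f'"{op}_vml_cpu" not implemented for {dtype!r}')
    return kernel(xs)
```

This is also why `--no-half` sidesteps the error: tensors stay in float32, for which a kernel exists.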
What should have happened?
--use-zluda and --use-directml should be usable in the same environment, with only COMMANDLINE_ARGS needing to change.
What browsers do you use to access the UI?
Google Chrome
Sysinfo
sysinfo-2024-03-17-11-50.json
Console logs
Additional information
My environment is Windows 10 + Intel 12400F CPU + AMD 6500 XT.