[AMD] [Linux] AttributeError: 'NoneType' object has no attribute 'lower' #6709
Comments
First of all, you could try to run a GGUF model without GPU support (CPU only) and see if that works, to rule out the symlink issue. From my experience, getting TGWUI running on an AMD GPU was painful and the result was fragile at best. It worked one day, then after a reinstall it stopped working the next (I think an update in llama.cpp might have been the culprit). You might also try atinoda/text-generation-webui-docker in a Docker container (which is what I use now). It seems to be much less fragile and more robust.
I try to avoid Docker whenever possible. I dislike it because it's very cumbersome to make small changes (like installing a pip package, for example). I tried what you said and it worked. It's not the best in terms of organization, but I'll try moving the TGWUI directory to the same SSD as my models so I can hard link them, and see if I encounter any further issues. Thank you.
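The hard-link idea above only works when both paths live on the same filesystem, which is why the TGWUI directory would have to move to the same SSD as the models. A minimal sketch of the mechanics, using throwaway temporary directories to stand in for the real (hypothetical) paths:

```shell
# Throwaway directories standing in for your model SSD and the
# text-generation-webui/models directory (substitute your real paths).
models_src=$(mktemp -d)    # stands in for /path/to/your/models
tgwui_models=$(mktemp -d)  # stands in for text-generation-webui/models

echo "fake weights" > "$models_src/my-model.gguf"

# A hard link shares the same inode: no extra disk space is used, but
# source and destination must be on the same filesystem.
ln "$models_src/my-model.gguf" "$tgwui_models/my-model.gguf"

# Both paths report the same inode number.
stat -c %i "$models_src/my-model.gguf" "$tgwui_models/my-model.gguf"
```

Unlike a symlink, the linked file keeps working even if the original path is later renamed, since both names point at the same underlying data.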
Glad it worked :-) I agree: if you want to (or need to) fiddle around with the installation, Docker is not a comfortable way to go. It just works for me because I am usually happy with the app as provided, and it was also a reason for me personally to learn "how to Docker". You mentioned hard linking; maybe just providing the parameter (I hope I remember the right name) "--model-dir /path/to/my/models" in CMD_FLAGS.txt could do the trick for you. That might also rid you of the need to keep the models on the same SSD as your TGWUI install.
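For reference, CMD_FLAGS.txt lives in the text-generation-webui root directory and the one-click start scripts pass its contents through to the server. A sketch of adding the flag (the model path is hypothetical, substitute your own):

```shell
# Run from the text-generation-webui root directory.
# /mnt/ssd/models is a hypothetical path; use your actual model directory.
echo "--model-dir /mnt/ssd/models" >> CMD_FLAGS.txt
cat CMD_FLAGS.txt
```

After editing the file, restart via start_linux.sh so the new flag takes effect.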
The --model-dir option worked! Thanks again. Oddly, I did not find a list of all the options that can be passed to the start_linux.sh script anywhere in the documentation, nor is this particular one listed when running the script with the --help option. GPTQ models are not loading, but that is a separate issue and there are a few things I can try before bothering the nice people here for help. This issue can be closed.
Describe the bug
I have just installed text-generation-webui last night and most of my models fail to load. Below is the terminal output from attempting to load four different models in a row. As you can see, they all get the same error.
I have not touched any plugins or settings. All I did was clone the Git repo, run start_linux.sh, select AMD when prompted and attempt to load the models via the web interface.
More info:
I installed sentence_transformers to the local conda environment with pip as described in this issue, but it did not fix my problem.
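The traceback in the title means some code called `.lower()` on a value that was `None` rather than a string, typically because a model name or path was never resolved. A minimal Python sketch of the failure mode and a guard against it (the function and its names are hypothetical illustrations, not TGWUI's actual loader code):

```python
# Hypothetical illustration: model_name arrives as None (e.g. the model path
# was never resolved), and calling .lower() on it would raise
# AttributeError: 'NoneType' object has no attribute 'lower'.
def pick_loader(model_name):
    if model_name is None:
        # Guard that turns the cryptic AttributeError into an actionable message.
        raise ValueError("model name is None; check the models directory or --model-dir")
    if model_name.lower().endswith(".gguf"):
        return "llama.cpp"
    return "transformers"
```

With the guard removed, calling `pick_loader(None)` reproduces exactly the AttributeError from the issue title.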
Is there an existing issue for this?
Reproduction
Screenshot
Logs
System Info