
[AMD] [Linux] AttributeError: 'NoneType' object has no attribute 'lower' #6709

Open
prmbittencourt opened this issue Jan 28, 2025 · 4 comments

Labels
bug Something isn't working

@prmbittencourt

prmbittencourt commented Jan 28, 2025

Describe the bug

I have just installed text-generation-webui last night and most of my models fail to load. Below is the terminal output from attempting to load four different models in a row. As you can see, they all get the same error.

I have not touched any plugins or settings. All I did was clone the Git repo, run start_linux.sh, select AMD when prompted and attempt to load the models via the web interface.

More info:

  • These are all GGUF models. I have not gotten around to testing any GPTQ models yet.
  • At least one of them is known to have worked on the GPT4All AppImage.
  • All my models are symlinked from a faster SSD. I do not believe this is related to the issue because one other model did load successfully (its replies were weird but that's a separate issue and may be the model's fault).
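
One hypothesis (not confirmed for this issue) for why symlinked models could be rejected with "The path to the model does not exist": Python's path-existence checks follow symlinks, so a link whose target has moved or is unreadable looks nonexistent even though the link entry is present. A minimal sketch:

```python
import os
import tempfile

# os.path.exists() follows symlinks: it reports on the *target*, not
# the link itself. A link whose target is missing therefore fails a
# "does the path exist?" check even though the link entry remains.
workdir = tempfile.mkdtemp()
target = os.path.join(workdir, "model.gguf")
link = os.path.join(workdir, "model-link.gguf")

open(target, "w").close()            # create a dummy target file
os.symlink(target, link)             # model-link.gguf -> model.gguf
before = os.path.exists(link)        # True: target is present

os.remove(target)                    # break the link
after = os.path.exists(link)         # False: exists() follows the link
link_itself = os.path.lexists(link)  # True: the link entry remains

print(before, after, link_itself)    # True False True
```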

I installed sentence_transformers to the local conda environment with pip as described in this issue, but it did not fix my problem.

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

  1. Clone the GitHub repo to a local directory.
  2. Run start_linux.sh from that directory.
  3. Select the option for AMD GPUs.
  4. Open the web interface.
  5. Quit the program because you forgot to put your models in the right place.
  6. Symlink the models to the correct place.
  7. Attempt to load a model.
  8. It fails with the error AttributeError: 'NoneType' object has no attribute 'lower'

Screenshot

(screenshot attached in the original issue)

Logs

❯ ./start_linux.sh
11:12:05-425595 INFO     Starting Text generation web UI                                                                

Running on local URL:  http://127.0.0.1:7860

Traceback (most recent call last):
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/queueing.py", line 541, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/route_utils.py", line 276, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/blocks.py", line 1928, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/blocks.py", line 1514, in call_function
    prediction = await anyio.to_thread.run_sync(
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2461, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 962, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 833, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/modules/ui_model_menu.py", line 334, in handle_load_model_event_final
    truncation_length = update_truncation_length(truncation_length, state)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/modules/ui_model_menu.py", line 318, in update_truncation_length
    if state['loader'].lower().startswith('exllama'):
       ^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'NoneType' object has no attribute 'lower'
11:12:19-207868 INFO     Loading "Chocolatine-14B-Instruct-DPO-v1.2.i1-Q3_K_S.gguf"                                     
11:12:19-209411 ERROR    The path to the model does not exist. Exiting.                                                 
11:12:19-210137 ERROR    Failed to load the model.                                                                      
Traceback (most recent call last):
  File "/opt/text-generation-webui/modules/ui_model_menu.py", line 214, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
                                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/text-generation-webui/modules/models.py", line 86, in load_model
    raise ValueError
ValueError

[The identical AttributeError traceback is printed twice more here; repeats elided.]
11:12:57-992110 INFO     Loading "internlm2_5-20b-chat-q4_0.gguf"
11:12:57-993564 ERROR    The path to the model does not exist. Exiting.
11:12:57-994358 ERROR    Failed to load the model.
[Identical ValueError traceback and two identical AttributeError tracebacks elided.]
11:13:02-514683 INFO     Loading "mistral-7b-openorca.Q4_0.gguf"
11:13:02-516271 ERROR    The path to the model does not exist. Exiting.
11:13:02-517026 ERROR    Failed to load the model.
[Identical ValueError traceback and two identical AttributeError tracebacks elided.]
11:13:09-346161 INFO     Loading "supernova-lite-v1.Q4_K_S.gguf"
11:13:09-347924 ERROR    The path to the model does not exist. Exiting.
11:13:09-348519 ERROR    Failed to load the model.
[Identical ValueError traceback and one identical AttributeError traceback elided.]
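
The crash itself is easy to reproduce in isolation: state['loader'] is None when no loader has been selected, and None has no .lower() method. A defensive guard (a sketch, not the project's actual fix) would treat a missing loader as a non-match instead of crashing:

```python
# state['loader'] is None when no loader has been selected, so the
# chained call state['loader'].lower() raises AttributeError.
state = {'loader': None}

try:
    state['loader'].lower().startswith('exllama')
except AttributeError as exc:
    message = str(exc)

# Hypothetical guard: treat a missing loader as "not exllama".
loader = state.get('loader')
is_exllama = bool(loader) and loader.lower().startswith('exllama')

print(message)      # 'NoneType' object has no attribute 'lower'
print(is_exllama)   # False
```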

System Info

Operating System: EndeavourOS 
KDE Plasma Version: 6.2.5
KDE Frameworks Version: 6.10.0
Qt Version: 6.8.1
Kernel Version: 6.12.10-zen1-1-zen (64-bit)
Graphics Platform: Wayland
Processors: 16 × AMD Ryzen 7 5700X 8-Core Processor
Memory: 31,3 GiB of RAM
Graphics Processor: AMD Radeon RX 6700 XT
Product Name: X570 Phantom Gaming 4
@prmbittencourt prmbittencourt added the bug Something isn't working label Jan 28, 2025
@jensgreven

First of all, you could try running a GGUF model without GPU support (CPU only) and see if that works, to rule out the symlink issue.
Also make sure you have the proper version of ROCm installed. As far as I remember, that should be ROCm 6.1, which needs a kernel no newer than 6.7; when I tried to get my AMD GPU running, I used Ubuntu 20.04 or so and made sure the kernel did not get updated.
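
For reference, CPU-only loading can be forced via a launch flag; assuming the webui's --cpu flag, the line below could be added to CMD_FLAGS.txt (or passed directly to start_linux.sh):

```
--cpu
```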

In my experience, getting TGWUI running on an AMD GPU was painful and the result was fragile at best: it worked one day, I had to reinstall, and it stopped working the next day (I think an update in llama.cpp might have been the culprit).

You might also try atinoda/text-generation-webui-docker in a Docker container (which is what I use now). It seems to be much less fragile and more robust.

@prmbittencourt
Author

I try to avoid Docker whenever possible. I dislike it because it makes small changes cumbersome (installing a pip package, for example).

I tried what you suggested and it worked. It's not ideal in terms of organization, but I'll try moving the TGWUI directory to the same SSD as my models so I can hard-link them, and see if I encounter any further issues. Thank you.

@jensgreven

Glad it worked :-)

I agree: if you want (or need) to fiddle with the installation, Docker is not a comfortable way to go. It just works for me because I am usually happy with the app as provided, and it was also a reason for me to learn "how to Docker".

You speak of hard linking... Maybe just providing the parameter (I hope I remember the right name) "--model-dir /path/to/my/models" in CMD_FLAGS.txt could do the trick for you. That might also free you from needing the models on the same SSD as your TGWUI install.
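
If the flag name is as remembered, the relevant line in CMD_FLAGS.txt would look like this (the path is illustrative):

```
--model-dir /path/to/my/models
```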

@prmbittencourt
Author

The --model-dir option worked! Thanks again. Oddly, I did not find a list of all the options that can be passed to the start_linux.sh script anywhere in the documentation, nor is this particular one listed when running the script with the --help option.

GPTQ models are not loading, but that is a separate issue and there are a few things I can try before bothering the nice people here for help. This issue can be closed.
