Describe the bug
I wanted to try the new OpenGPT-X model by the EU:
https://huggingface.co/openGPT-X/Teuken-7B-instruct-research-v0.4
On loading, I first had to enable "trust-remote-code", and the model crashed with "can't create directory /home/app". I suppose that is an issue on the model author's side, though. For now I worked around it by manually creating the /home/app directory and making it readable and writable for the user.
The next problem is that I get a type error in the shared tokenizer: TypeError: object of type 'int' has no len(). The full log is provided below.
Is there an existing issue for this?
I have searched the existing issues
Reproduction
In the text-generation-webui, go to the Models tab.
In the download field on the right, enter "openGPT-X/Teuken-7B-instruct-research-v0.4" and press Download.
Wait for the download to finish.
I had to enable "load in 4-bit" and "trust remote code".
Go to the Chat tab, set the mode to instruct, and write something.
The model will crash.
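The crash can be reproduced without downloading the model by stubbing a tokenizer whose encode() returns a flat list of token ids instead of a batch, which is my guess at what this model's remote tokenizer code does. The FlatTokenizer class below is a hypothetical stand-in, not code from the webui or the model:

```python
# Hypothetical stand-in for a tokenizer whose encode() returns a flat
# 1-D list of token ids (no batch dimension) -- a guess at the behavior
# that triggers the webui error, not the model's actual code.
class FlatTokenizer:
    bos_token_id = 1

    def encode(self, text):
        return [1, 42, 43]  # 1-D: input_ids[0] is an int, not a sequence


tok = FlatTokenizer()
input_ids = tok.encode("hello")
try:
    # Mirrors the check in modules/text_generation.py line 144:
    ok = len(input_ids[0]) > 0 and input_ids[0][0] != tok.bos_token_id
except TypeError as e:
    print(e)  # object of type 'int' has no len()
```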
Screenshot
Logs
Traceback (most recent call last):
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/queueing.py", line 566, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/route_utils.py", line 261, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/blocks.py", line 1786, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/blocks.py", line 1350, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 583, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 576, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2441, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 943, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 559, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 742, in gen_wrapper
response = next(iterator)
^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/modules/chat.py", line 436, in generate_chat_reply_wrapper
for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
File "/media/AI/text-generation-webui/modules/chat.py", line 403, in generate_chat_reply
for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
File "/media/AI/text-generation-webui/modules/chat.py", line 348, in chatbot_wrapper
prompt = generate_chat_prompt(text, state, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/modules/chat.py", line 200, in generate_chat_prompt
encoded_length = get_encoded_length(prompt)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/modules/text_generation.py", line 189, in get_encoded_length
return len(encode(prompt)[0])
^^^^^^^^^^^^^^
File "/media/AI/text-generation-webui/modules/text_generation.py", line 144, in encode
if (len(input_ids[0]) > 0 and input_ids[0][0] != shared.tokenizer.bos_token_id) or len(input_ids[0]) == 0:
^^^^^^^^^^^^^^^^^
TypeError: object of type 'int' has no len()
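The check at modules/text_generation.py line 144 assumes encode() returned a batch (a list of lists or 2-D tensor), so that input_ids[0] is itself a sequence. If the model's custom tokenizer returns a flat list of ints, input_ids[0] is an int and len() raises exactly this error. Under that assumption, one possible defense is to normalize the shape before the check; this is a sketch of the idea, not a tested patch against the webui:

```python
# Sketch of a guard for the webui's encode() path: if the tokenizer
# returned a flat 1-D list of ints, add the batch dimension the rest
# of the code expects. Workaround idea only, assuming the tokenizer's
# output is the problem; not a confirmed fix.
def normalize_input_ids(input_ids):
    if len(input_ids) > 0 and isinstance(input_ids[0], int):
        return [input_ids]  # wrap: [id, id, ...] -> [[id, id, ...]]
    return input_ids


flat = [1, 42, 43]           # what the custom tokenizer seems to return
batched = normalize_input_ids(flat)
assert len(batched[0]) == 3  # len(input_ids[0]) no longer raises
```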
System Info
Pop!_OS 24 (Linux, Debian based)
NVIDIA RTX 3060