Describe the bug
When using the sd_api_pictures extension's VRAM management, the LLM fails to reload after image generation.
Is there an existing issue for this?
I have searched the existing issues
Reproduction
Launch the webui with the sd_api_pictures extension enabled and tick "Manage VRAM". Prompt for a picture. The extension connects to Stable Diffusion and generates the image, but when it then tries to reload the LLM, it attempts to download from the Hugging Face Hub with the model name "None" and fails.
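A possible workaround would be to skip the reload when no model name is set. This is a hypothetical sketch, not the extension's actual code; `safe_reload` and its arguments are illustrative names standing in for the `give_VRAM_priority` / `reload_model` path seen in the traceback below:

```python
def safe_reload(model_name, reload_fn):
    """Reload the LLM only if a model was actually loaded before.

    Guards against model_name being None (or the string "None"), which
    otherwise sends a bogus download request to the Hugging Face Hub.
    """
    if model_name in (None, "None", ""):
        return False  # nothing to reload; avoid the spurious Hub lookup
    reload_fn(model_name)
    return True

# Usage sketch:
calls = []
assert safe_reload("None", calls.append) is False   # skipped, no Hub request
assert safe_reload("my-llama-7b", calls.append) is True
assert calls == ["my-llama-7b"]
```

A check like this in `give_VRAM_priority('LLM')` would turn the crash into a no-op when no model was ever loaded.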
Screenshot
No response
Logs
Prompting the image generator via the API on http://127.0.0.1:7861...
Requesting Auto1111 to vacate VRAM...
10:27:37-140727 INFO Loading "None"
Traceback (most recent call last):
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 406, in hf_raise_for_status
response.raise_for_status()
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/requests/models.py", line 1024, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/models/None/resolve/main/config.json
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/utils/hub.py", line 403, in cached_file
resolved_file = hf_hub_download(
^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 860, in hf_hub_download
return _hf_hub_download_to_cache_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 967, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1482, in _raise_on_head_call_error
raise head_call_error
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1374, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(
^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1294, in get_hf_file_metadata
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 278, in _request_wrapper
response = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 302, in _request_wrapper
hf_raise_for_status(response)
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 454, in hf_raise_for_status
raise _format(RepositoryNotFoundError, message, response) from e
huggingface_hub.errors.RepositoryNotFoundError: 401 Client Error. (Request ID: Root=1-676d75e9-3c3e6ec9747cee392881e703;4ae97159-9f33-487c-ac3e-c82ccc152a1e)
Repository Not Found for url: https://huggingface.co/models/None/resolve/main/config.json.
Please make sure you specified the correct `repo_id` and `repo_type`.
If you are trying to access a private or gated repo, make sure you are authenticated.
Invalid username or password.
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/queueing.py", line 580, in process_events
response = await route_utils.call_process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/route_utils.py", line 276, in call_process_api
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/blocks.py", line 1928, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/blocks.py", line 1526, in call_function
prediction = await utils.async_iteration(iterator)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 657, in async_iteration
return await iterator.__anext__()
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 650, in __anext__
return await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 2505, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 1005, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 633, in run_sync_iterator_async
return next(iterator)
^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/gradio/utils.py", line 816, in gen_wrapper
response = next(iterator)
^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/modules/chat.py", line 443, in generate_chat_reply_wrapper
for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
File "/home/mjherna/text-generation-webui/modules/chat.py", line 410, in generate_chat_reply
for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
File "/home/mjherna/text-generation-webui/modules/chat.py", line 384, in chatbot_wrapper
output['visible'][-1][1] = apply_extensions('output', output['visible'][-1][1], state, is_chat=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/modules/extensions.py", line 231, in apply_extensions
return EXTENSION_MAP[typ](*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/modules/extensions.py", line 89, in _apply_string_extensions
text = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/extensions/sd_api_pictures/script.py", line 219, in output_modifier
string = get_SD_pictures(string, state['character_menu']) + "\n" + text
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/extensions/sd_api_pictures/script.py", line 186, in get_SD_pictures
give_VRAM_priority('LLM')
File "/home/mjherna/text-generation-webui/extensions/sd_api_pictures/script.py", line 58, in give_VRAM_priority
reload_model()
File "/home/mjherna/text-generation-webui/modules/models.py", line 403, in reload_model
shared.model, shared.tokenizer = load_model(shared.model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/modules/models.py", line 93, in load_model
output = load_func_map[loader](model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/modules/models.py", line 155, in huggingface_loader
config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=shared.args.trust_remote_code)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/models/auto/configuration_auto.py", line 1021, in from_pretrained
config_dict, unused_kwargs = PretrainedConfig.get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/configuration_utils.py", line 590, in get_config_dict
config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/configuration_utils.py", line 649, in _get_config_dict
resolved_config_file = cached_file(
^^^^^^^^^^^^
File "/home/mjherna/text-generation-webui/installer_files/env/lib/python3.11/site-packages/transformers/utils/hub.py", line 426, in cached_file
raise EnvironmentError(
OSError: models/None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'
If this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`
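For what it's worth, the 401/RepositoryNotFoundError above looks like a side effect rather than an authentication problem. A minimal sketch of the likely mechanism (assuming the default `models/` directory; not the webui's actual code):

```python
from pathlib import Path

# shared.model_name is the string "None", so the webui builds the path
# models/None; since that folder doesn't exist locally, transformers
# falls through and treats the whole string as a Hub repo id, producing
# the URL https://huggingface.co/models/None/resolve/main/config.json.
model_name = "None"
path_to_model = Path("models") / model_name
is_local = path_to_model.is_dir()  # False: no such local folder
repo_id = path_to_model.as_posix()  # "models/None", sent to the Hub
```

So the "Invalid username or password" message is just the Hub's generic response to a nonexistent repo id.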
System Info
Pop!_OS with Cinnamon DE
Ryzen 7 7700x
GTX 4070