Describe the bug

I'm trying to load this model: https://huggingface.co/deepseek-ai/Janus-Pro-7B

I'm getting this error:

ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

Do I need to upgrade the transformers package? If so, how do I do that within oobabooga?
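For what it's worth, a sketch of how the upgrade is usually done with the one-click installer (the `installer_files\env` path in the traceback suggests that setup). It assumes the standard `cmd_windows.bat` helper script exists in your oobabooga folder; treat that script name as an assumption about your install:

```shell
# Run these inside the shell opened by cmd_windows.bat in the
# oobabooga folder, so pip targets installer_files\env rather
# than any system-wide Python.

# Upgrade to the latest released Transformers:
python -m pip install --upgrade transformers

# If the checkpoint is newer than any release, try the development
# version instead (this is what the error message itself suggests):
python -m pip install git+https://github.com/huggingface/transformers.git
```

Note that even after upgrading, a checkpoint whose `model_type` still isn't registered in your installed version won't load through `AutoConfig`; in that case the architecture may simply not be supported by that loader yet.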
Screenshot
No response
Logs
08:58:47-401818 INFO Loading "deepseek-ai_Janus-Pro-7B"
08:58:47-541349 ERROR Failed to load the model.
Traceback (most recent call last):
File "C:\git\votesentry\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1071, in from_pretrained
config_class = CONFIG_MAPPING[config_dict["model_type"]]
File "C:\git\votesentry\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 773, in __getitem__
raise KeyError(key)
KeyError: 'multi_modality'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\git\votesentry\oobabooga_windows\text-generation-webui\modules\ui_model_menu.py", line 214, in load_model_wrapper
shared.model, shared.tokenizer = load_model(selected_model, loader)
File "C:\git\votesentry\oobabooga_windows\text-generation-webui\modules\models.py", line 90, in load_model
output = load_func_map[loader](model_name)
File "C:\git\votesentry\oobabooga_windows\text-generation-webui\modules\models.py", line 152, in huggingface_loader
config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=shared.args.trust_remote_code)
File "C:\git\votesentry\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1073, in from_pretrained
raise ValueError(
ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
System Info
Windows 11
Nvidia RTX 3060 12GB