
Transformers does not recognize multi-modality #6723

Open
1 task done
seajhawk opened this issue Feb 2, 2025 · 1 comment
Labels
bug Something isn't working

Comments


seajhawk commented Feb 2, 2025

Describe the bug

I'm trying to load this model: https://huggingface.co/deepseek-ai/Janus-Pro-7B

and I'm getting this error:
ValueError: The checkpoint you are trying to load has model type multi_modality but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

Do I need to upgrade the transformers package? If so, how do I do that within oobabooga?
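A quick way to find out whether an upgrade is even needed is to check which Transformers release the environment is actually using. This is a minimal sketch, assuming you run it with oobabooga's bundled Python interpreter; only the package name `transformers` is taken from the error above:

```python
import importlib.metadata

def installed_version(pkg: str) -> str:
    """Return the installed version of a package, or a marker if absent."""
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        return "not installed"

# Compare the printed version against the Transformers release notes for the
# architecture you need; if it predates support, an upgrade is required.
print(installed_version("transformers"))
```

If the printed version predates support for the model's architecture, upgrading inside that same environment (rather than any system-wide Python) is what matters.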

Is there an existing issue for this?

  • I have searched the existing issues

Reproduction

  1. Fresh install of oobabooga on Windows or macOS
  2. Download the model: https://huggingface.co/deepseek-ai/Janus-Pro-7B
  3. Load the model: deepseek-ai_Janus-Pro-7B
  4. See this error message:
08:58:47-401818 INFO     Loading "deepseek-ai_Janus-Pro-7B"
08:58:47-541349 ERROR    Failed to load the model.
Traceback (most recent call last):
  File "C:\git\votesentry\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1071, in from_pretrained
    config_class = CONFIG_MAPPING[config_dict["model_type"]]
  File "C:\git\votesentry\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 773, in __getitem__
    raise KeyError(key)
KeyError: 'multi_modality'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\git\votesentry\oobabooga_windows\text-generation-webui\modules\ui_model_menu.py", line 214, in load_model_wrapper
    shared.model, shared.tokenizer = load_model(selected_model, loader)
  File "C:\git\votesentry\oobabooga_windows\text-generation-webui\modules\models.py", line 90, in load_model
    output = load_func_map[loader](model_name)
  File "C:\git\votesentry\oobabooga_windows\text-generation-webui\modules\models.py", line 152, in huggingface_loader
    config = AutoConfig.from_pretrained(path_to_model, trust_remote_code=shared.args.trust_remote_code)
  File "C:\git\votesentry\oobabooga_windows\installer_files\env\lib\site-packages\transformers\models\auto\configuration_auto.py", line 1073, in from_pretrained
    raise ValueError(
ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

You can update Transformers with the command `pip install --upgrade transformers`. If this does not work, and the checkpoint is very new, then there may not be a release version that supports this model yet. In this case, you can get the most up-to-date code by installing Transformers from source with the command `pip install git+https://github.com/huggingface/transformers.git`
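For context, the `model_type` that triggers this lookup comes straight from the checkpoint's config.json. The following is a minimal sketch of the check `AutoConfig` performs; the inline JSON string is a stand-in for the real file shipped with the model:

```python
import json

def model_type_of(config_text: str) -> str:
    """Extract the model_type field AutoConfig uses to pick a config class."""
    return json.loads(config_text)["model_type"]

# Stand-in for the config.json of deepseek-ai/Janus-Pro-7B, which declares a
# type that older Transformers releases do not register:
example = '{"model_type": "multi_modality"}'
print(model_type_of(example))  # -> multi_modality
```

If this value is missing from the installed release's `CONFIG_MAPPING`, the `KeyError` above is re-raised as the `ValueError` the web UI reports.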

Screenshot

No response

Logs

Same traceback as in the Reproduction section above.

System Info

Windows 11
Nvidia RTX 3060 12GB
seajhawk added the bug label on Feb 2, 2025
@gnusupport

I would also like to run Janus-Pro-1B-LM
