
[Bug]: Checking for unmatched keys when loading Lora is not a versioning system. #838

Open

zixaphir opened this issue Jul 7, 2024 · 3 comments

Comments


zixaphir commented Jul 7, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Some Lora will not load, failing with an erroneous "version mismatch" error, because they contain too many unmatched keys. The check responsible is:

    if len(lora_unmatch) > 12:
        print(f'[LORA] LoRA version mismatch for {model_flag}: {filename}')
        return model, clip

While this can prevent loading of Lora trained with newer methods that may not be fully implemented yet, the threshold of 12 unmatched keys is arbitrary and does not reflect reality: even when a Lora does not align 100% with what is already implemented, it often still works to a lesser but usable degree. Despite the documentation commentary in the referenced file, this code is neither reference-only nor taken wholesale from the originating repo, and the blocking check is unique to forge. Version checking to prevent incompatibility issues is understandable, but models such as 4th Tail, which are perfectly usable despite the missing keys, are blocked by this check.
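For illustration, a minimal sketch of a non-blocking variant of the check above, assuming lora_unmatch is the list of Lora keys that could not be mapped; model_flag and filename come from the existing snippet, while the function name and message wording are hypothetical:

    def report_unmatched_lora_keys(lora_unmatch, model_flag, filename):
        """Warn about unmatched Lora keys without refusing to load the rest."""
        if lora_unmatch:
            print(f'[LORA] {len(lora_unmatch)} unmatched keys for {model_flag}: {filename}')
            for key in lora_unmatch:
                print(f'[LORA]   skipping {key}')
        # the caller would then apply the keys that did match instead of returning early
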

Steps to reproduce the problem

For the example Lora listed above,

  1. Load Forge with a PonyXL-based model or mix.
  2. Place the 4th Tail Lora in the prompt.
  3. Hit Generate.

What should have happened?

The 4th Tail Lora should apply usable weights to the running PonyXL-based model.

What browsers do you use to access the UI?

Mozilla Firefox

Sysinfo

N/A

Console logs

To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 15.9s (prepare environment: 2.1s, import torch: 3.6s, import gradio: 0.5s, setup paths: 0.6s, other imports: 0.3s, list SD models: 0.3s, load scripts: 2.8s, scripts before_ui_callback: 1.3s, create ui: 3.6s, gradio launch: 0.2s, app_started_callback: 0.5s).
X/Y/Z plot will create 2 images on 1 1x2 grid. (Total steps to process: 40)
[LORA] LoRA version mismatch for SDXL: ###/Lora/4th_tail_v0.4.0_lyco_extract_l.safetensors

Additional information

No response

@Panchovix

Wondering, if you remove or increase the limit, does the 4th Tail extract work?


zixaphir commented Jul 8, 2024

Wondering, if you remove or increase the limit, does the 4th Tail extract work?

Does it load the weights of implemented keys? Yes.
Does it load every key? No. In my testing, concepts from the Lora appeared to work, but I noticed anatomy errors and a slightly "smeared" look. However, removing these checks makes it work identically to ComfyUI, minus a few additions Comfy currently has for Cascade, SD3, and Transformers 2/3 Lora that are not present in the current ldm_patched/modules/lora.py.

I also verified that, while Comfy likewise fails to load many of the keys present in 4th Tail, it loads the Lora anyway.
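
To make the behaviour described above concrete, here is a minimal sketch of the "apply what matches, warn on the rest" approach, assuming a key_map dict (Lora key -> model key) and a lora_sd state dict as stand-ins for whatever the loader actually builds; the function name, variables, and messages are hypothetical, not code from either repo:

    def split_lora_keys(lora_sd, key_map):
        """Split a Lora state dict into keys that map onto the model and keys that do not."""
        matched, unmatched = {}, []
        for lora_key, tensor in lora_sd.items():
            target = key_map.get(lora_key)
            if target is None:
                unmatched.append(lora_key)   # report later, but do not abort the load
            else:
                matched[target] = tensor     # still apply everything that maps
        return matched, unmatched

    matched, unmatched = split_lora_keys(
        {'implemented.key': 1.0, 'new.arch.key': 2.0},
        {'implemented.key': 'model.block.weight'},
    )
    for key in unmatched:
        print('lora key not loaded:', key)   # warn instead of refusing to load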

@Panchovix

I did a "fix" into reForge for this, can you try if it works?

Panchovix@9cdd94e#diff-7bb995d67f7049a5bbeaac6560d59c575080b09ebd818c819c38a39b9533c673
