
[Feature]: Support for Flux depth and canny loras #3714

Open
2 tasks done
SAC020 opened this issue Jan 21, 2025 · 1 comment
Labels
enhancement New feature or request

Comments


SAC020 commented Jan 21, 2025

Issue Description

Using the Flux dev model with on-the-fly BnB quantization and balanced offload.

Trying to use either the Flux Depth or Canny LoRA fails with:

07:02:29-530344 WARNING Load network: type=LoRA name="flux1-canny-dev-lora" type=set() unmatched=1094 matched=0
07:02:29-536960 ERROR Load network: type=LoRA name="flux1-canny-dev-lora" detected=f1 failed
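The `unmatched=1094 matched=0` counts indicate that none of the LoRA file's tensor keys could be mapped onto modules of the loaded base model. A minimal sketch of how such a matched/unmatched tally arises (the `match_lora_keys` helper and the key names are illustrative assumptions, not SD.Next's actual matcher; the Depth/Canny control LoRAs reportedly also ship full non-LoRA tensors such as norm scales and a re-shaped `x_embedder`, which a plain key matcher would skip):

```python
def match_lora_keys(lora_keys, model_modules):
    """Map LoRA tensor keys onto model module names by stripping the
    LoRA-specific suffixes; anything without a target counts as unmatched."""
    matched, unmatched = 0, 0
    for key in lora_keys:
        # typical diffusers-style LoRA keys end in .lora_A.weight / .lora_B.weight
        base = key.replace(".lora_A.weight", "").replace(".lora_B.weight", "")
        if base in model_modules:
            matched += 1
        else:
            unmatched += 1
    return matched, unmatched

# Synthetic illustration: one standard LoRA key that matches, plus two
# control-LoRA-style full tensors that a plain matcher cannot place.
model = {"transformer_blocks.0.attn.to_q"}
keys = [
    "transformer_blocks.0.attn.to_q.lora_A.weight",
    "norm_out.linear.weight",
    "x_embedder.weight",
]
print(match_lora_keys(keys, model))  # (1, 2)
```

With every one of the 1094 keys failing to match, the loader gives up and reports the LoRA as failed rather than applying a partial patch.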

Version Platform Description

06:59:32-163410 INFO Python: version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\python.exe"
venv="C:\ai\automatic\venv"
06:59:32-398015 INFO Version: app=sd.next updated=2025-01-16 hash=e22d0789 branch=master
url=https://github.com/vladmandic/automatic/tree/master ui=main
06:59:33-066291 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
release=Windows-10-10.0.26100-SP0 python=3.11.9 docker=False

Relevant log output

PS C:\ai\automatic> .\webui.bat --debug
Using VENV: C:\ai\automatic\venv
06:59:32-158598 INFO     Starting SD.Next
06:59:32-161908 INFO     Logger: file="C:\ai\automatic\sdnext.log" level=DEBUG size=65 mode=create
06:59:32-163410 INFO     Python: version=3.11.9 platform=Windows bin="C:\ai\automatic\venv\Scripts\python.exe"
                         venv="C:\ai\automatic\venv"
06:59:32-398015 INFO     Version: app=sd.next updated=2025-01-16 hash=e22d0789 branch=master
                         url=https://github.com/vladmandic/automatic/tree/master ui=main
06:59:33-066291 INFO     Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows
                         release=Windows-10-10.0.26100-SP0 python=3.11.9 docker=False
06:59:33-069313 DEBUG    Packages: venv=venv site=['venv', 'venv\\Lib\\site-packages']
06:59:33-070888 INFO     Args: ['--debug']
06:59:33-071885 DEBUG    Setting environment tuning
06:59:33-072884 DEBUG    Torch allocator: "garbage_collection_threshold:0.80,max_split_size_mb:512"
06:59:33-083746 DEBUG    Torch overrides: cuda=False rocm=False ipex=False directml=False openvino=False zluda=False
06:59:33-096881 INFO     CUDA: nVidia toolkit detected
06:59:33-274512 INFO     Install: verifying requirements
06:59:33-324837 DEBUG    Timestamp repository update time: Thu Jan 16 18:54:17 2025
06:59:33-326324 INFO     Startup: standard
06:59:33-327423 INFO     Verifying submodules
06:59:36-502867 DEBUG    Git submodule: extensions-builtin/sd-extension-chainner / main
06:59:36-598677 DEBUG    Git submodule: extensions-builtin/sd-extension-system-info / main
06:59:36-692804 DEBUG    Git submodule: extensions-builtin/sd-webui-agent-scheduler / main
06:59:36-833407 DEBUG    Git detached head detected: folder="extensions-builtin/sdnext-modernui" reattach=main
06:59:36-834895 DEBUG    Git submodule: extensions-builtin/sdnext-modernui / main
06:59:36-932952 DEBUG    Git submodule: extensions-builtin/stable-diffusion-webui-rembg / master
06:59:37-024661 DEBUG    Git submodule: modules/k-diffusion / master
06:59:37-122377 DEBUG    Git submodule: wiki / master
06:59:37-176912 DEBUG    Register paths
06:59:37-258164 DEBUG    Installed packages: 188
06:59:37-259444 DEBUG    Extensions all: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
06:59:37-501037 DEBUG    Extension installer: C:\ai\automatic\extensions-builtin\sd-webui-agent-scheduler\install.py
06:59:39-770861 DEBUG    Extension installer: C:\ai\automatic\extensions-builtin\stable-diffusion-webui-rembg\install.py
06:59:46-144164 DEBUG    Extensions all: []
06:59:46-145155 INFO     Extensions enabled: ['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
                         'sd-webui-agent-scheduler', 'sdnext-modernui', 'stable-diffusion-webui-rembg']
06:59:46-146147 INFO     Install: verifying requirements
06:59:46-147139 DEBUG    Setup complete without errors: 1737435586
06:59:46-152099 DEBUG    Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
06:59:46-153587 INFO     Command line args: ['--debug'] debug=True args=[]
06:59:46-154579 DEBUG    Env flags: []
06:59:46-155571 DEBUG    Starting module: <module 'webui' from 'C:\\ai\\automatic\\webui.py'>
06:59:54-028174 INFO     Device detect: memory=24.0 default=balanced
06:59:54-034266 DEBUG    Read: file="config.json" json=39 bytes=1793 time=0.000 fn=<module>:load
06:59:54-334309 INFO     Engine: backend=Backend.DIFFUSERS compute=cuda device=cuda attention="Scaled-Dot-Product"
                         mode=no_grad
06:59:54-335811 DEBUG    Read: file="html\reference.json" json=63 bytes=33413 time=0.000
                         fn=_call_with_frames_removed:<module>
06:59:54-389911 INFO     Torch parameters: backend=cuda device=cuda config=Auto dtype=torch.bfloat16 context=no_grad
                         nohalf=False nohalfvae=False upcast=False deterministic=False fp16=pass bf16=pass
                         optimization="Scaled-Dot-Product"
06:59:54-719671 DEBUG    ONNX: version=1.20.1 provider=CUDAExecutionProvider, available=['AzureExecutionProvider',
                         'CPUExecutionProvider']
06:59:54-901564 INFO     Device: device=NVIDIA GeForce RTX 4090 n=1 arch=sm_90 capability=(8, 9) cuda=12.4 cudnn=90100
                         driver=566.36
06:59:55-688443 INFO     Torch: torch==2.5.1+cu124 torchvision==0.20.1+cu124
06:59:55-689948 INFO     Packages: diffusers==0.33.0.dev0 transformers==4.47.1 accelerate==1.2.1 gradio==3.43.2
06:59:55-851881 DEBUG    Entering start sequence
06:59:55-856634 DEBUG    Initializing
06:59:55-865597 DEBUG    Read: file="metadata.json" json=172 bytes=411331 time=0.006 fn=initialize:init_metadata
06:59:55-867849 DEBUG    Huggingface cache: path="C:\Users\sebas\.cache\huggingface\hub"
06:59:55-952855 INFO     Available VAEs: path="models\VAE" items=0
06:59:55-955803 INFO     Available UNets: path="models\UNET" items=0
06:59:55-958325 INFO     Available TEs: path="models\Text-encoder" items=4
06:59:55-971068 INFO     Available Models: items=16 safetensors="models\Stable-diffusion":9
                         diffusers="models\Diffusers":7 time=0.01
06:59:56-003730 INFO     Available Styles: folder="models\styles" items=288 time=0.03
06:59:56-136184 INFO     Available Yolo: path="models\yolo" items=7 downloaded=3
06:59:56-139160 DEBUG    Extensions: disabled=['sdnext-modernui']
06:59:56-140648 INFO     Load extensions
06:59:56-254134 INFO     Available LoRAs: path="models\Lora" items=163 folders=3 time=0.01
06:59:56-501075 DEBUG    Register network: type=LoRA method=legacy
06:59:57-414029 INFO     Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
                         sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
06:59:57-418338 DEBUG    Extensions init time: total=1.28 sd-webui-agent-scheduler=0.86 Lora=0.27
06:59:57-428199 DEBUG    Read: file="html/upscalers.json" json=4 bytes=2672 time=0.001 fn=__init__:__init__
06:59:57-429847 DEBUG    Read: file="extensions-builtin\sd-extension-chainner\models.json" json=24 bytes=2719 time=0.000
                         fn=__init__:find_scalers
06:59:57-432713 DEBUG    chaiNNer models: path="models\chaiNNer" defined=24 discovered=0 downloaded=8
06:59:57-434726 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="1x-ITF-SkinDiffDetail-Lite-v1"
                         path="models\ESRGAN\1x-ITF-SkinDiffDetail-Lite-v1.pth"
06:59:57-435747 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4xNMKDSuperscale_4xNMKDSuperscale"
                         path="models\ESRGAN\4xNMKDSuperscale_4xNMKDSuperscale.pth"
06:59:57-436752 DEBUG    Upscaler type=ESRGAN folder="models\ESRGAN" model="4x_NMKD-Siax_200k"
                         path="models\ESRGAN\4x_NMKD-Siax_200k.pth"
06:59:57-440271 INFO     Available Upscalers: items=55 downloaded=11 user=3 time=0.02 types=['None', 'Lanczos',
                         'Nearest', 'ChaiNNer', 'AuraSR', 'ESRGAN', 'RealESRGAN', 'SCUNet', 'SD', 'SwinIR']
06:59:57-445234 DEBUG    UI start sequence
06:59:57-445974 WARNING  Networks: type=lora method=legacy
06:59:57-448030 INFO     UI theme: type=Standard name="black-teal" available=13
06:59:57-456852 DEBUG    UI theme: css="C:\ai\automatic\javascript\black-teal.css" base="sdnext.css" user="None"
06:59:57-459346 DEBUG    UI initialize: txt2img
06:59:57-735371 DEBUG    Networks: page='model' items=78 subfolders=2 tab=txt2img folders=['models\\Stable-diffusion',
                         'models\\Diffusers', 'models\\Reference'] list=0.07 thumb=0.01 desc=0.01 info=0.00 workers=8
06:59:57-740560 DEBUG    Networks: page='lora' items=163 subfolders=0 tab=txt2img folders=['models\\Lora',
                         'models\\LyCORIS'] list=0.08 thumb=0.02 desc=0.08 info=0.05 workers=8
06:59:57-749899 DEBUG    Networks: page='style' items=288 subfolders=1 tab=txt2img folders=['models\\styles', 'html']
                         list=0.07 thumb=0.00 desc=0.00 info=0.00 workers=8
06:59:57-753897 DEBUG    Networks: page='embedding' items=13 subfolders=0 tab=txt2img folders=['models\\embeddings']
                         list=0.05 thumb=0.02 desc=0.01 info=0.00 workers=8
06:59:57-755800 DEBUG    Networks: page='vae' items=0 subfolders=0 tab=txt2img folders=['models\\VAE'] list=0.00
                         thumb=0.00 desc=0.00 info=0.00 workers=8
06:59:57-757691 DEBUG    Networks: page='history' items=0 subfolders=0 tab=txt2img folders=[] list=0.00 thumb=0.00
                         desc=0.00 info=0.00 workers=8
06:59:58-052247 DEBUG    UI initialize: img2img
06:59:58-245923 DEBUG    UI initialize: control models=models\control
06:59:58-746674 DEBUG    Read: file="ui-config.json" json=0 bytes=2 time=0.000 fn=__init__:read_from_file
06:59:59-118995 DEBUG    Reading failed: C:\ai\automatic\html\extensions.json [Errno 2] No such file or directory:
                         'C:\\ai\\automatic\\html\\extensions.json'
06:59:59-119873 INFO     Extension list is empty: refresh required
06:59:59-647923 DEBUG    Extension list: processed=6 installed=6 enabled=5 disabled=1 visible=6 hidden=0
07:00:00-007507 DEBUG    Root paths: ['C:\\ai\\automatic']
07:00:00-104601 INFO     Local URL: http://127.0.0.1:7860/
07:00:00-107081 DEBUG    API middleware: [<class 'starlette.middleware.base.BaseHTTPMiddleware'>, <class
                         'starlette.middleware.gzip.GZipMiddleware'>]
07:00:00-110106 DEBUG    API initialize
07:00:00-312482 INFO     [AgentScheduler] Task queue is empty
07:00:00-313663 INFO     [AgentScheduler] Registering APIs
07:00:00-450242 DEBUG    Scripts setup: time=0.396 ['K-Diffusion Samplers:0.116', 'XYZ Grid:0.042', 'IP Adapters:0.036',
                         'Face: Multiple ID Transfers:0.017', 'Video: CogVideoX:0.011', 'FreeScale: Tuning-Free Scale
                         Fusion:0.01']
07:00:00-451240 DEBUG    Model metadata: file="metadata.json" no changes
07:00:00-453856 DEBUG    Model requested: fn=run:<lambda>
07:00:00-454992 INFO     Load model: select="Diffusers\black-forest-labs/FLUX.1-dev [0ef5fff789]"
07:00:00-459306 DEBUG    Load model: type=FLUX model="Diffusers\black-forest-labs/FLUX.1-dev"
                         repo="black-forest-labs/FLUX.1-dev" unet="None" te="None" vae="Automatic" quant=none
                         offload=balanced dtype=torch.bfloat16
07:00:00-984437 INFO     HF login: token="C:\Users\sebas\.cache\huggingface\token"
07:00:01-182852 DEBUG    GC: current={'gpu': 1.59, 'ram': 1.02, 'oom': 0} prev={'gpu': 1.6, 'ram': 1.02} load={'gpu': 7,
                         'ram': 2} gc={'gpu': 0.01, 'py': 11108} fn=load_diffuser_force:load_flux why=force time=0.20
07:00:01-185045 DEBUG    Load model: type=FLUX cls=FluxPipeline preloaded=[] revision=None
07:00:01-186472 DEBUG    Quantization: type=bitsandbytes version=0.45.0 fn=load_quants:create_bnb_config
07:00:01-188219 DEBUG    Quantization: module=all type=bnb dtype=nf4 storage=uint8
Diffusers 5971.96it/s ████████████████████ 100% 3/3 00:00 00:00 Fetching 3 files
07:00:23-722857 DEBUG    Quantization: module=transformer type=bnb dtype=nf4 storage=uint8
Downloading shards: 100%|██████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1981.72it/s]
Diffusers  3.86s/it █████████████ 100% 2/2 00:07 00:00 Loading checkpoint shards
07:00:31-975264 DEBUG    Quantization: module=t5 type=bnb dtype=nf4 storage=uint8
07:00:31-977744 DEBUG    Quantization: module=all type=bnb dtype=nf4 storage=uint8
Diffusers  8.36it/s ████████ 100% 7/7 00:00 00:00 Loading pipeline components...
07:00:33-140642 DEBUG    Setting model: component=VAE slicing=True
07:00:33-152384 DEBUG    Setting model: attention="Scaled-Dot-Product"
07:00:33-184271 INFO     Offload: type=balanced op=init watermark=0.25-0.7 gpu=5.997-16.793:23.99 cpu=63.920 limit=0.00
07:00:35-695446 DEBUG    Model module=text_encoder_2 type=T5EncoderModel dtype=torch.bfloat16
                         quant=QuantizationMethod.BITS_AND_BYTES params=2.748 size=5.683
07:00:38-581734 DEBUG    Model module=transformer type=FluxTransformer2DModel dtype=torch.bfloat16
                         quant=QuantizationMethod.BITS_AND_BYTES params=5.543 size=5.546
07:00:38-583956 DEBUG    Model module=text_encoder type=CLIPTextModel dtype=torch.bfloat16 quant=None params=0.115
                         size=0.229
07:00:38-586229 DEBUG    Model module=vae type=AutoencoderKL dtype=torch.bfloat16 quant=None params=0.078 size=0.156
07:00:38-681399 INFO     Model class=FluxPipeline modules=4 size=11.615
07:00:38-689936 INFO     Load model: time=total=38.23 load=32.68 move=5.50 native=1024 memory={'ram': {'used': 19.23,
                         'total': 63.92}, 'gpu': {'used': 1.64, 'total': 23.99}, 'retries': 0, 'oom': 0}
07:00:38-693527 DEBUG    Script init: ['system-info.py:app_started=0.08', 'task_scheduler.py:app_started=0.15']
07:00:38-694664 INFO     Startup time: total=66.94 checkpoint=38.24 torch=23.94 launch=14.57 installer=14.41
                         extensions=1.28 ui-extensions=0.63 ui-networks=0.46 ui-settings=0.33 ui-defaults=0.26
                         ui-txt2img=0.26 app-started=0.23 ui-control=0.23 libraries=0.15 ui-img2img=0.14 detailer=0.13
                         api=0.11 samplers=0.10 ui-models=0.06 ui-extras=0.06 ui-gallery=0.06
07:00:38-696951 DEBUG    Save: file="config.json" json=39 bytes=1729 time=0.004
07:01:40-991697 INFO     API None 200 http/1.1 GET /sdapi/v1/motd 127.0.0.1 0.1742
07:01:44-562451 INFO     API None 200 http/1.1 GET /sdapi/v1/sd-models 127.0.0.1 0.003
07:01:44-966370 INFO     Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)
                         AppleWebKit/537.36 (KHTML, like Gecko) Chrome/131.0.0.0 Safari/537.36 Edg/131.0.0.0
07:01:44-968989 INFO     API None 200 http/1.1 GET /sdapi/v1/start 127.0.0.1 0.0031
07:01:44-985326 INFO     UI: ready time=6.454
07:01:45-942677 DEBUG    UI: connected
07:01:59-740893 DEBUG    Server: alive=True requests=87 memory=19.24/63.92 status='running' task='Load'
                         timestamp='20250121070000' id='' job=0 jobs=0 total=1 step=0 steps=0 queued=0 uptime=129
                         elapsed=119.29 eta=None progress=0
07:02:26-539412 DEBUG    Pipeline class change: original=FluxPipeline target=FluxImg2ImgPipeline device=cpu
                         fn=process_images_inner:init
07:02:26-549007 DEBUG    Image resize: source=1024:1024 target=1024:1024 mode="Fixed" upscaler="None" type=image
                         time=0.00 fn=process_images_inner:init
07:02:26-566085 DEBUG    Sampler: "default" class=FlowMatchEulerDiscreteScheduler: {'num_train_timesteps': 1000,
                         'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15,
                         'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal':
                         None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False}
07:02:28-510537 DEBUG    Activate network: type=LoRA model="Diffusers\black-forest-labs/FLUX.1-dev [0ef5fff789]"
Load network: C:\ai\automatic\models\Lora\flux1-canny-dev-lora.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/1.2 GB -:--:--
07:02:28-641610 WARNING  Load network: type=LoRA name="flux1-canny-dev-lora" type=set() unmatched=1094 matched=0
07:02:28-648677 ERROR    Load network: type=LoRA name="flux1-canny-dev-lora" detected=f1 failed
07:02:29-239423 INFO     Base: pipeline=FluxImg2ImgPipeline task=IMAGE_2_IMAGE batch=1/1x1 set={'guidance_scale': 6,
                         'generator': 'cuda:[1778098256]', 'num_inference_steps': 67, 'output_type': 'latent', 'image':
                         [<PIL.Image.Image image mode=RGB size=1024x1024 at 0x23C56C1AA90>], 'strength': 0.3, 'width':
                         1024, 'height': 1024, 'parser': 'native', 'prompt': 'embeds'}
Load network: C:\ai\automatic\models\Lora\flux1-canny-dev-lora.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/1.2 GB -:--:--
07:02:29-530344 WARNING  Load network: type=LoRA name="flux1-canny-dev-lora" type=set() unmatched=1094 matched=0
07:02:29-536960 ERROR    Load network: type=LoRA name="flux1-canny-dev-lora" detected=f1 failed
Progress  1.08it/s ████▊                               14% 3/21 00:03 00:16 Base
07:02:34-565380 DEBUG    VAE load: type=approximate model="models\VAE-approx\model.pt"
Progress  1.29it/s █████████████████████████████████ 100% 21/21 00:16 00:00 Base
07:02:49-503344 DEBUG    Decode: vae="default" upcast=False slicing=True tiling=False latents=torch.Size([1, 16, 128,
                         128]):cuda:0:torch.bfloat16 time=1.302
07:02:49-541757 DEBUG    Pipeline class change: original=FluxImg2ImgPipeline target=FluxPipeline device=cpu
                         fn=process_images:process_images_inner
07:02:49-543741 INFO     Processed: images=1 its=0.87 time=23.01 timers={'pipeline': 17.93, 'move': 2.98, 'decode':
                         2.04, 'encode': 1.94, 'offload': 1.31, 'gc': 0.11} memory={'ram': {'used': 20.2, 'total':
                         63.92}, 'gpu': {'used': 2.6, 'total': 23.99}, 'retries': 0, 'oom': 0}
07:02:49-777362 DEBUG    GC: current={'gpu': 2.09, 'ram': 20.2, 'oom': 0} prev={'gpu': 2.6, 'ram': 20.2} load={'gpu': 9,
                         'ram': 32} gc={'gpu': 0.51, 'py': 478} fn=process_images:process_images_inner why=final
                         time=0.23
07:02:50-103087 DEBUG    Save temp: image="C:\Users\sebas\AppData\Local\Temp\gradio\tmpczxo9b4y.png" width=1024
                         height=1024 size=1677227
07:03:00-950044 DEBUG    Pipeline class change: original=FluxPipeline target=FluxImg2ImgPipeline device=cpu
                         fn=process_images_inner:init
07:03:00-962612 DEBUG    Image resize: source=1024:1024 target=1024:1024 mode="Fixed" upscaler="None" type=image
                         time=0.00 fn=process_images_inner:init
07:03:00-965051 DEBUG    Sampler: "default" class=FlowMatchEulerDiscreteScheduler: {'num_train_timesteps': 1000,
                         'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15,
                         'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal':
                         None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False}
Load network: C:\ai\automatic\models\Lora\flux1-depth-dev-lora.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/1.2 GB -:--:--
07:03:01-982519 WARNING  Load network: type=LoRA name="flux1-depth-dev-lora" type=set() unmatched=1094 matched=0
07:03:01-989746 ERROR    Load network: type=LoRA name="flux1-depth-dev-lora" detected=f1 failed
07:03:02-556206 INFO     Base: pipeline=FluxImg2ImgPipeline task=IMAGE_2_IMAGE batch=1/1x1 set={'guidance_scale': 6,
                         'generator': 'cuda:[2178962404]', 'num_inference_steps': 67, 'output_type': 'latent', 'image':
                         [<PIL.Image.Image image mode=RGB size=1024x1024 at 0x23C56F78310>], 'strength': 0.3, 'width':
                         1024, 'height': 1024, 'parser': 'native', 'prompt': 'embeds'}
Load network: C:\ai\automatic\models\Lora\flux1-depth-dev-lora.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/1.2 GB -:--:--
07:03:02-619535 WARNING  Load network: type=LoRA name="flux1-depth-dev-lora" type=set() unmatched=1094 matched=0
07:03:02-626374 ERROR    Load network: type=LoRA name="flux1-depth-dev-lora" detected=f1 failed
Progress  1.29it/s █████████████████████████████████ 100% 21/21 00:16 00:00 Base
07:03:19-942222 DEBUG    Decode: vae="default" upcast=False slicing=True tiling=False latents=torch.Size([1, 16, 128,
                         128]):cuda:0:torch.bfloat16 time=0.027
07:03:20-111577 DEBUG    Pipeline class change: original=FluxImg2ImgPipeline target=FluxPipeline device=cpu
                         fn=process_images:process_images_inner
07:03:20-113595 INFO     Processed: images=1 its=1.04 time=19.16 timers={'pipeline': 16.55, 'move': 1.67, 'offload':
                         1.32, 'decode': 0.9, 'encode': 0.85, 'gc': 0.11} memory={'ram': {'used': 20.16, 'total':
                         63.92}, 'gpu': {'used': 7.01, 'total': 23.99}, 'retries': 0, 'oom': 0}
07:03:20-365659 DEBUG    GC: current={'gpu': 2.09, 'ram': 20.16, 'oom': 0} prev={'gpu': 7.01, 'ram': 20.16} load={'gpu':
                         9, 'ram': 32} gc={'gpu': 4.92, 'py': 303} fn=process_images:process_images_inner why=final
                         time=0.25
07:03:20-690552 DEBUG    Save temp: image="C:\Users\sebas\AppData\Local\Temp\gradio\tmpnxuwm0zl.png" width=1024
                         height=1024 size=1687176

Backend: Diffusers
UI: Standard
Branch: Master
Model: FLUX.1

Acknowledgements

  • I have read the above and searched for existing issues
  • I confirm that this is classified correctly and it's not an extension issue
@vladmandic (Owner)

The changelog explicitly states that Depth/Canny LoRAs are NOT supported at the moment.


They are not normal LoRAs by any means. I could add support, but it would require the same kind of hand-holding for preparing input params as using the standalone Depth/Canny models, which is counter-intuitive for LoRA usage.
If there is large demand, I'll add it.
For now, I'm converting this issue into a feature request.
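The hand-holding refers to the control image these LoRAs expect: like the standalone Depth/Canny models, they condition generation on an edge map or depth map supplied alongside the prompt (in diffusers they are typically driven through `FluxControlPipeline` with a `control_image` rather than a plain LoRA load). A minimal sketch of producing a canny-style control input (a simple gradient-magnitude threshold, not a true Canny implementation; the `edge_control_image` helper is hypothetical, and real pipelines would typically use `cv2.Canny` or a depth estimator instead):

```python
import numpy as np

def edge_control_image(img: np.ndarray, threshold: float = 0.2) -> np.ndarray:
    """Crude edge map from a grayscale image in [0, 1]: compute the gradient
    magnitude and threshold it to a binary mask. Stand-in for Canny-style
    preprocessing of the control image."""
    gy, gx = np.gradient(img.astype(np.float32))
    mag = np.hypot(gx, gy)
    return (mag > threshold).astype(np.float32)

# Toy input: a white square on black; edges appear along the square's border.
img = np.zeros((64, 64), dtype=np.float32)
img[16:48, 16:48] = 1.0
edges = edge_control_image(img)
print(edges.shape, edges.sum() > 0)  # (64, 64) True
```

This preprocessing step is exactly what a plain "apply LoRA" workflow has no slot for, which is why supporting these LoRAs means more than just loading their weights.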

@vladmandic vladmandic changed the title from "[Issue]: Unable to load Flux depth and canny loras" to "[Feature]: Support for Flux depth and canny loras" on Jan 21, 2025
@vladmandic vladmandic added the enhancement New feature or request label Jan 21, 2025