Replies: 3 comments
-
Hi! Sorry for the slow reply, and thanks for the scripts, they look awesome. I've been thinking a lot about import/export between Parseq and other tools, and your work is a brilliant kickstart for any users wanting such functionality for Blender and Ableton. During prompt overlap, the overlapping prompts are combined using Composable Diffusion: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#composable-diffusion . That is, the prompts are joined with AND, each with its own weight. Given you have 3 loras in each prompt, 2 overlapping prompts will result in 6 references to loras in the final prompt – and that's without counting the negative prompts. Even though each occurrence of the same duplicated lora will be weighted to a combined 1.0 via the composable-diffusion weights, I wonder whether the mere presence of so many lora references triggers issues...
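Roughly what that looks like during the overlap (the prompt text, lora names and weight values below are just illustrative, not literally what Parseq generates):

```python
# Sketch of how two overlapping prompts end up combined via Composable Diffusion's
# AND syntax. The weights per prompt come from the overlap position and sum to 1.0.
def compose_prompts(prompts_with_weights):
    """Join each (prompt, weight) pair with AND, as A1111's Composable Diffusion expects."""
    return " AND ".join(f"{prompt} :{weight:.2f}" for prompt, weight in prompts_with_weights)

prompt_a = "castle on a hill <lora:styleA:0.6> <lora:detail:0.4> <lora:film:0.3>"
prompt_b = "city at night <lora:styleA:0.6> <lora:detail:0.4> <lora:film:0.3>"

# Halfway through the overlap, each prompt contributes 0.5:
combined = compose_prompts([(prompt_a, 0.5), (prompt_b, 0.5)])
print(combined)
# The same three loras now appear twice (six references in total in the positive
# prompt alone), even though the composable-diffusion weights sum to 1.0.
```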
-
I'll make a mental note to test some things out. I can test the multi-lora setup; I also merged the loras into my model recently, so I can test that, and the model alone as well.
-
It's definitely the wacky lora that I used. I converted my model to a lora, then used that with hybrid versions. Here's everything working fine and overlapping amazingly: https://twitter.com/KewkD/status/1671178973194375169
-
I have a feeling this issue is related to my custom model, but I wanted to reach out to see if I could get more info that could rule that out. During prompt transitions it kinda freaks out and gets very incoherent. This only happens on the overlap frames though; before and after the transition it's fine.
One thing to note: in this video it's VERY bad and very noticeable: https://twitter.com/KewkD/status/1668070802137444352. I did some Frankenstein experiments with my model, extracting a lora out of it and messing around. Doing that led to the extreme version of the issue.
Here's a link to parseq https://sd-parseq.web.app/deforum?importRemote=Gw9ppvNb1gOA5qH5XH9joc5VmMQ2%2Fdoc-3645d65d-de01-4b7d-b4be-93c21e746204-1686534260011.json&token=5f7a5338-68fa-4ca2-8c68-3c96e8f66342
Here's the settings file: https://gist.github.com/KewkLW/a84546840e5e2599b5fb5e3703222a96
Here's the SRT https://gist.github.com/KewkLW/188318d37fb536c953d4bebc11396823
Here's a video with just my custom model; you can still see it, but it's not as bad: https://twitter.com/KewkD/status/1666607648400351233
I'm still testing on my end but wanted to get this up while it's still active in my head >__<
And lastly, if you are puzzled by my camera settings: I have a script I've been writing that exports keyframe data from Blender into importable JSON that I can then merge with parseq. Currently it writes every single frame, but I'm going to fix that so it only writes the first and last frames of runs with the same value.
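For anyone curious, the de-dup part is something along these lines (the property names and output path are just placeholders, not my exact script):

```python
# Sketch of a Blender-side keyframe export: sample one f-curve every frame,
# then keep only the first and last frame of each run of identical values
# before writing JSON that can be merged into Parseq.
import json
import bpy

def export_keyframes(obj, data_path, index, path="keyframes.json"):
    fcurve = obj.animation_data.action.fcurves.find(data_path, index=index)
    scene = bpy.context.scene
    samples = [(f, fcurve.evaluate(f)) for f in range(scene.frame_start, scene.frame_end + 1)]

    # Drop frames that sit in the middle of a run of unchanged values.
    keep = []
    for i, (frame, value) in enumerate(samples):
        prev_same = i > 0 and samples[i - 1][1] == value
        next_same = i < len(samples) - 1 and samples[i + 1][1] == value
        if not (prev_same and next_same):
            keep.append({"frame": frame, "value": value})

    with open(path, "w") as f:
        json.dump(keep, f, indent=2)

export_keyframes(bpy.context.active_object, "location", 0)  # e.g. X location
```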