It should be easy to implement if you are familiar with diffusers pipelines. I just don't want to make this project too redundant. But I can give you some guidance soon in my free time!
@haofanwang I'd also really like to know how to do this. I don't quite see why I can't prepare my own latents for the pipeline's `latents=` option by using the VAE to encode an init image the way the img2img or inpaint pipelines do. I keep getting mismatched tensor sizes when the pipeline tries to add the ControlNet output to the sample in the timestep loop.
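For reference, here is roughly what I'm attempting (a rough, untested sketch against the stock diffusers `StableDiffusionControlNetPipeline`; model IDs and file names are placeholders). My current guess is that the mismatch comes from the init image, the control image, and the pipeline's `height`/`width` not all being the same resolution, since the ControlNet residuals have to match the UNet sample spatially:

```python
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

device = "cuda"
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to(device)

# Init image and control image must share the same height/width (divisible by 8).
init_image = load_image("init.png").resize((512, 512))
control_image = load_image("canny.png").resize((512, 512))

# Encode the init image the way the img2img pipeline does: scale to [-1, 1],
# NCHW, then VAE-encode and apply the latent scaling factor.
img = np.array(init_image).astype(np.float32) / 255.0
img = img * 2.0 - 1.0
img = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).to(device, dtype=torch.float16)
init_latents = pipe.vae.encode(img).latent_dist.sample() * pipe.vae.config.scaling_factor

# Noise the latents up to a starting timestep, like img2img's "strength".
num_steps, strength = 50, 0.7
pipe.scheduler.set_timesteps(num_steps, device=device)
start_timestep = pipe.scheduler.timesteps[int(num_steps * (1.0 - strength))]
noise = torch.randn_like(init_latents)
latents = pipe.scheduler.add_noise(init_latents, noise, start_timestep)

# Caveat: the txt2img ControlNet pipeline still runs the full timestep schedule
# and rescales provided `latents` by scheduler.init_noise_sigma, so this only
# approximates img2img behaviour rather than replacing a dedicated pipeline.
image = pipe(
    prompt="a photo of a cat",
    image=control_image,
    height=512,
    width=512,
    latents=latents,
    num_inference_steps=num_steps,
).images[0]
```

If someone can confirm whether noising the encoded latents like this is even the right approach for the txt2img ControlNet pipeline, that would help a lot.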
Any chance you can add an example of using ControlNet with img2img to the Colab doc (without inpainting)?
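For context, something along these lines is what I'm hoping the Colab could show. It's an untested sketch that assumes the upstream diffusers `StableDiffusionControlNetImg2ImgPipeline` rather than any class from this repo, and the model IDs and image paths are just placeholders:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetImg2ImgPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

init_image = load_image("init.png").resize((512, 512))
control_image = load_image("canny.png").resize((512, 512))

result = pipe(
    prompt="a photo of a cat",
    image=init_image,             # img2img source image
    control_image=control_image,  # ControlNet conditioning image
    strength=0.7,                 # how far to deviate from the init image
    num_inference_steps=50,
).images[0]
result.save("out.png")
```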
I followed the instructions and tried adding the StableDiffusionControlNetInpaintImg2ImgPipeline class, but without any luck: