
Is this fork still relevant with Stable Diffusion V2? #227

Open
brucethemoose opened this issue Nov 27, 2022 · 4 comments

Comments


brucethemoose commented Nov 27, 2022

Is the new release more optimized than the original?

If it is now, is there any way to use these changes in the new release?


yoavain commented Nov 29, 2022

Stable Diffusion 2 is a totally different repository, so the question is whether there will be a new fork with similar improvements.


brucethemoose commented Nov 29, 2022

> Stable diffusion 2 is a totally different repository. So the question is if there's going to be a new fork with similar improvements.

I have since learned that at least some of these optimizations (and more) are in the WebUI Stable Diffusion fork: https://github.com/AUTOMATIC1111/stable-diffusion-webui/

There is also a newer fork of this very repo, but I can't get it to work, and it has GitHub issues disabled.


brucethemoose commented Nov 29, 2022

That updated repo is here: https://github.com/consciencia/stable-diffusion

There are so many SD forks that they are almost impossible to explore.


rsandx commented Dec 3, 2022

I believe this fork is still relevant for Stable Diffusion V2, because the V2 model is even larger and requires more memory to load in full. This fork works well with V1.4 and V1.5; I currently have it running at https://aitransformer.net/SuperStylizer with the V1.5 model, on a server that has only 8 GB of VRAM.

I inspected the V2 code at https://github.com/Stability-AI/stablediffusion, and the changes don't seem too big, so I updated the code the same way: splitting the model so it runs on the GPU one part at a time. Model loading works fine, but unfortunately the process gets killed every time I try to run a simple txt2img, probably because it runs out of memory.

......
UNet: Running in eps-prediction mode
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 5 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 640, context_dim is None and using 10 heads.
......
CondStage: Running in eps-prediction mode
Killed

It'd be great if the author could make it work for V2. I can share the code changes I have so far if needed.
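For readers unfamiliar with the "splitting the model to run on GPU one part at a time" idea mentioned above, here is a minimal, generic PyTorch sketch of that pattern: each stage lives on the CPU and is moved onto the compute device only while it executes. The stage names here are placeholder modules, not this fork's actual code; a real Stable Diffusion pipeline would offload the text encoder, UNet, and VAE decoder this way.

```python
import torch
import torch.nn as nn

def run_offloaded(stages, x, device):
    """Run a list of modules sequentially, holding only one on `device` at a time."""
    for stage in stages:
        stage.to(device)                 # load this part onto the GPU
        with torch.no_grad():
            x = stage(x.to(device))
        stage.to("cpu")                  # free VRAM before the next part
        if device != "cpu" and torch.cuda.is_available():
            torch.cuda.empty_cache()     # release cached blocks back to the driver
    return x.cpu()

if __name__ == "__main__":
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Hypothetical tiny pipeline standing in for text encoder / UNet / decoder.
    stages = [nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4)]
    out = run_offloaded(stages, torch.randn(2, 8), device)
    print(out.shape)  # torch.Size([2, 4])
```

The trade-off is speed for memory: the host-to-device copies add latency on every step, but peak VRAM is bounded by the largest single stage rather than the whole model, which is why the approach fits an 8 GB card for V1.x.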
