about inference time #214

Open
xlc-github opened this issue Oct 31, 2022 · 0 comments
xlc-github commented Oct 31, 2022

Hi, thank you for your code. I have tested your optimized_txt2img.py, and the inference time is indeed about 24-26 seconds per image. Could the inference time be decreased to 14-16 seconds if the SD model were split into two parts? If so, how can I do that? Thank you again!
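For reference, the per-image figures above can be measured with a monotonic wall-clock timer around the sampling call. A minimal sketch follows; `run_txt2img` is a hypothetical stand-in for the actual call in optimized_txt2img.py, not the repo's API:

```python
import time

def run_txt2img(prompt):
    # Hypothetical stand-in for the real sampling call in optimized_txt2img.py.
    time.sleep(0.01)  # simulate a short workload for the example
    return f"image for {prompt!r}"

def timed_generate(prompts):
    # Time each image with time.perf_counter, a monotonic high-resolution clock,
    # so the per-image latency is unaffected by system clock adjustments.
    results = []
    for prompt in prompts:
        start = time.perf_counter()
        image = run_txt2img(prompt)
        elapsed = time.perf_counter() - start
        results.append((image, elapsed))
    return results

for image, seconds in timed_generate(["a cat", "a dog"]):
    print(f"{image}: {seconds:.2f}s")
```

Timing each sample separately (rather than dividing total runtime by the batch size) also makes it easy to spot one-off costs such as model loading or the first CUDA kernel compilation, which would otherwise inflate the reported per-image number.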
