> **Note:** This repository has been archived by the owner on Mar 14, 2023. It is now read-only.


# Stable Diffusion ONNX UI

A dead-simple GUI supporting the latest Diffusers release (v0.12.0) on Windows with AMD graphics cards (or the CPU, thanks to ONNX and DirectML). Works with Stable Diffusion 2.1 or any other model, including inpainting-finetuned ones.

Supported schedulers: DDIM, LMS, PNDM, Euler.

Built with Gradio.


## First installation

### Prerequisites

From an empty folder:

```shell
python -m venv venv
.\venv\Scripts\activate
python -m pip install --upgrade pip
pip install wheel wget
pip install git+https://github.com/huggingface/diffusers.git
pip install transformers onnxruntime onnx gradio torch ftfy spacy scipy OmegaConf accelerate
pip install onnxruntime-directml --force-reinstall
pip install protobuf==3.20.2
python -m wget https://raw.githubusercontent.com/JbPasquier/stable-diffusion-onnx-ui/main/app.py
python -m wget https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_original_stable_diffusion_to_diffusers.py -o convert_original_stable_diffusion_to_diffusers.py
python -m wget https://raw.githubusercontent.com/huggingface/diffusers/main/scripts/convert_stable_diffusion_checkpoint_to_onnx.py -o convert_stable_diffusion_checkpoint_to_onnx.py
python -m wget https://raw.githubusercontent.com/runwayml/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml -o v1-inference.yaml
python -m wget https://raw.githubusercontent.com/runwayml/stable-diffusion/main/configs/stable-diffusion/v1-inpainting-inference.yaml -o v1-inpainting-inference.yaml
mkdir model
```
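After the downloads, the working folder should contain the app, the two conversion scripts, the two YAML configs, and an empty `model/` directory. A minimal sanity-check sketch (the filenames are taken from the commands above; the helper itself is not part of the project):

```python
from pathlib import Path

# Files fetched by the wget commands above; model/ is created separately by mkdir.
EXPECTED = [
    "app.py",
    "convert_original_stable_diffusion_to_diffusers.py",
    "convert_stable_diffusion_checkpoint_to_onnx.py",
    "v1-inference.yaml",
    "v1-inpainting-inference.yaml",
]

def missing_files(folder: str) -> list[str]:
    """Return the expected files that are not present in `folder`."""
    root = Path(folder)
    return [name for name in EXPECTED if not (root / name).is_file()]
```

Running `missing_files(".")` from the install folder should return an empty list before you continue.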

## How to add models

### Stable Diffusion

```shell
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="stabilityai/stable-diffusion-2-1" --output_path="model/stable_diffusion_onnx"
```

### Stable Diffusion Inpainting

```shell
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="stabilityai/stable-diffusion-2-inpainting" --output_path="model/stable_diffusion_inpainting_onnx"
```

### Other from Hugging Face

```shell
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="nitrosocke/Nitro-Diffusion" --output_path="model/nitro_diffusion_onnx"
```

### Other from somewhere else

Replace `some_file.ckpt` with the path to your checkpoint file.

```shell
python convert_original_stable_diffusion_to_diffusers.py --checkpoint_path="./some_file.ckpt" --dump_path="./some_file"
python convert_stable_diffusion_checkpoint_to_onnx.py --model_path="./some_file" --output_path="model/some_onnx"
```
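Each converted model ends up as its own subfolder under `model/`. As a hedged sketch of how a UI could enumerate the available models (the directory layout comes from the commands above; the scanning logic is an assumption, not the actual app.py code), converted Diffusers pipelines carry a `model_index.json` at their root:

```python
from pathlib import Path

def list_onnx_models(model_dir: str = "model") -> list[str]:
    """Return subfolder names under model_dir that look like converted pipelines.

    A converted Diffusers pipeline contains a model_index.json at its root,
    which distinguishes it from stray folders.
    """
    root = Path(model_dir)
    if not root.is_dir():
        return []
    return sorted(p.name for p in root.iterdir()
                  if p.is_dir() and (p / "model_index.json").is_file())
```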

## Run

```shell
# Ensure that you are in the virtualenv
.\venv\Scripts\activate

# Your computer only
python app.py

# Local network
python app.py --local

# The whole internet
python app.py --share

# Use CPU instead of AMD GPU
python app.py --cpu-only
```
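The three network modes presumably map onto Gradio's `launch()` parameters, and `--cpu-only` onto the ONNX Runtime execution provider. A minimal sketch of how such flags could be parsed (the flag names come from this README; how app.py wires them internally is an assumption):

```python
import argparse

def parse_launch_options(argv: list[str]) -> tuple[dict, str]:
    """Translate the CLI flags into Gradio launch() kwargs and an ONNX provider."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--local", action="store_true",
                        help="listen on 0.0.0.0 so the local network can connect")
    parser.add_argument("--share", action="store_true",
                        help="create a public Gradio share link")
    parser.add_argument("--cpu-only", action="store_true",
                        help="use the CPU instead of DirectML (AMD GPU)")
    args = parser.parse_args(argv)

    launch_kwargs = {"share": args.share}
    if args.local:
        launch_kwargs["server_name"] = "0.0.0.0"  # bind on all interfaces
    # ONNX Runtime execution provider: DirectML drives AMD GPUs on Windows.
    provider = "CPUExecutionProvider" if args.cpu_only else "DmlExecutionProvider"
    return launch_kwargs, provider
```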

Note that inpainting produces much better results with a dedicated model such as stable-diffusion-inpainting.

## Updating

Remove the `venv` folder and the `*.py` files, then go through the First installation process again.

## Credits

Inspired by: