This repository is a modified version of the original DiT repository, aimed at faster and more efficient inference via quantization and GPU optimizations. It is not intended for training new models.
Paper | Project Page | Run DiT-XL/2
Goal: 4-bit (NF4, INT4, or FP4) weight storage and 8-bit inference using int8 matrix multiplication for the Diffusion Transformer (DiT) model.
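The "4-bit storage" half of this goal comes down to packing two signed 4-bit values into each byte, halving weight memory relative to int8. The repo does not ship this helper; the following is a minimal NumPy sketch (function names `pack_int4`/`unpack_int4` are illustrative, not part of any library):

```python
import numpy as np

def pack_int4(values: np.ndarray) -> np.ndarray:
    """Pack an even-length array of int4 values (range [-8, 7]) into bytes,
    two values per byte: low nibble first, high nibble second."""
    assert values.size % 2 == 0
    u = (values.astype(np.int8) & 0x0F).astype(np.uint8)  # two's-complement nibbles
    return (u[0::2] | (u[1::2] << 4)).astype(np.uint8)

def unpack_int4(packed: np.ndarray) -> np.ndarray:
    """Inverse of pack_int4: recover signed int4 values from packed bytes."""
    lo = (packed & 0x0F).astype(np.int8)
    hi = ((packed >> 4) & 0x0F).astype(np.int8)
    out = np.empty(packed.size * 2, dtype=np.int8)
    out[0::2], out[1::2] = lo, hi
    return np.where(out > 7, out - 16, out).astype(np.int8)  # sign-extend 4-bit values

w = np.array([-8, -1, 0, 3, 7, -5], dtype=np.int8)
packed = pack_int4(w)          # 3 bytes instead of 6
restored = unpack_int4(packed)
```

NF4 and FP4 use the same packing but map each 4-bit code to a non-uniform grid of float values instead of the integers -8..7.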
- Literature review on quantization methods: quant.md
- Use bitsandbytes (bnb) to check int8 and int4 quantization: bnb.md
- Try simple quantization for storage and measure its impact on inference latency and quality (since the model is small, there are no outliers).
- Use the tensor-int8 package for int8 matrix multiplication.
- Check AMD support.
- Use Q-Diffusion to improve the quantization of the model.
- Use GPTQ / exllamav2 kernels for int8 matrix multiplication.
- Research further improvements to the model's quantization.
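The "simple quantization" item above boils down to symmetric int8 quantization with int32 accumulation. As a minimal NumPy sketch of the idea, under the assumption of per-output-channel weight scales and a per-tensor activation scale (function names are illustrative; the real path would use int8 GPU kernels from bnb, tensor-int8, or exllamav2):

```python
import numpy as np

def quantize_per_channel(w: np.ndarray):
    """Symmetric per-output-channel int8 quantization of a weight matrix
    w of shape (out_features, in_features). Returns int8 weights and
    per-channel float scales."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def int8_linear(x: np.ndarray, q_w: np.ndarray, w_scale: np.ndarray) -> np.ndarray:
    """Emulate an int8 x int8 -> int32 matmul followed by float dequantization.
    x: (batch, in_features) float activations."""
    x_scale = np.abs(x).max() / 127.0  # per-tensor activation scale
    q_x = np.clip(np.round(x / x_scale), -127, 127).astype(np.int8)
    acc = q_x.astype(np.int32) @ q_w.T.astype(np.int32)  # int32 accumulation
    return acc * (x_scale * w_scale.T)                   # dequantize to float

rng = np.random.default_rng(0)
w = rng.standard_normal((16, 32)).astype(np.float32)
x = rng.standard_normal((4, 32)).astype(np.float32)
q_w, w_scale = quantize_per_channel(w)
y = int8_linear(x, q_w, w_scale)
err = np.abs(y - x @ w.T).max()  # small for outlier-free weights
```

With no activation outliers (as noted above for small models), this naive scheme already tracks the fp32 result closely; techniques like Q-Diffusion and GPTQ matter when rounding error or outliers start to dominate.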
First, download and set up the repo:

```bash
git clone https://github.com/chuanyangjin/fast-DiT.git
cd fast-DiT
```
We provide an `environment.yml` file that can be used to create a Conda environment. If you only want to run pre-trained models locally on CPU, you can remove the `cudatoolkit` and `pytorch-cuda` requirements from the file.

```bash
conda env create -f environment.yml
conda activate DiT
```
**Pre-trained DiT checkpoints.** You can sample from our pre-trained DiT models with `sample.py`. Weights for our pre-trained DiT model will be automatically downloaded depending on the model you use. The script has various arguments to switch between the 256x256 and 512x512 models, adjust sampling steps, change the classifier-free guidance scale, etc. For example, to sample from our 512x512 DiT-XL/2 model, you can use:

```bash
python sample.py --image-size 512 --seed 1
```
For convenience, our pre-trained DiT models can be downloaded directly here as well:
| DiT Model | Image Resolution | FID-50K | Inception Score | Gflops |
|-----------|------------------|---------|-----------------|--------|
| XL/2      | 256x256          | 2.27    | 278.24          | 119    |
| XL/2      | 512x512          | 3.04    | 240.82          | 525    |
**Custom DiT checkpoints.** If you've trained a new DiT model with `train.py` (see below), you can add the `--ckpt` argument to use your own checkpoint instead. For example, to sample from the EMA weights of a custom 256x256 DiT-L/4 model, run:

```bash
python sample.py --model DiT-L/4 --image-size 256 --ckpt /path/to/model.pt
```
We include a `sample_ddp.py` script which samples a large number of images from a DiT model in parallel. This script generates a folder of samples as well as a `.npz` file which can be directly used with ADM's TensorFlow evaluation suite to compute FID, Inception Score and other metrics. For example, to sample 50K images from our pre-trained DiT-XL/2 model over N GPUs, run:

```bash
torchrun --nnodes=1 --nproc_per_node=N sample_ddp.py --model DiT-XL/2 --num-fid-samples 50000
```
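Before handing the archive to the evaluation suite, it can be useful to sanity-check its contents. A small sketch, assuming the samples are stored as a uint8 array of shape `[num_samples, H, W, 3]` under the key `arr_0` (the key name is an assumption about how the archive was saved; here we build a stand-in file rather than run `sample_ddp.py`):

```python
import os
import tempfile

import numpy as np

# Stand-in for the archive sample_ddp.py would produce.
path = os.path.join(tempfile.mkdtemp(), "samples.npz")
fake_samples = np.zeros((8, 256, 256, 3), dtype=np.uint8)
np.savez(path, arr_0=fake_samples)

# Sanity-check shape and dtype before running FID evaluation.
with np.load(path) as data:
    samples = data["arr_0"]
assert samples.ndim == 4 and samples.shape[-1] == 3
assert samples.dtype == np.uint8
```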
There are several additional options; see `sample_ddp.py` for details.