This repo collects distillation methods for diffusion models. Pull requests are welcome for any works (papers, repositories) missing from this list.
- [ICML] On Architectural Compression of Text-to-Image Diffusion Models [code]
- [ICML] Consistency Models [code]
- [ICML] Accelerating Diffusion-based Combinatorial Optimization Solvers by Progressive Distillation [code]
- [ICML] Towards Safe Self-Distillation of Internet-Scale Text-to-Image Diffusion Models [code]
- [ICME] Accelerating Diffusion Sampling with Classifier-based Feature Distillation [code]
- [CVPR] On Distillation of Guided Diffusion Models [code]
- [NeurIPS] SnapFusion: Text-to-Image Diffusion Model on Mobile Devices within Two Seconds
- [NeurIPS] Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models
- [PMLR] Fast Sampling of Diffusion Models via Operator Learning [code]
- [arXiv] BOOT: Data-free Distillation of Denoising Diffusion Models with Bootstrapping
- [arXiv] TRACT: Denoising Diffusion Models with Transitive Closure Time-Distillation
- [arXiv] Catch-Up Distillation: You Only Need to Train Once for Accelerating Sampling [code]
- [arXiv] Improved Techniques for Training Consistency Models [code]
- [arXiv] Adversarial Diffusion Distillation [code]