
AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement

1 Xiamen University, China · 2 The Hong Kong University of Science and Technology (Guangzhou), China · 3 The Hong Kong University of Science and Technology, Hong Kong SAR, China · 4 Tsinghua University, China · 5 University of Washington
*denotes equal contribution
🚩 Accepted to AAAI 2025

[arXiv][Project Page]

AGLLDiff provides a training-free framework for enhancing low-light images using diffusion models.

If you find AGLLDiff useful for your projects, please consider ⭐ this repo. Thank you! 😉

📮 Updates

  • 2024.2.9: Released our demo code and models. Have fun! 😋
  • 2023.12.31: Repository created.

♦️ Installation

Codes and Environment

# git clone this repository
git clone https://github.com/LYL1015/AGLLDiff.git
cd AGLLDiff

# create new anaconda env
conda create -n aglldiff python=3.8 -y
conda activate aglldiff

# install python dependencies
conda install mpi4py
pip install -r requirements.txt
pip install -e .
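
To sanity-check the install (a quick verification of our own, not part of the official steps), confirm that PyTorch imports and can see a GPU:

# print the installed torch version and whether CUDA is available
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"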

Pretrained Model

Download the pretrained diffusion model from guided-diffusion and the pretrained RNet model from Google Drive. Place both checkpoints in the ckpt folder.

# download the pretrained diffusion model
mkdir ckpt
cd ckpt
wget https://openaipublic.blob.core.windows.net/diffusion/jul-2021/256x256_diffusion_uncond.pt
cd ..
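
The RNet checkpoint is hosted on Google Drive, so wget will not fetch it directly. One option (a suggestion of ours, not part of the original instructions) is the gdown tool; <FILE_ID> below is a placeholder for the ID in the Google Drive link:

# download the RNet checkpoint from Google Drive
pip install gdown
gdown <FILE_ID> -O ./ckpt/RNet_1688_step.ckpt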

🎪 Inference

Example usage:

python inference_aglldiff.py --task LIE --in_dir ./examples/ --out_dir ./results/

Other hyperparameters can be adjusted from the command line.
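
Assuming the script exposes its options through a standard argparse interface (an assumption; check the source if this fails), the full option list is printed with:

python inference_aglldiff.py --help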

For example, you can use the following command to run inference with customized settings:

python inference_aglldiff.py \
--in_dir ./examples/ \
--out_dir ./results/ \
--model_path "./ckpt/256x256_diffusion_uncond.pt" \
--retinex_model "./ckpt/RNet_1688_step.ckpt" \
--guidance_scale 2.3 \
--structure_weight 10 \
--color_map_weight 0.03 \
--exposure_weight 1000 \
--base_exposure 0.46 \
--adjustment_amplitude 0.25 \
--N 2 

Explanation of important arguments (a small tuning sketch follows the list):

  • in_dir: Path to the folder containing input images.
  • out_dir: Path to the folder where results will be saved.
  • model_path: Path to the pretrained diffusion model checkpoint.
  • retinex_model: Path to the pretrained Retinex model checkpoint.
  • guidance_scale: Overall guidance scale for attribute control.
  • structure_weight: Weight for structure preservation.
  • color_map_weight: Weight for color mapping guidance.
  • exposure_weight: Weight for exposure adjustment.
  • base_exposure: Base exposure value for image enhancement.
  • adjustment_amplitude: Amplitude of contrast adjustment.
  • N: Number of gradient descent steps at each timestep.
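
Because the best weights vary with the input data, a small sweep can help. Below is a minimal shell sketch (the scale values are illustrative, not recommended settings) that runs inference at several guidance scales, writing each run to its own folder:

# sweep three guidance scales; each run gets a separate output folder
for gs in 1.5 2.0 2.3; do
    python inference_aglldiff.py \
        --task LIE \
        --in_dir ./examples/ \
        --out_dir ./results_gs_${gs}/ \
        --guidance_scale ${gs}
done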

🤟 Citation

If you find our work useful for your research, please consider citing the paper:

@misc{lin2024aglldiff,
  author        = {Yunlong Lin and Tian Ye and Sixiang Chen and Zhenqi Fu and Yingying Wang and Wenhao Chai and Zhaohu Xing and Lei Zhu and Xinghao Ding},
  title         = {AGLLDiff: Guiding Diffusion Models Towards Unsupervised Training-free Real-world Low-light Image Enhancement},
  year          = {2024},
  eprint        = {2407.14900},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}

Contact

If you have any questions, please feel free to reach out at [email protected].
