By Pontus Ebelin and Tomas Akenine-Möller, with Jim Nilsson, Magnus Oskarsson, Kalle Åström, Mark D. Fairchild, and Peter Shirley.
This repository holds implementations of the LDR-FLIP and HDR-FLIP image error metrics, as well as code for the FLIP tool, presented in Ray Tracing Gems II.
In addition, the repository contains:

- A version list summarizing the changes made in each version of FLIP.
- A list of papers that use/cite FLIP.
- A note about the precision of FLIP.
- An image gallery displaying a large number of reference/test images and corresponding error maps from different metrics.
Note: since v1.6, the Python version of FLIP can be installed via `pip install flip-evaluator`.

Note: in v1.3, we switched to a single header (`FLIP.h`) for C++/CUDA for easier integration.
Copyright © 2020-2024, NVIDIA Corporation & Affiliates. All rights reserved.
This work is made available under a BSD 3-Clause License.
The repository distributes code for tinyexr, which is subject to a BSD 3-Clause License, and stb_image, which is subject to an MIT License.
For individual contributions to the project, please refer to the Individual Contributor License Agreement.
For business inquiries, please visit our website and submit the form: NVIDIA Research Licensing.
The simplest way to run FLIP to compare a test image `testImage.png` to a reference image `referenceImage.png` is as follows:

```
pip install flip-evaluator
flip -r referenceImage.png -t testImage.png
```

For more information about the tool's capabilities, run `flip -h`.
If you wish to use FLIP in your Python or C++ evaluation scripts, please read the next sections.
Setup (with pip):

```
pip install flip-evaluator
```
Usage:
API:
See the example script `src/python/api_example.py`.
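For instance, a minimal sketch of an evaluation script, assuming the package exposes an `evaluate()` function that returns the error map, the mean FLIP error, and the parameters used (confirm the argument order and return values against `src/python/api_example.py`):

```python
import flip_evaluator as flip

# Assumed signature: evaluate(referencePath, testPath, dynamicRange),
# where dynamicRange is "LDR" or "HDR"; confirm in src/python/api_example.py.
flipErrorMap, meanFLIPError, parameters = flip.evaluate(
    "referenceImage.png", "testImage.png", "LDR")

print("Mean FLIP error:", meanFLIPError)
```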
Tool:
```
flip --reference reference.{exr|png} --test test.{exr|png} [--options]
```
See the README in the `python` folder and run `flip -h` for further information and usage instructions.
Setup:
The `src/cpp/FLIP.sln` solution contains one CUDA backend project and one pure C++ backend project.
Compiling the CUDA project requires a CUDA-compatible GPU. Instructions on how to install CUDA can be found here.
Alternatively, a CMake build can be done by creating a build directory and invoking CMake on the source directory (add `--config Release` to the build command to build the release configuration on Windows):

```
mkdir build
cd build
cmake ..
cmake --build . [--config Release]
```
CUDA support is enabled via the `FLIP_ENABLE_CUDA` option, which can be passed to CMake on the command line with `-DFLIP_ENABLE_CUDA=ON` or set interactively with `ccmake` or `cmake-gui`.
The `FLIP_LIBRARY` option allows building a library rather than an executable.
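For example, to configure and build with CUDA enabled, and optionally produce a library instead of an executable, run the following from the build directory:

```
cmake .. -DFLIP_ENABLE_CUDA=ON [-DFLIP_LIBRARY=ON]
cmake --build . [--config Release]
```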
Usage:
API:
See the README.
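As a rough illustration of what calling the API might look like, here is a hypothetical sketch, assuming the single header exposes an `evaluate()` entry point in the `FLIP` namespace; the overload and argument list below are assumptions, not the documented signature, so check `FLIP.h` and the `src/cpp` README for the real API:

```cpp
// Hypothetical sketch only -- verify the actual FLIP::evaluate() overloads in FLIP.h.
#include <vector>
#include "FLIP.h"

int main()
{
    const int width = 512, height = 512;
    std::vector<float> reference(width * height * 3); // linear RGB, interleaved (assumed layout)
    std::vector<float> test(width * height * 3);
    // ... fill both images ...

    FLIP::Parameters parameters;   // assumed name for the default parameter struct
    float meanError = 0.0f;
    float* errorMap = nullptr;     // assumed: per-pixel FLIP map allocated by the library

    const bool useHDR = false;     // false selects LDR-FLIP, true selects HDR-FLIP
    const bool applyMagma = true;  // assumed flag: map errors through the magma color map

    // Assumed overload; the real parameter list may differ.
    FLIP::evaluate(reference.data(), test.data(), width, height,
                   useHDR, parameters, applyMagma, &errorMap, meanError);
    return 0;
}
```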
Tool:
```
flip[-cuda].exe --reference reference.{exr|png} --test test.{exr|png} [--options]
```
See the README in the `src/cpp` folder and run `flip[-cuda].exe -h` for further information and usage instructions.
Setup (with Anaconda3 or Miniconda):

```
conda create -n flip_dl python numpy matplotlib
conda activate flip_dl
conda install pytorch torchvision torchaudio cudatoolkit=11.1 -c pytorch -c conda-forge
conda install -c conda-forge openexr-python
```
Usage:
Remember to activate the `flip_dl` environment through `conda activate flip_dl` before using the loss function.

LDR- and HDR-FLIP are implemented as loss modules in `src/pytorch/flip_loss.py`. An example where the loss function is used to train a simple autoencoder is provided in `src/pytorch/train.py`.
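For instance, a minimal sketch of driving the LDR loss in a training step, assuming the module is exposed as `LDRFLIPLoss` with a `forward(test, reference)` signature (confirm the class names and the expected tensor layout/range against `src/pytorch/flip_loss.py` and the README):

```python
import torch
from flip_loss import LDRFLIPLoss  # src/pytorch/flip_loss.py; HDRFLIPLoss for HDR images

loss_fn = LDRFLIPLoss()

# NxCxHxW tensors; LDR-FLIP is assumed to expect sRGB values in [0, 1].
test = torch.rand(1, 3, 256, 256, requires_grad=True)
reference = torch.rand(1, 3, 256, 256)

loss = loss_fn(test, reference)  # scalar FLIP loss, differentiable w.r.t. test
loss.backward()
```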
See the README in the `pytorch` folder for further information and usage instructions.
If your work uses the FLIP tool to find the errors between low dynamic range images, please cite the LDR-FLIP paper:
Paper | BibTeX
If it uses the FLIP tool to find the errors between high dynamic range images, instead cite the HDR-FLIP paper:
Paper | BibTeX
Should your work use the FLIP tool in a more general fashion, please cite the Ray Tracing Gems II article:
Chapter | BibTeX
We appreciate the following people's contributions to this repository: Jonathan Granskog, Jacob Munkberg, Jon Hasselgren, Jefferson Amstutz, Alan Wolfe, Killian Herveau, Vinh Truong, Philippe Dagobert, Hannes Hergeth, Matt Pharr, Tizian Zeltner, Jan Honsbrok, Chris Zhang, and Wenzel Jakob.