Add wandb support. #127

Merged 10 commits on May 1, 2024.
5 changes: 3 additions & 2 deletions CHANGELOG.rst
@@ -21,12 +21,13 @@ Added
- ``lensless.utils.dataset.simulate_dataset`` for simulating a dataset given a mask/PSF.
- Support for training/testing with multiple mask patterns in the dataset.
- Multi-GPU support for training.
- - DigiCam dataset which interfaces with Hugging Face.
+ - Dataset which interfaces with Hugging Face (``lensless.utils.dataset.HFDataset``).
- Scripts for authentication.
- DigiCam support for Telegram demo.
- DiffuserCamMirflickr Hugging Face API.
- Fallback for normalization if data not in 8bit range (``lensless.utils.io.save_image``).
- Add utilities for fabricating masks with 3D printing (``lensless.hardware.fabrication``).
+ - WandB support.

Changed
~~~~~~~
@@ -151,7 +152,7 @@ Added
- Option to warm-start reconstruction algorithm with ``initial_est``.
- TrainableReconstructionAlgorithm class inherited from ReconstructionAlgorithm and torch.module for use with pytorch autograd and optimizers.
- Unrolled version of FISTA and ADMM as TrainableReconstructionAlgorithm with learnable parameters.
- - ``train_unrolled.py`` script for training unrolled algorithms.
+ - ``train_learning_based.py`` script for training unrolled algorithms.
- ``benchmark_recon.py`` script for benchmarking and comparing reconstruction algorithms.
- Added ``reconstruction_error`` to ``ReconstructionAlgorithm`` .
- Added support for npy/npz image in load_image.
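One changelog entry above mentions a fallback to normalization in ``lensless.utils.io.save_image`` when data is not in the 8-bit range. A minimal sketch of that idea (a hypothetical `to_uint8` helper on plain lists, not the toolkit's actual implementation):

```python
def to_uint8(values):
    """Map values to [0, 255] integers.

    If the data already fits the 8-bit range it is cast directly;
    otherwise it is min-max normalized first (the fallback behavior
    the changelog entry describes).
    """
    lo, hi = min(values), max(values)
    if 0 <= lo and hi <= 255:
        return [int(v) for v in values]
    if hi == lo:  # constant image: avoid divide-by-zero
        return [0 for _ in values]
    return [round(255 * (v - lo) / (hi - lo)) for v in values]

print(to_uint8([-0.5, 0.5, 2.0]))  # out of range, so min-max normalized: [0, 102, 255]
print(to_uint8([10, 20]))          # already in 8-bit range, cast directly: [10, 20]
```

The real function operates on image arrays, but the branching logic is the point: only stretch the data when a direct cast would clip or wrap.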
2 changes: 1 addition & 1 deletion README.rst
@@ -45,7 +45,7 @@ The toolkit includes:
* Measurement scripts (`link <https://lensless.readthedocs.io/en/latest/measurement.html>`__).
* Dataset preparation and loading tools, with `Hugging Face <https://huggingface.co/bezzam>`__ integration (`slides <https://docs.google.com/presentation/d/18h7jTcp20jeoiF8dJIEcc7wHgjpgFgVxZ_bJ04W55lg/edit?usp=sharing>`__ on uploading a dataset to Hugging Face with `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/data/upload_dataset_huggingface.py>`__).
* `Reconstruction algorithms <https://lensless.readthedocs.io/en/latest/reconstruction.html>`__ (e.g. FISTA, ADMM, unrolled algorithms, trainable inversion, pre- and post-processors).
- * `Training script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_unrolled.py>`__ for learning-based reconstruction.
+ * `Training script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/train_learning_based.py>`__ for learning-based reconstruction.
* `Pre-trained models <https://github.com/LCAV/LenslessPiCam/blob/main/lensless/recon/model_dict.py>`__ that can be loaded from `Hugging Face <https://huggingface.co/bezzam>`__, for example in `this script <https://github.com/LCAV/LenslessPiCam/blob/main/scripts/recon/diffusercam_mirflickr.py>`__.
* Mask `design <https://lensless.readthedocs.io/en/latest/mask.html>`__ and `fabrication <https://lensless.readthedocs.io/en/latest/fabrication.html>`__ tools.
* `Simulation tools <https://lensless.readthedocs.io/en/latest/simulation.html>`__.
22 changes: 2 additions & 20 deletions configs/fine-tune_PSF.yaml
@@ -1,4 +1,4 @@
- # python scripts/recon/train_unrolled.py -cn fine-tune_PSF
+ # python scripts/recon/train_learning_based.py -cn fine-tune_PSF
defaults:
- train_unrolledADMM
- _self_
@@ -12,25 +12,7 @@ trainable_mask:

#Training
training:
- save_every: 10
- epoch: 50
- crop_preloss: False
+ save_every: 1 # to see how PSF evolves

- display:
- gamma: 2.2
-
- reconstruction:
- method: unrolled_admm
-
- pre_process:
- network: UnetRes
- depth: 2
- post_process:
- network: DruNet
- depth: 4
-
- optimizer:
- slow_start: 0.01
-
- loss: l2
- lpips: 1.0
2 changes: 1 addition & 1 deletion configs/train_celeba_digicam_hitl.yaml
@@ -1,7 +1,7 @@
# Learn mask with HITL training by setting measure configuration (set to null for learning in simulation)
#
# EXAMPLE COMMAND:
- # python scripts/recon/train_unrolled.py -cn train_celeba_digicam_hitl measure.rpi_username=USERNAME measure.rpi_hostname=HOSTNAME files.vertical_shift=SHIFT
+ # python scripts/recon/train_learning_based.py -cn train_celeba_digicam_hitl measure.rpi_username=USERNAME measure.rpi_hostname=HOSTNAME files.vertical_shift=SHIFT

defaults:
- train_celeba_digicam
2 changes: 1 addition & 1 deletion configs/train_celeba_digicam_mask.yaml
@@ -1,5 +1,5 @@
# fine-tune mask for PSF, but don't re-simulate
- # python scripts/recon/train_unrolled.py -cn train_celeba_digicam_mask
+ # python scripts/recon/train_learning_based.py -cn train_celeba_digicam_mask
defaults:
- train_celeba_digicam
- _self_
2 changes: 1 addition & 1 deletion configs/train_coded_aperture.yaml
@@ -1,4 +1,4 @@
- # python scripts/recon/train_unrolled.py -cn train_coded_aperture
+ # python scripts/recon/train_learning_based.py -cn train_coded_aperture
defaults:
- train_unrolledADMM
- _self_
7 changes: 4 additions & 3 deletions configs/train_digicam_celeba.yaml
@@ -1,4 +1,4 @@
- # python scripts/recon/train_unrolled.py -cn train_digicam_singlemask
+ # python scripts/recon/train_learning_based.py -cn train_digicam_celeba
defaults:
- train_unrolledADMM
- _self_
@@ -13,6 +13,7 @@ files:
huggingface_psf: "psf_simulated.png"
huggingface_dataset: True
split_seed: 0
+ test_size: 0.15
downsample: 2
rotate: True # if measurement is upside-down
save_psf: False
@@ -34,14 +35,14 @@ alignment:
random_vflip: False
random_hflip: False
quantize: False
- # shifting when there is no files.downsample
+ # shifting when there is no files to downsample
vertical_shift: -117
horizontal_shift: -25

training:
batch_size: 4
epoch: 25
- eval_batch_size: 4
+ eval_batch_size: 16
crop_preloss: True

reconstruction:
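The `vertical_shift` / `horizontal_shift` and `crop_preloss` options in this config suggest a shift-then-crop alignment step applied before the loss is computed. A simplified sketch of what such a step could look like (hypothetical `shift_crop` helper operating on plain nested lists instead of image tensors, not the toolkit's actual code):

```python
def shift_crop(img, vshift, hshift, top, left, height, width):
    """Shift a 2D image (list of rows), zero-filling vacated pixels,
    then crop a region, as might be done before computing a loss.
    Negative shifts move content up/left."""
    rows, cols = len(img), len(img[0])
    shifted = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            rr, cc = r + vshift, c + hshift
            if 0 <= rr < rows and 0 <= cc < cols:
                shifted[rr][cc] = img[r][c]
    return [row[left:left + width] for row in shifted[top:top + height]]

img = [[r * 4 + c for c in range(4)] for r in range(4)]
# shift up by one row, then crop a 2x2 window starting at column 1
print(shift_crop(img, vshift=-1, hshift=0, top=0, left=1, height=2, width=2))  # [[5, 6], [9, 10]]
```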
8 changes: 5 additions & 3 deletions configs/train_digicam_multimask.yaml
@@ -1,17 +1,18 @@
- # python scripts/recon/train_unrolled.py -cn train_digicam_multimask
+ # python scripts/recon/train_learning_based.py -cn train_digicam_multimask
defaults:
- train_unrolledADMM
- _self_


torch_device: 'cuda:0'
+ device_ids: [0, 1, 2, 3]
eval_disp_idx: [1, 2, 4, 5, 9]


# Dataset
files:
dataset: bezzam/DigiCam-Mirflickr-MultiMask-25K
huggingface_dataset: True
+ huggingface_psf: null
downsample: 1
# TODO: these parameters should be in the dataset?
image_res: [900, 1200] # used during measurement
@@ -55,4 +56,5 @@ reconstruction:
post_process:
network : UnetRes # UnetRes or DruNet or null
depth : 4 # depth of each up/downsampling layer. Ignore if network is DruNet
- nc: [32,64,116,128]
+ nc: [32,64,116,128]

6 changes: 4 additions & 2 deletions configs/train_digicam_singlemask.yaml
@@ -1,4 +1,4 @@
- # python scripts/recon/train_unrolled.py -cn train_digicam_singlemask
+ # python scripts/recon/train_learning_based.py -cn train_digicam_singlemask
defaults:
- train_unrolledADMM
- _self_
@@ -11,12 +11,13 @@ eval_disp_idx: [1, 2, 4, 5, 9]
files:
dataset: bezzam/DigiCam-Mirflickr-SingleMask-25K
huggingface_dataset: True
+ huggingface_psf: null
downsample: 1
# TODO: these parameters should be in the dataset?
image_res: [900, 1200] # used during measurement
rotate: True # if measurement is upside-down
save_psf: False

- # extra_eval: null
+ extra_eval:
multimask:
huggingface_repo: bezzam/DigiCam-Mirflickr-MultiMask-25K
@@ -26,6 +27,7 @@ files:
topright: [80, 100] # height, width
height: 200

+ # TODO: these parameters should be in the dataset?
alignment:
# when there is no downsampling
topright: [80, 100] # height, width
24 changes: 0 additions & 24 deletions configs/train_pre-post-processing.yaml

This file was deleted.

10 changes: 6 additions & 4 deletions configs/train_psf_from_scratch.yaml
@@ -1,11 +1,15 @@
- # python scripts/recon/train_unrolled.py -cn train_psf_from_scratch
+ # python scripts/recon/train_learning_based.py -cn train_psf_from_scratch
defaults:
- train_unrolledADMM
- _self_

# Train Dataset
files:
dataset: mnist # Simulated : "mnist", "fashion_mnist", "cifar10", "CelebA". Measure :"DiffuserCam"
huggingface_dataset: False
+ n_files: 1000
+ test_size: 0.15

celeba_root: /scratch/bezzam
downsample: 8

@@ -24,8 +28,6 @@ simulation:
object_height: 0.30

training:
- crop_preloss: False # crop region for computing loss
- batch_size: 8
epoch: 25
+ batch_size: 2
+ eval_batch_size: 16
+ save_every: 5
40 changes: 26 additions & 14 deletions configs/train_unrolledADMM.yaml
@@ -1,39 +1,52 @@
- # python scripts/recon/train_unrolled.py
+ # python scripts/recon/train_learning_based.py
hydra:
job:
chdir: True # change to output folder


+ wandb_project: lensless
seed: 0
start_delay: null

# Dataset
files:
- dataset: /scratch/bezzam/DiffuserCam_mirflickr/dataset # Simulated : "mnist", "fashion_mnist", "cifar10", "CelebA". Measure :"DiffuserCam"
- celeba_root: null # path to parent directory of CelebA: https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
- psf: data/psf/diffusercam_psf.tiff
- diffusercam_psf: True

- huggingface_dataset: null
- huggingface_psf: null
+ # -- using local dataset
+ # dataset: /scratch/bezzam/DiffuserCam_mirflickr/dataset # Simulated : "mnist", "fashion_mnist", "cifar10", "CelebA". Measure :"DiffuserCam"
+ # celeba_root: null # path to parent directory of CelebA: https://mmlab.ie.cuhk.edu.hk/projects/CelebA.html
+ # psf: data/psf/diffusercam_psf.tiff
+ # diffusercam_psf: True

+ # -- using huggingface dataset
+ dataset: bezzam/DiffuserCam-Lensless-Mirflickr-Dataset-NORM
+ huggingface_dataset: True
+ huggingface_psf: psf.tiff

+ # -- train/test split
+ split_seed: null # if null use train/test split from dataset

n_files: null # null to use all for both train/test
- downsample: 2 # factor by which to downsample the PSF, note that for DiffuserCam the PSF has 4x the resolution
- test_size: 0.15
+ test_size: null

+ # -- processing parameters
+ downsample: 2 # factor by which to downsample the PSF, note that for DiffuserCam the PSF has 4x the resolution
downsample_lensed: 2
input_snr: null # adding shot noise at input (for measured dataset) at this SNR in dB
vertical_shift: null
horizontal_shift: null
rotate: False
save_psf: False
crop: null
# vertical: null
# horizontal: null
image_res: null # for measured data, what resolution used at screen

extra_eval: null # dict of extra datasets to evaluate on

alignment: null
# topright: null # height, width
# height: null

torch: True
torch_device: 'cuda'
device_ids: null # for multi-gpu set list, e.g. [0, 1, 2, 3]
measure: null # if measuring data on-the-fly

# test set example to visualize at the end of every epoch
@@ -130,14 +143,13 @@

training:
batch_size: 8
- epoch: 50
+ epoch: 25
eval_batch_size: 10
metric_for_best_model: null # e.g. LPIPS_Vgg, null does test loss
save_every: null
# In case of unstable training
skip_NAN: True
clip_grad: 1.0

crop_preloss: False # crop region for computing loss, files.crop should be set

optimizer:
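Throughout these configs, `defaults: [train_unrolledADMM, _self_]` means Hydra loads the base config first and then merges the file's own keys on top, so the later values win. A stand-in for that merge semantics (Hydra/OmegaConf perform the real composition; `deep_merge` here is only illustrative):

```python
def deep_merge(base, override):
    """Return base with override's keys merged on top, recursing into nested dicts
    so sibling keys in the base survive (mimicking config composition)."""
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

# toy base resembling train_unrolledADMM.yaml, and a hypothetical child config
base = {"training": {"batch_size": 8, "epoch": 25}, "wandb_project": "lensless"}
child = {"training": {"epoch": 50}}
merged = deep_merge(base, child)
print(merged["training"])  # {'batch_size': 8, 'epoch': 50}
```

This is why a child config like `fine-tune_PSF.yaml` only needs to list the keys it changes: everything else falls through from `train_unrolledADMM.yaml`.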
14 changes: 14 additions & 0 deletions configs/train_unrolled_pre_post.yaml
@@ -0,0 +1,14 @@
+ # python scripts/recon/train_learning_based.py -cn train_unrolled_pre_post
+ defaults:
+ - train_unrolledADMM
+ - _self_
+
+ reconstruction:
+ method: unrolled_admm
+
+ pre_process:
+ network: UnetRes
+ depth: 2
+ post_process:
+ network: UnetRes
+ depth: 2
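This config wires a learned pre-processor and post-processor around the unrolled ADMM reconstruction; schematically, the three stages compose as below (stand-in functions with toy arithmetic, not the toolkit's actual modules):

```python
def pre_process(measurement):
    # stand-in for a small UnetRes denoising the raw measurement
    return [x - 0.1 for x in measurement]

def unrolled_admm(measurement):
    # stand-in for the unrolled reconstruction itself
    return [2 * x for x in measurement]

def post_process(estimate):
    # stand-in for a UnetRes enhancing the reconstructed image
    return [round(x, 2) for x in estimate]

def reconstruct(measurement):
    # the composition this config describes: pre -> unrolled ADMM -> post
    return post_process(unrolled_admm(pre_process(measurement)))

print(reconstruct([0.6, 1.1]))  # [1.0, 2.0]
```

In training, all three stages are optimized jointly, which is what distinguishes this setup from running a fixed reconstruction followed by independent denoising.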
36 changes: 20 additions & 16 deletions docs/source/dataset.rst
@@ -19,6 +19,26 @@ or measured).
:special-members: __init__, __len__


+ Measured dataset objects
+ ------------------------
+
+ .. autoclass:: lensless.utils.dataset.HFDataset
+ :members:
+ :special-members: __init__
+
+ .. autoclass:: lensless.utils.dataset.MeasuredDataset
+ :members:
+ :special-members: __init__
+
+ .. autoclass:: lensless.utils.dataset.MeasuredDatasetSimulatedOriginal
+ :members:
+ :special-members: __init__
+
+ .. autoclass:: lensless.utils.dataset.DiffuserCamTestDataset
+ :members:
+ :special-members: __init__


Simulated dataset objects
-------------------------

@@ -43,19 +63,3 @@ mask / PSF.
.. autoclass:: lensless.utils.dataset.SimulatedDatasetTrainableMask
:members:
:special-members: __init__


- Measured dataset objects
- ------------------------
-
- .. autoclass:: lensless.utils.dataset.MeasuredDataset
- :members:
- :special-members: __init__
-
- .. autoclass:: lensless.utils.dataset.MeasuredDatasetSimulatedOriginal
- :members:
- :special-members: __init__
-
- .. autoclass:: lensless.utils.dataset.DiffuserCamTestDataset
- :members:
- :special-members: __init__
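The dataset classes documented above expose the usual map-style interface (``__init__``, ``__len__``, ``__getitem__``) and return (lensless, lensed) pairs. A toy stand-in for that contract (illustrative only, not the real ``HFDataset``):

```python
class PairDataset:
    """Toy map-style dataset yielding (lensless, lensed) pairs,
    mimicking the interface the classes above expose."""

    def __init__(self, lensless, lensed):
        assert len(lensless) == len(lensed), "need one ground truth per measurement"
        self.lensless = lensless
        self.lensed = lensed

    def __len__(self):
        return len(self.lensless)

    def __getitem__(self, idx):
        return self.lensless[idx], self.lensed[idx]

ds = PairDataset(lensless=["meas0", "meas1"], lensed=["gt0", "gt1"])
print(len(ds), ds[1])  # 2 ('meas1', 'gt1')
```

Because the real classes follow this same contract, they can be dropped into a standard PyTorch `DataLoader` for training and evaluation.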