
pre-volumereco for GRAIN

Introduction

DUNE is a next-generation long-baseline neutrino experiment aiming to determine the neutrino mass ordering, study CP violation in the leptonic sector, observe supernova neutrinos, and search for physics beyond the Standard Model. It will feature a Near Detector 574 m from the source and a Far Detector ~1300 km away. Within the Near Detector, the SAND apparatus includes GRAIN (GRanular Argon for Interactions of Neutrinos), a novel liquid argon detector designed to image neutrino interactions via scintillation light, providing vertexing and tracking.

*(Figure: SAND and GRAIN.)*

3D imaging with scintillation light

An innovative cryogenic light readout system for GRAIN consists of matrices of SiPMs, with the optics realized with coded aperture masks (grids of alternating opaque material and holes). The reconstruction algorithm, based on Maximum Likelihood Expectation-Maximization (MLEM), combines the views of 60 cameras to provide a three-dimensional map of the energy deposited by charged particles. This iterative approach poses a significant computational challenge and requires optimized execution on a computing system with multiple GPUs.

*(Animation: reconstructed muon track.)*

Goal of pre-volumereco

The goal of this project is to provide a prior of the expected three-dimensional energy deposition to serve as a seed for the MLEM algorithm, rather than a uniform distribution. This improves convergence and reduces GPU load.
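To illustrate why a good seed helps, here is a minimal toy MLEM loop in NumPy. The system matrix, shapes, and seed values are all invented for this sketch and bear no relation to the real 60-camera system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy system: 6 "camera" measurements of a 4-voxel volume (invented shapes).
A = rng.uniform(0.1, 1.0, size=(6, 4))    # system matrix: voxel -> expected signal
x_true = np.array([0.0, 3.0, 1.0, 0.0])   # true energy deposition
y = A @ x_true                            # noiseless measurements

def mlem(x0, n_iter=50):
    """Standard multiplicative MLEM update, starting from the prior x0."""
    x = x0.astype(float).copy()
    sens = A.sum(axis=0)                  # sensitivity term (A^T 1)
    for _ in range(n_iter):
        x *= (A.T @ (y / np.clip(A @ x, 1e-12, None))) / sens
    return x

uniform_seed = np.ones(4)                        # flat prior
informed_seed = np.array([0.1, 2.5, 1.2, 0.1])   # e.g. a network prediction
x_uniform = mlem(uniform_seed)
x_informed = mlem(informed_seed)
```

Because the update is multiplicative, a seed already close to the true distribution reaches the same likelihood in fewer iterations, which is exactly the gain this project targets.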

Project structure

  • prepare_input_data.py converts output data from GRAIN Monte Carlo simulations into an ML-friendly format (NumPy arrays saved as an .h5 file). Modules from sand-optical are used to read the Monte Carlo data.
  • data/lightweight_dataset_20cm.h5 is an ML-ready dataset provided as an example.
  • train.ipynb is a notebook used to explore the input data, train a deep neural network (optimizing its hyperparameters with OPTUNA), save the model, and show some predictions.
  • saved_models/pre_volumereco_optuna_20cm.keras is an already trained model that can be used as an example, skipping the training step.
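The layout of such an ML-ready file can be sketched with `h5py` as below; the dataset keys (`features`, `truths`), event count, and voxel grid are assumptions for illustration, not the actual schema of lightweight_dataset_20cm.h5.

```python
import os
import tempfile

import h5py
import numpy as np

# Hypothetical layout of the ML-ready file: key names, event count,
# and grid size below are illustrative, not the real schema.
n_events, n_cameras = 100, 60
grid = (8, 8, 8)
features = np.random.rand(n_events, n_cameras)  # avg hit time per camera
truths = np.random.rand(n_events, *grid)        # voxelized energy deposits

path = os.path.join(tempfile.gettempdir(), "dataset_example.h5")
with h5py.File(path, "w") as f:
    f.create_dataset("features", data=features, compression="gzip")
    f.create_dataset("truths", data=truths, compression="gzip")

with h5py.File(path, "r") as f:                 # read back, as a notebook might
    X, Y = f["features"][:], f["truths"][:]
```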

Dataset and predictions

The event features are the average hit times in each of the 60 cameras; the event truths are the voxelized energy deposits. Given the limited size of the currently available dataset, it is necessary to use large $$20 \times 20 \times 20\ \text{cm}^{3}$$ voxels.
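Voxelizing energy deposits on a 20 cm pitch can be sketched as follows; the volume origin and grid dimensions are placeholders, not the real GRAIN geometry.

```python
import numpy as np

VOXEL = 200.0                                # voxel pitch in mm (20 cm)
ORIGIN = np.array([-600.0, -600.0, -600.0])  # assumed volume corner (not real geometry)
GRID = (6, 6, 6)                             # assumed voxel counts per axis

def voxelize(positions, energies):
    """Accumulate hit energies (positions in mm) into the coarse voxel grid."""
    idx = np.floor((positions - ORIGIN) / VOXEL).astype(int)
    volume = np.zeros(GRID)
    np.add.at(volume, tuple(idx.T), energies)  # sum deposits sharing a voxel
    return volume

hits = np.array([[0.0, 0.0, 0.0],
                 [10.0, -30.0, 50.0],
                 [500.0, 0.0, 0.0]])
edep = np.array([1.5, 0.5, 2.0])             # toy energies
volume = voxelize(hits, edep)
```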

*(Figure: features vs. truth.)*

*(Figure: prediction vs. reconstruction.)*

Model evaluation

The trained model was evaluated on 600 events as a prior for MLEM reconstruction, showing that likelihood convergence is reached approximately 20 iterations earlier than with a uniform prior. Convergence is defined as a log-likelihood change of less than 50 between successive iterations.
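The stopping rule can be sketched as a simple loop over log-likelihood values; the geometric toy trace below stands in for a real MLEM likelihood history.

```python
import numpy as np

def run_until_converged(next_loglike, threshold=50.0, max_iter=500):
    """Count iterations until the log-likelihood change between
    successive iterations drops below `threshold`."""
    prev = next_loglike()
    for i in range(1, max_iter):
        cur = next_loglike()
        if abs(cur - prev) < threshold:
            return i
        prev = cur
    return max_iter

# Toy log-likelihood trace approaching its asymptote geometrically.
trace = (1.0e5 - 2.0e4 * 0.7**k for k in range(500))
n_iter = run_until_converged(lambda: next(trace))
```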

*(Figure: likelihood convergence.)*

About

Deep learning for starting voxel distribution in volumereco algorithm for GRAIN
