Is your feature request related to a problem? Please describe.
I have collected a number of CT volumes, each stored in .nii.gz format and each with a different resolution (in depth, height, and width). Because of the differing resolutions, when I tried to load them with PyTorch's default DataLoader I had to set the batch size to 1 (a minimal sketch of the failure is shown below), which is inefficient and cannot fully utilize GPU memory.
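Roughly what I am doing now (file names are placeholders; I load the volumes with nibabel):

```python
import torch
from torch.utils.data import DataLoader, Dataset
import nibabel as nib  # assuming nibabel for .nii.gz loading

# Hypothetical paths; my real dataset has many volumes of varying shape.
VOLUME_PATHS = ["ct_001.nii.gz", "ct_002.nii.gz"]

class CTDataset(Dataset):
    def __init__(self, paths):
        self.paths = paths

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        # Each volume has its own (D, H, W), e.g. (95, 512, 512) vs (120, 512, 512).
        vol = nib.load(self.paths[idx]).get_fdata()
        return torch.as_tensor(vol, dtype=torch.float32)

# The default collate function tries to torch.stack tensors of unequal
# shape and raises a RuntimeError, so batch_size has to stay at 1.
loader = DataLoader(CTDataset(VOLUME_PATHS), batch_size=2)
```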
Describe the solution you'd like
For efficient network training, I am hoping for a Dataset/DataLoader class that can efficiently load several volumes (of different resolutions) as a batch and (randomly) crop them into same-sized sub-volumes to feed to the network.
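For concreteness, here is a rough sketch of the kind of usage I am imagining, written against a dictionary-style transform API; the transform names, keys, and `roi_size` below are my guesses/placeholders, not necessarily MONAI's actual interface:

```python
from monai.transforms import (
    Compose, LoadImaged, EnsureChannelFirstd, RandSpatialCropSamplesd
)
from monai.data import Dataset, DataLoader

# Hypothetical file list for illustration.
data_dicts = [{"image": p} for p in ["ct_001.nii.gz", "ct_002.nii.gz"]]

transforms = Compose([
    LoadImaged(keys=["image"]),
    EnsureChannelFirstd(keys=["image"]),
    # Draw several same-sized random patches from each differently-sized volume.
    RandSpatialCropSamplesd(keys=["image"], roi_size=(96, 96, 96),
                            num_samples=4, random_size=False),
])

dataset = Dataset(data=data_dicts, transform=transforms)
# Ideally the loader would flatten the num_samples crops per volume
# into one uniformly-shaped batch.
loader = DataLoader(dataset, batch_size=2, num_workers=2)

for batch in loader:
    print(batch["image"].shape)  # e.g. torch.Size([8, 1, 96, 96, 96])
```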
Are there any designs in MONAI that can address this?