memory issue with FBP followed by median filter #368

Closed
dkazanc opened this issue Jun 13, 2024 · 2 comments
Labels
bug (Something isn't working), memory-estimation (GPU memory estimator related)

Comments


dkazanc commented Jun 13, 2024

Possibly related to #365, but it can also be a stand-alone issue. The following combination leads to a CUDA OOM error for FBP at the IFFT step (tried on a 40 GB dataset on hopper):

FBP (httomolibgpu)
median_filter (httomolibgpu)

It happens midway through the blocks, which might suggest that memory from the median_filter step is somehow accumulating across block iterations?

    FBP (httomolibgpu)
    median_filter (httomolibgpu)
     0%|          | 0/6 [00:00<?, ?block/s]
    17%|#6        | 1/6 [00:19<01:35, 19.07s/block]
    33%|###3      | 2/6 [00:36<01:11, 17.91s/block]
    50%|#####     | 3/6 [00:53<00:52, 17.54s/block]

It is worth trying to reproduce the memory growth locally by running FBP followed by median_filter in a loop; a sketch is below.
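A minimal local-reproduction sketch, assuming the httomolibgpu call signatures follow the YAML parameters used in this pipeline (the exact signatures, the synthetic data shape, and the `center` value are assumptions for illustration only). It monitors the CuPy memory pool across iterations; if the pool keeps growing, memory is accumulating between blocks rather than being reused.

```python
# Sketch only: the httomolibgpu signatures below are assumed from the YAML
# pipeline in this issue and may need adjusting to the real API.
import numpy as np
import cupy as cp
from httomolibgpu.recon.algorithm import FBP
from httomolibgpu.misc.corr import median_filter

pool = cp.get_default_memory_pool()

# Synthetic stand-in for one block of projection data: (angles, detY, detX).
angles = np.linspace(0.0, np.pi, 181, dtype=np.float32)
block = cp.random.random((181, 32, 512), dtype=cp.float32)

for i in range(6):  # mimic the 6 block iterations from the log above
    recon = FBP(
        block,
        angles,
        center=256.0,            # assumed value; the real pipeline takes it from a side output
        filter_freq_cutoff=0.6,
        recon_size=None,
        recon_mask_radius=None,
    )
    filtered = median_filter(recon, kernel_size=3, dif=0.0, axis=0)  # axis=auto is resolved by httomo
    del recon, filtered
    cp.cuda.Device().synchronize()
    print(
        f"block {i}: pool used = {pool.used_bytes() / 2**20:.1f} MiB, "
        f"pool total = {pool.total_bytes() / 2**20:.1f} MiB"
    )
```

If `used_bytes()` returns to roughly the same value after each iteration but `total_bytes()` still grows, the growth is more likely pool fragmentation or cached FFT plans (the OOM hits at the IFFT step) than live arrays held between blocks.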

dkazanc added the bug and memory-estimation labels on Jun 13, 2024

dkazanc commented Jun 14, 2024

Sorry, I missed an important detail: the result of the reconstruction is not saved, e.g.:

- method: FBP
  module_path: httomolibgpu.recon.algorithm
  parameters:
    filter_freq_cutoff: 0.6
    center: ${{centering.side_outputs.centre_of_rotation}}
    recon_size: null
    recon_mask_radius: null
  save_result: false
- method: median_filter
  module_path: httomolibgpu.misc.corr
  parameters:
    kernel_size: 3
    dif: 0.0
    axis: auto 


dkazanc commented Aug 16, 2024

#417 resolves the issue

dkazanc closed this as completed on Aug 16, 2024