
GPU reduction strategy using global atomics #38

Open
gilbo opened this issue Mar 24, 2015 · 0 comments
gilbo commented Mar 24, 2015

A reviewer recommended that, on Kepler GPUs, the global reduction tree might be more efficient if we replaced the second kernel invocation with atomic operations.

To make this clear, consider that implementing a tree reduction in CUDA potentially involves three different granularities of parallelism:

  • Warp-level parallelism
  • Block-level parallelism
  • Kernel-level parallelism

Our current scheme uses a tree to perform the warp-level reduction, a __syncthreads() barrier at the end of the primary kernel to aggregate values written to shared memory (the block level), and then a second kernel invocation to perform the kernel-level reduction.
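For concreteness, the current two-kernel scheme looks roughly like the following sketch. This is an illustrative reconstruction, not code from the repository; kernel and variable names (block_reduce, partials) are invented:

```cuda
// Each block reduces its slice of the input in shared memory and
// writes one partial sum; a second launch of the same kernel over
// `partials` performs the kernel-level step.
__global__ void block_reduce(const float *in, float *partials, int n) {
    extern __shared__ float shm[];
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;
    shm[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    // shared-memory tree reduction within the block
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) shm[tid] += shm[tid + s];
        __syncthreads();
    }
    if (tid == 0) partials[blockIdx.x] = shm[0];  // one value per block
}
```

The second kernel invocation exists only because blocks cannot synchronize with each other inside a single launch; it reduces the per-block partials the same way.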

The proposal is to either (a) replace the second kernel entirely by accumulating block-level reduction values into a common global variable using atomic adds, or (b) replace both the kernel-level and block-level steps by writing the result of each warp-level tree reduction directly into a global common variable using an atomic add.
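Both variants can be sketched as follows. Again these are hypothetical illustrations, with invented names; note that the result accumulator must be zeroed before launch, and that on Kepler-era CUDA the shuffle intrinsic was the unsynchronized __shfl_down (modern CUDA uses __shfl_down_sync):

```cuda
// Variant (a): keep the shared-memory block reduction, but drop the
// second kernel; thread 0 of each block folds the block's partial sum
// into one global accumulator with an atomic add.
__global__ void block_reduce_atomic(const float *in, float *result, int n) {
    extern __shared__ float shm[];
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;
    shm[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) shm[tid] += shm[tid + s];
        __syncthreads();
    }
    if (tid == 0) atomicAdd(result, shm[0]);  // one atomic per block
}

// Variant (b): skip shared memory entirely; each warp reduces via
// shuffle intrinsics and lane 0 issues one atomic add per warp.
__global__ void warp_reduce_atomic(const float *in, float *result, int n) {
    int i   = blockIdx.x * blockDim.x + threadIdx.x;
    float v = (i < n) ? in[i] : 0.0f;
    // warp-level tree reduction: 32 lanes -> 1 value in 5 steps
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset);
    if ((threadIdx.x & 31) == 0) atomicAdd(result, v);  // one atomic per warp
}
```

The trade-off is contention on the global accumulator: variant (a) issues one atomic per block, variant (b) one per warp, so (b) removes the __syncthreads() barrier and shared-memory traffic at the cost of roughly 32x more atomic operations.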
