A reviewer recommended that, on Kepler GPUs, the global reduction tree might be more efficient if we replaced the second kernel invocation with atomic operations.
To make this clear, consider that implementing a tree reduction in CUDA involves potentially 3 different granularities of parallelism:
Warp-level parallelism
Block-level parallelism
Kernel-level parallelism
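As a minimal sketch of the first granularity, a warp-level tree reduction can be written with shuffle intrinsics (Kepler, sm_30+, introduced `__shfl_down`; current CUDA uses the `_sync` variant). The function name here is hypothetical:

```cuda
// Hypothetical sketch: warp-level tree reduction using shuffle intrinsics.
// Each step halves the number of contributing lanes: 16, 8, 4, 2, 1.
__device__ float warp_reduce_sum(float val) {
    for (int offset = 16; offset > 0; offset >>= 1)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;  // lane 0 ends up holding the warp's sum
}
```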
Our current scheme uses a tree reduction for warp-level parallelism, a `__syncthreads()` barrier at the end of the primary kernel to aggregate the values written to shared memory, and then a second kernel launch to perform the kernel-level reduction.
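The two-kernel scheme can be sketched roughly as follows (kernel and buffer names are hypothetical, not our actual code):

```cuda
// Sketch of the current scheme: each block reduces its slice into shared
// memory and writes one partial sum; a second launch over `partial`
// (with a single block) finishes the reduction.
__global__ void block_reduce(const float *in, float *partial, int n) {
    extern __shared__ float smem[];
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;
    smem[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    // Shared-memory tree reduction within the block.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) smem[tid] += smem[tid + s];
        __syncthreads();
    }
    if (tid == 0) partial[blockIdx.x] = smem[0];  // one value per block
}
```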
This proposal is to either (a) replace the second kernel entirely by accumulating each block's reduction result into a common global variable with an atomic add, or (b) replace both the kernel-level and block-level stages by having each warp atomically add the result of its warp-level tree reduction directly into that global variable.
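Variant (a) might look like the sketch below (again with hypothetical names); `atomicAdd` on `float` is supported from sm_20 onward, so it is available on Kepler. The accumulator must be zeroed before launch:

```cuda
// Sketch of proposal (a): the block-level reduction is unchanged, but the
// second kernel is replaced by one atomicAdd per block into a
// zero-initialized global accumulator.
__global__ void block_reduce_atomic(const float *in, float *result, int n) {
    extern __shared__ float smem[];
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;
    smem[tid] = (i < n) ? in[i] : 0.0f;
    __syncthreads();
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) smem[tid] += smem[tid + s];
        __syncthreads();
    }
    // Proposal (b) would instead have lane 0 of every warp issue this
    // atomicAdd after a warp-level shuffle reduction, skipping shared
    // memory and __syncthreads() entirely (at the cost of more atomics).
    if (tid == 0) atomicAdd(result, smem[0]);
}
```

Note that floating-point atomic adds make the summation order nondeterministic, so results may differ slightly from run to run.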