Unify kernel dispatch paths for device reduce between CUB and c.parallel. #2591
Merged
Conversation
🟨 CI finished in 2h 09m: Pass: 90%/210 | Total: 5d 20h | Avg: 40m 01s | Max: 1h 14m | Hits: 68%/13095
Modifications in project?

| | Project |
|---|---|
| | CCCL Infrastructure |
| | libcu++ |
| +/- | CUB |
| | Thrust |
| | CUDA Experimental |
| | pycuda |
| +/- | CCCL C Parallel Library |

Modifications in project or dependencies?

| | Project |
|---|---|
| | CCCL Infrastructure |
| | libcu++ |
| +/- | CUB |
| +/- | Thrust |
| | CUDA Experimental |
| +/- | pycuda |
| +/- | CCCL C Parallel Library |

🏃 Runner counts (total jobs: 210)

| # | Runner |
|---|---|
| 172 | linux-amd64-cpu16 |
| 16 | linux-arm64-cpu16 |
| 13 | linux-amd64-gpu-v100-latest-1 |
| 9 | windows-amd64-cpu16 |
🟨 CI finished in 49m 49s: Pass: 99%/210 | Total: 22h 17m | Avg: 6m 22s | Max: 26m 05s | Hits: 99%/16011
Modifications in project?

| | Project |
|---|---|
| | CCCL Infrastructure |
| | libcu++ |
| +/- | CUB |
| | Thrust |
| | CUDA Experimental |
| | pycuda |
| +/- | CCCL C Parallel Library |

Modifications in project or dependencies?

| | Project |
|---|---|
| | CCCL Infrastructure |
| | libcu++ |
| +/- | CUB |
| +/- | Thrust |
| | CUDA Experimental |
| +/- | pycuda |
| +/- | CCCL C Parallel Library |

🏃 Runner counts (total jobs: 210)

| # | Runner |
|---|---|
| 172 | linux-amd64-cpu16 |
| 16 | linux-arm64-cpu16 |
| 13 | linux-amd64-gpu-v100-latest-1 |
| 9 | windows-amd64-cpu16 |
🟩 CI finished in 1h 44m: Pass: 100%/372 | Total: 5d 19h | Avg: 22m 32s | Max: 1h 15m | Hits: 57%/27963
Modifications in project?

| | Project |
|---|---|
| +/- | CCCL Infrastructure |
| | libcu++ |
| +/- | CUB |
| | Thrust |
| | CUDA Experimental |
| | pycuda |
| +/- | CCCL C Parallel Library |

Modifications in project or dependencies?

| | Project |
|---|---|
| +/- | CCCL Infrastructure |
| +/- | libcu++ |
| +/- | CUB |
| +/- | Thrust |
| +/- | CUDA Experimental |
| +/- | pycuda |
| +/- | CCCL C Parallel Library |

🏃 Runner counts (total jobs: 372)

| # | Runner |
|---|---|
| 298 | linux-amd64-cpu16 |
| 31 | linux-amd64-gpu-v100-latest-1 |
| 28 | linux-arm64-cpu16 |
| 15 | windows-amd64-cpu16 |
I think that this looks good. I believe I understand how you've inverted the control of the launch path to be under CUB. I will admit I'm only familiar with it from the cccl/c side.
wmaxey approved these changes on Oct 23, 2024
pciolkosz pushed a commit to pciolkosz/cccl that referenced this pull request on Oct 25, 2024
fbusato pushed a commit to fbusato/cccl that referenced this pull request on Nov 5, 2024
Description

This PR removes the duplicated kernel dispatch logic from c.parallel's device reduce, adapts the CUB dispatch layer to support the CUDA driver + `CUfunction` use case, and then replaces the removed code in c.parallel with a call to the CUB dispatch layer.

This is achieved by extending the argument list of `DispatchReduce` with two new template parameters (see the sketch after this list):

- `KernelSource`: a type the dispatch layer uses to select the kernels to launch. The C library provides its own that returns the precompiled kernels, while the default kernel source instantiates the kernels as before.
- `KernelLauncherFactory`: specifies the method by which the kernels are launched and provides a way to obtain occupancy information for the target device. The default uses the same CUDA runtime functions as the original implementation; the C library overrides it with one that uses CUDA driver functions directly.
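Below is a minimal sketch of this inversion-of-control pattern. It is illustrative only: apart from `DispatchReduce`, `KernelSource`, and `KernelLauncherFactory`, every name is hypothetical and does not reflect CUB's actual internals or tuning logic.

```cpp
// Hypothetical sketch: the dispatch logic is written once and parameterized
// over how kernels are obtained and how they are launched.
#include <cuda.h>         // CUDA driver API: CUfunction, cuLaunchKernel
#include <cuda_runtime.h> // CUDA runtime API

template <typename T>
__global__ void reduce_kernel(const T* in, T* out, int n) { /* ... */ }

// Default source: instantiates the kernel template, as the old path did.
template <typename T>
struct DefaultKernelSource
{
  auto ReduceKernel() const { return reduce_kernel<T>; }
};

// Default factory: launches through the CUDA runtime and queries occupancy
// with runtime functions, matching the original implementation.
struct RuntimeLauncherFactory
{
  template <typename Kernel, typename... Args>
  cudaError_t Launch(Kernel k, dim3 grid, dim3 block, cudaStream_t s, Args... args) const
  {
    void* arg_ptrs[] = {&args...};
    return cudaLaunchKernel(reinterpret_cast<const void*>(k), grid, block, arg_ptrs, 0, s);
  }

  template <typename Kernel>
  cudaError_t MaxSmOccupancy(int& blocks, Kernel k, int block_threads) const
  {
    return cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocks, k, block_threads, 0);
  }
};

// c.parallel-style replacements: hand back a precompiled CUfunction and
// launch it with the driver API instead of instantiating templates.
struct CLibraryKernelSource
{
  CUfunction reduce; // loaded from a precompiled module by the C library
  CUfunction ReduceKernel() const { return reduce; }
};

struct DriverLauncherFactory
{
  template <typename... Args>
  CUresult Launch(CUfunction f, dim3 grid, dim3 block, CUstream s, Args... args) const
  {
    void* arg_ptrs[] = {&args...};
    return cuLaunchKernel(f, grid.x, grid.y, grid.z, block.x, block.y, block.z,
                          /*sharedMemBytes*/ 0, s, arg_ptrs, /*extra*/ nullptr);
  }
};

// The single dispatch path shared by both callers: only the kernel handle
// and the launch mechanism differ between CUB and c.parallel.
template <typename T,
          typename KernelSource          = DefaultKernelSource<T>,
          typename KernelLauncherFactory = RuntimeLauncherFactory>
struct DispatchReduce
{
  static auto Dispatch(const T* in, T* out, int n,
                       KernelSource source = {}, KernelLauncherFactory launcher = {})
  {
    return launcher.Launch(source.ReduceKernel(), dim3(1), dim3(256),
                           /*stream*/ 0, in, out, n);
  }
};
```

With this shape, c.parallel can invoke `DispatchReduce<T, CLibraryKernelSource, DriverLauncherFactory>::Dispatch(...)` and reuse the shared dispatch logic, while the default template arguments preserve the original CUB behavior.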
Resolves #2448; the issue suggests that `for` should also be unified, but the dispatch layer of `for` is so thin that this is not worth the effort right now.

Checklist