Add coverage automation #1340
Conversation
Auto-sync is disabled for draft pull requests in this repository. Workflows must be run manually.

/ok to test

Auto-sync is disabled for ready for review pull requests in this repository. Workflows must be run manually.
rparolin left a comment:
lgtm.
The only thing that caught my eye: in the GH workflow YAML it appears we are requiring a node with a GPU. Does this coverage run need a node with a GPU attached, or can it run on a lighter-weight node and keep the GPU nodes available for other workloads?
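For context, the kind of change being asked about is roughly a swap of the runner label; the labels below are hypothetical placeholders, not the repo's actual ones:

```yaml
# Hypothetical runner labels for illustration only.
jobs:
  coverage:
    runs-on: linux-amd64-gpu     # GPU-attached self-hosted node (placeholder label)
    # Possible alternative if the coverage run doesn't exercise a GPU:
    # runs-on: ubuntu-latest     # standard CPU-only hosted runner
```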
This deploys to the GitHub Pages branch, served at: https://nvidia.github.io/cuda-python/coverage/
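For reference, one common way to publish an HTML report to a URL like that is a gh-pages publish step along these lines; the action choice and paths below are assumptions, not necessarily what this PR does:

```yaml
# Illustrative only; the PR may publish via a different mechanism.
- name: Publish coverage report to GitHub Pages
  uses: peaceiris/actions-gh-pages@v3
  with:
    github_token: ${{ secrets.GITHUB_TOKEN }}
    publish_dir: ./coverage-html   # assumed HTML output directory
    destination_dir: coverage      # served under .../cuda-python/coverage/
```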
This generates a combined coverage report for all of the tests in this repo (cuda_pathfinder, cuda_bindings and cuda_core), which is probably the right thing for a monorepo like this. We could, alternatively, create separate coverage reports for each of them.
All of these coverage tools are very sensitive to the cwd, which is why the GHA config looks a bit funny in that regard. The only working approach I found is to install all of the packages and then run the tests from the installation directory, which feels wrong, but I couldn't find any other combination (including path remapping or editable installs) that both combined results from the three test suites correctly and didn't cause the Cython coverage plugin to raise an exception.

The biggest shortcoming is that we only run on one platform and hardware config, so this would miss anything that is platform- or config-specific. I think we can get there, but it would involve shuttling artifacts between jobs, and combining results from Linux and Windows is likely to be quite fiddly. I think it's still worth merging this as-is with that known issue.
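Roughly, the shape this takes (an illustrative sketch only; the real workflow's paths, config locations, and flags may differ):

```yaml
# Illustrative sketch only, not the exact workflow steps in this PR.
- name: Run all test suites with coverage from the install location
  env:
    # Assumed location of a coverage config that enables the Cython plugin.
    COVERAGE_RCFILE: ${{ github.workspace }}/.coveragerc
  run: |
    pip install ./cuda_pathfinder ./cuda_bindings ./cuda_core
    # Run from site-packages so all three suites report consistent paths.
    cd "$(python -c 'import sysconfig; print(sysconfig.get_paths()["purelib"])')"
    coverage run -p -m pytest "$GITHUB_WORKSPACE/cuda_pathfinder/tests"
    coverage run -p -m pytest "$GITHUB_WORKSPACE/cuda_bindings/tests"
    coverage run -p -m pytest "$GITHUB_WORKSPACE/cuda_core/tests"
    # Merge the per-suite .coverage.* data files and render one HTML report.
    coverage combine
    coverage html -d "$GITHUB_WORKSPACE/coverage-html"
```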
I envision this being set up as a nightly job (with a manual trigger as well). It's pretty expensive, since you have to rebuild everything from source with a special Cython config and compiler flags, and then runtimes are also slower. Therefore, I don't think it's suitable for per-PR CI. There are nifty tools like coveralls or codecov.io that will do graphs with per-PR changes in coverage, but IME those are a bit noisy and full of false positives, even notwithstanding the slowness.
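Concretely, the trigger block for that would look something like this (the cron time is a placeholder):

```yaml
# Nightly schedule plus a manual trigger; the time below is a placeholder.
on:
  schedule:
    - cron: "0 6 * * *"     # once a day at 06:00 UTC
  workflow_dispatch:        # allows manual runs from the Actions tab
```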