
Add dask-cudf and dask-cuda to Distributed environment #73

Merged 1 commit into main on Sep 28, 2023

Conversation

charlesbluca
Copy link
Member

This should allow us to run a few of the tests currently being skipped in Distributed's GPU CI, namely one added in dask/distributed#8148 that requires dask-cuda.
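For context, the change amounts to adding the two RAPIDS packages to the conda environment file used for Distributed's GPU CI. A minimal sketch of what that addition looks like (the file path and channel list are illustrative, not taken from the actual commit):

```yaml
# environment file for Distributed's GPU CI (illustrative path and channels)
channels:
  - rapidsai
  - conda-forge
dependencies:
  # ...existing entries...
  - dask-cudf   # GPU-backed DataFrame support for cuDF-related tests
  - dask-cuda   # provides CUDAWorker / LocalCUDACluster
```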

@quasiben quasiben merged commit 3ec9c77 into main Sep 28, 2023
@quasiben quasiben deleted the distributed-add-rapids-deps branch September 28, 2023 20:39
@pentschev
Copy link
Member

Sorry, I was too late. I'm just wondering if we should actually be testing Dask-CUDA stuff here. Would it make more sense to add basic tests to Distributed's gpuCI and move the full tests to the Dask-CUDA repo?

@charlesbluca
Copy link
Member Author

Maybe. I meant to leave some comments in #8148 about the fact that I would've expected cuDF spill logging to work with a standard LocalCluster, but I seemingly needed to use CUDAWorkers for things to get initialized properly. I'm okay with migrating tests over to dask-cuda if you think things are stable enough in Distributed that we don't need to check on every PR, but I'll note that test_rmm_diagnostics.py also depends on dask-cuda.
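Tests that depend on dask-cuda are typically guarded so they skip cleanly when the package is missing, which is why they silently drop out of CI runs without it. A sketch of that guard pattern (the marker name and test body here are hypothetical, not the exact code in Distributed's test suite):

```python
import importlib.util

import pytest

# Skip marker applied when dask-cuda is not importable; this is the
# general pattern, not the exact marker Distributed uses.
requires_dask_cuda = pytest.mark.skipif(
    importlib.util.find_spec("dask_cuda") is None,
    reason="dask-cuda is required for this test",
)


@requires_dask_cuda
def test_spill_logging():
    # Imported inside the test so collection succeeds without dask-cuda.
    from dask_cuda import LocalCUDACluster  # noqa: F401

    ...  # hypothetical test body
```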

@pentschev
Copy link
Member

I understand your point; I'm on the fence myself here. I think it would make more sense to keep dependencies to a minimum here, and testing against Dask-CUDA feels to me like we're overstepping a bit. I'm okay if people have different opinions and we keep those tests here, but I'd prefer not to break gpuCI too often, as contributors may start ignoring it when they see it consistently failing.

@charlesbluca
Copy link
Member Author

That makes sense to me. I've already triggered a rebuild, so I can follow up here if I encounter any issues there or on subsequent GPU CI runs.

@pentschev
Copy link
Member

Thanks Charles, just to be very clear, I don't want to dictate what we should do here. If there are diverging opinions and strong reasons to have it in here as well, I'm more than happy for us to continue discussing and shift my thinking. 🙂

@pentschev
Copy link
Member

Although, thinking a bit further, we do want your cuDF PR tested. So let's add dask-cuda/dask-cudf for now, get your PR merged, and discuss this matter again once that's in? Sorry for the back and forth.
