
[Misc]: CUDAGraph captured generation stuck with custom_all_reduce and tensor_parallel=2 #5854

Open
nuzant opened this issue Jun 26, 2024 · 2 comments


nuzant commented Jun 26, 2024


Issue

I have been experimenting with CUDA-graph-captured generation in my own transformer model implementation, using vLLM's custom all-reduce as a replacement for PyTorch's all-reduce. CUDA graph capture worked well until I tried a particular parallel strategy (tensor parallel = pipeline parallel = data parallel = 2, on 8 GPUs). In this configuration, generation randomly gets stuck when replaying the captured graph. The problem does not appear with any other 8-GPU parallel strategy. Has anyone encountered this before?

I also observed that custom all-reduce uses cross_device_reduce_1stage only when world_size=2 (and for small inputs when world_size>2), instead of cross_device_reduce_2stage. Could this be the root cause of the problem? Thanks in advance for your answers!
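
For reference, the capture/replay pattern in question looks roughly like the sketch below. It substitutes plain PyTorch NCCL all-reduce for vLLM's custom kernel; the buffer size, warm-up, and environment handling are illustrative assumptions, not the actual setup:

```python
# Minimal sketch (assumptions: 2 GPUs, one per rank, NCCL >= 2.9.6; plain
# PyTorch NCCL all-reduce stands in for vLLM's custom all-reduce kernel).
# Launch with: torchrun --nproc_per_node=2 repro_sketch.py
import os
import torch
import torch.distributed as dist

def main():
    # On some PyTorch/NCCL combinations, async error handling must be off
    # before a collective can be captured into a CUDA graph.
    os.environ.setdefault("NCCL_ASYNC_ERROR_HANDLING", "0")
    dist.init_process_group("nccl")
    torch.cuda.set_device(dist.get_rank())

    # CUDA graphs require static buffers that are reused on every replay.
    static_buf = torch.ones(1 << 20, device="cuda")

    # Warm up the collective on a side stream before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        dist.all_reduce(static_buf)
    torch.cuda.current_stream().wait_stream(s)

    # Capture one all-reduce into a graph, then replay it repeatedly;
    # the replay step is where the reported hang occurs.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        dist.all_reduce(static_buf)
    for _ in range(10):
        graph.replay()
    torch.cuda.synchronize()
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```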

nuzant added the misc label Jun 26, 2024
youkaichao (Member) commented

It's quite difficult to help with custom usage of custom allreduce. I suggest asking @hanzhi713, who originally contributed this code, for help.

hanzhi713 (Contributor) commented

You might want to share a minimal reproducible code snippet. The stage-selection behavior you mentioned is expected, so that shouldn't be the problem.

Also, please try the following first and see if the hang persists (see the sketch after this list):

  1. disable CUDA graph but enable custom allreduce with your current strategy
  2. enable CUDA graph but disable custom allreduce with your current strategy
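
If your repro can go through vLLM's engine itself, both experiments map onto existing engine flags. A minimal sketch, assuming the `vllm.LLM` entry point (the model name is a placeholder):

```python
from vllm import LLM

# 1. CUDA graph off, custom allreduce on: enforce_eager skips graph
#    capture entirely, so generation runs in eager mode.
llm_eager = LLM(model="facebook/opt-125m",        # placeholder model
                tensor_parallel_size=2,
                enforce_eager=True)

# 2. CUDA graph on, custom allreduce off: falls back to the NCCL-based
#    all-reduce path while still capturing graphs.
llm_no_car = LLM(model="facebook/opt-125m",       # placeholder model
                 tensor_parallel_size=2,
                 disable_custom_all_reduce=True)
```

(Run the two configurations in separate processes; the point is just which flag toggles which feature.)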
