fix: prevent divide-by-zero in calculate_ray_np when Ray cluster not ready #864
base: main
Conversation
…ready

Add validation checks for cluster resources before performing divisions:

- Check total_cpu and available_mem immediately after reading resources
- Check total_gpu and available_gpu_mem inside the use_cuda() block

When the Ray cluster is not properly initialized, ray_cpu_count() and ray_gpu_count() return 0, which previously caused a ZeroDivisionError. Now a RuntimeError is raised with clear, actionable error messages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
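For context, here is a minimal sketch of the failure mode these guards address. Only the names total_cpu, ray_cpu_count(), and the ZeroDivisionError symptom come from this PR; the helper name and the arithmetic below are illustrative assumptions, not the project's actual calculate_ray_np() implementation.

```python
import ray


def cpu_fraction_needed(cpu_required: float) -> float:
    """Illustrative helper only; not the real calculate_ray_np()."""
    # Reads 0 when the cluster has no active worker nodes yet.
    total_cpu = ray.cluster_resources().get("CPU", 0)

    # Guard in the spirit of this PR: fail fast with an actionable message
    # instead of letting the division below raise a bare ZeroDivisionError.
    if total_cpu == 0:
        raise RuntimeError(
            "Ray cluster has no CPU resources available (ray_cpu_count() returned 0). "
            "Please ensure the Ray cluster has active worker nodes."
        )

    # Without the guard, this division is where the ZeroDivisionError surfaced.
    return cpu_required / total_cpu
```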
Summary of Changes

Hello @xyuzh, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request enhances the robustness of the …
Code Review
This pull request effectively addresses a potential ZeroDivisionError by adding validation checks for Ray cluster resources before they are used in calculations. The error messages are clear and should help users diagnose cluster setup issues. I've provided a couple of suggestions to consolidate error reporting, which would improve the user experience by presenting all resource-related problems at once. Additionally, I recommend adding unit tests to verify that the new RuntimeError exceptions are raised correctly when resources are unavailable, as outlined in your test plan. This will help ensure the fix is robust and prevent future regressions.
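As a starting point for those unit tests, here is a hedged sketch. The import path your_module, the patch target for ray_cpu_count, and the calculate_ray_np() call signature are placeholders that would need to match wherever the function actually lives in this repository.

```python
from unittest import mock

import pytest

from your_module import calculate_ray_np  # placeholder import path


def test_calculate_ray_np_raises_when_cluster_has_no_cpus():
    # ray_cpu_count() is named in the PR description; forcing it to return 0
    # should trip the new guard before any division is attempted.
    with mock.patch("your_module.ray_cpu_count", return_value=0):
        with pytest.raises(RuntimeError, match="no CPU resources"):
            calculate_ray_np([])  # argument list is an assumption
```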
if total_cpu == 0:
    raise RuntimeError(
        "Ray cluster has no CPU resources available (ray_cpu_count() returned 0). "
        "This typically indicates the Ray cluster is not properly initialized. "
        "Please ensure the Ray cluster has active worker nodes."
    )

if available_mem == 0:
    raise RuntimeError(
        "Ray cluster has no memory resources available. "
        "Please verify the Ray cluster status with ray.cluster_resources()."
    )
For a better user experience, it's good practice to collect all validation errors and report them together. This allows the user to see all resource issues at once, rather than fixing them one by one. You can accumulate error messages in a list and raise a single RuntimeError if any issues are found.
Suggested change:

errors = []
if total_cpu == 0:
    errors.append(
        "Ray cluster has no CPU resources available (ray_cpu_count() returned 0). "
        "This typically indicates the Ray cluster is not properly initialized. "
        "Please ensure the Ray cluster has active worker nodes."
    )
if available_mem == 0:
    errors.append(
        "Ray cluster has no memory resources available. "
        "Please verify the Ray cluster status with ray.cluster_resources()."
    )
if errors:
    raise RuntimeError('\n'.join(errors))
if total_gpu == 0:
    raise RuntimeError(
        f"Op[{op._name}] requires GPU but no GPUs are available in Ray cluster "
        "(ray_gpu_count() returned 0). "
        "Please ensure GPU nodes are configured in the Ray cluster."
    )
if available_gpu_mem == 0:
    raise RuntimeError(
        f"Op[{op._name}] requires GPU but no GPU memory is available. "
        "Please verify GPU nodes are properly configured."
    )
Similar to the previous comment, it would be more user-friendly to check for all GPU-related resource issues at once and report them together. This provides a more comprehensive error message if both GPUs and GPU memory are unavailable.
Suggested change:

gpu_errors = []
if total_gpu == 0:
    gpu_errors.append("no GPUs are available in Ray cluster (ray_gpu_count() returned 0)")
if available_gpu_mem == 0:
    gpu_errors.append("no GPU memory is available")
if gpu_errors:
    error_details = " and ".join(gpu_errors)
    raise RuntimeError(
        f"Op[{op._name}] requires GPU but {error_details}. "
        "Please ensure GPU nodes are properly configured in the Ray cluster."
    )
Summary

Add validation checks for cluster resources in calculate_ray_np():

- Check total_cpu and available_mem immediately after reading resources
- Check total_gpu and available_gpu_mem inside the use_cuda() block for GPU operators

When the Ray cluster is not properly initialized, ray_cpu_count() and ray_gpu_count() return 0, which previously caused a ZeroDivisionError. Now a RuntimeError is raised with clear, actionable error messages.

Test plan
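A GPU-path test could back this plan. The sketch below mirrors the CPU test suggested in the review above; the module path, the gpu_op fixture, the patch target for ray_gpu_count, and the call signature are all assumptions rather than the project's actual API.

```python
from unittest import mock

import pytest

from your_module import calculate_ray_np  # placeholder import path


def test_calculate_ray_np_raises_when_no_gpu_for_cuda_op(gpu_op):
    # gpu_op is assumed to be a fixture yielding an operator whose use_cuda()
    # returns True, so the GPU branch of calculate_ray_np() is exercised.
    # CPU and memory resources are assumed to be non-zero (or patched) so the
    # earlier guards pass and the GPU check is reached.
    with mock.patch("your_module.ray_gpu_count", return_value=0):
        with pytest.raises(RuntimeError, match="requires GPU but no GPUs"):
            calculate_ray_np([gpu_op])  # argument list is an assumption
```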
🤖 Generated with Claude Code