
Conversation

@xyuzh (Contributor) commented Dec 22, 2025

Summary

  • Add validation checks for cluster resources before performing divisions in calculate_ray_np()
  • Check total_cpu and available_mem immediately after reading resources
  • Check total_gpu and available_gpu_mem inside use_cuda() block for GPU operators

When Ray cluster is not properly initialized, ray_cpu_count() and ray_gpu_count() return 0, which previously caused ZeroDivisionError. Now raises RuntimeError with clear, actionable error messages.
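
For context, here is a schematic sketch of the failure mode and of the new guard. ray_cpu_count() is stubbed below and the division is only illustrative; the actual sizing formula inside calculate_ray_np() is not reproduced here.

# Schematic sketch only, not the project's real code.
def ray_cpu_count() -> int:
    return 0  # what an uninitialized Ray cluster reports

total_cpu = ray_cpu_count()

# Before this change, a division such as the (illustrative) one below failed:
#     num_proc = cpus_requested_by_op / total_cpu   # ZeroDivisionError
# After this change, a guard raises a descriptive error first:
if total_cpu == 0:
    raise RuntimeError(
        "Ray cluster has no CPU resources available (ray_cpu_count() returned 0). "
        "Please ensure the Ray cluster has active worker nodes."
    )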

Test plan

  • Verify error is raised when Ray cluster has 0 CPUs (see the pytest sketch after this list)
  • Verify error is raised when Ray cluster has 0 memory
  • Verify error is raised for GPU operators when cluster has 0 GPUs
  • Verify normal operation when cluster resources are available
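
A minimal pytest sketch for the 0-CPU case, under stated assumptions: the import path yourproject.ray_executor, the operator-list argument, and the matched error text are placeholders to adapt to the actual module. The memory and GPU cases follow the same pattern with the corresponding resource readers patched instead.

from unittest import mock

import pytest

# Hypothetical import path; point this at the module that defines calculate_ray_np().
from yourproject.ray_executor import calculate_ray_np


def test_calculate_ray_np_raises_when_cluster_has_no_cpus():
    # Simulate an uninitialized Ray cluster that reports 0 CPUs.
    with mock.patch("yourproject.ray_executor.ray_cpu_count", return_value=0):
        with pytest.raises(RuntimeError, match="no CPU resources"):
            calculate_ray_np([])  # the operator-list argument is an assumption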

🤖 Generated with Claude Code

…ready

Add validation checks for cluster resources before performing divisions:
- Check total_cpu and available_mem immediately after reading resources
- Check total_gpu and available_gpu_mem inside use_cuda() block

When Ray cluster is not properly initialized, ray_cpu_count() and
ray_gpu_count() return 0, which previously caused ZeroDivisionError.
Now raises RuntimeError with clear, actionable error messages.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <[email protected]>
@gemini-code-assist (Contributor)

Summary of Changes

Hello @xyuzh, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the robustness of the calculate_ray_np function by introducing comprehensive validation checks for Ray cluster resources. It proactively identifies scenarios where CPU, memory, or GPU resources are unavailable or zero, preventing ZeroDivisionError and instead raising clear RuntimeError messages that help users diagnose and resolve cluster initialization problems.

Highlights

  • Error Prevention: Implemented validation checks in the calculate_ray_np function to prevent ZeroDivisionError when Ray cluster resources (CPU, memory, GPU) are zero or unavailable.
  • Improved Error Messaging: Replaced potential ZeroDivisionError with more descriptive RuntimeError messages, providing clear, actionable guidance to users regarding cluster initialization issues.
  • Resource Validation: Added specific checks for total_cpu and available_mem at the function's start, and for total_gpu and available_gpu_mem within the use_cuda() block for GPU operators.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature               Command               Description
Code Review           /gemini review        Performs a code review for the current pull request in its current state.
Pull Request Summary  /gemini summary       Provides a summary of the current pull request in its current state.
Comment               @gemini-code-assist   Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help                  /gemini help          Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@gemini-code-assist bot left a comment

Code Review

This pull request effectively addresses a potential ZeroDivisionError by adding validation checks for Ray cluster resources before they are used in calculations. The error messages are clear and should help users diagnose cluster setup issues. I've provided a couple of suggestions to consolidate error reporting, which would improve the user experience by presenting all resource-related problems at once. Additionally, I recommend adding unit tests to verify that the new RuntimeError exceptions are raised correctly when resources are unavailable, as outlined in your test plan. This will help ensure the fix is robust and prevent future regressions.

Comment on lines +247 to +258
if total_cpu == 0:
    raise RuntimeError(
        "Ray cluster has no CPU resources available (ray_cpu_count() returned 0). "
        "This typically indicates the Ray cluster is not properly initialized. "
        "Please ensure the Ray cluster has active worker nodes."
    )

if available_mem == 0:
    raise RuntimeError(
        "Ray cluster has no memory resources available. "
        "Please verify the Ray cluster status with ray.cluster_resources()."
    )

Severity: medium

For a better user experience, it's good practice to collect all validation errors and report them together. This allows the user to see all resource issues at once, rather than fixing them one by one. You can accumulate error messages in a list and raise a single RuntimeError if any issues are found.

Suggested change
-if total_cpu == 0:
-    raise RuntimeError(
-        "Ray cluster has no CPU resources available (ray_cpu_count() returned 0). "
-        "This typically indicates the Ray cluster is not properly initialized. "
-        "Please ensure the Ray cluster has active worker nodes."
-    )
-if available_mem == 0:
-    raise RuntimeError(
-        "Ray cluster has no memory resources available. "
-        "Please verify the Ray cluster status with ray.cluster_resources()."
-    )
+errors = []
+if total_cpu == 0:
+    errors.append(
+        "Ray cluster has no CPU resources available (ray_cpu_count() returned 0). "
+        "This typically indicates the Ray cluster is not properly initialized. "
+        "Please ensure the Ray cluster has active worker nodes."
+    )
+if available_mem == 0:
+    errors.append(
+        "Ray cluster has no memory resources available. "
+        "Please verify the Ray cluster status with ray.cluster_resources()."
+    )
+if errors:
+    raise RuntimeError('\n'.join(errors))

Comment on lines +286 to +296
if total_gpu == 0:
    raise RuntimeError(
        f"Op[{op._name}] requires GPU but no GPUs are available in Ray cluster "
        "(ray_gpu_count() returned 0). "
        "Please ensure GPU nodes are configured in the Ray cluster."
    )
if available_gpu_mem == 0:
    raise RuntimeError(
        f"Op[{op._name}] requires GPU but no GPU memory is available. "
        "Please verify GPU nodes are properly configured."
    )

Severity: medium

Similar to the previous comment, it would be more user-friendly to check for all GPU-related resource issues at once and report them together. This provides a more comprehensive error message if both GPUs and GPU memory are unavailable.

Suggested change
-if total_gpu == 0:
-    raise RuntimeError(
-        f"Op[{op._name}] requires GPU but no GPUs are available in Ray cluster "
-        "(ray_gpu_count() returned 0). "
-        "Please ensure GPU nodes are configured in the Ray cluster."
-    )
-if available_gpu_mem == 0:
-    raise RuntimeError(
-        f"Op[{op._name}] requires GPU but no GPU memory is available. "
-        "Please verify GPU nodes are properly configured."
-    )
+gpu_errors = []
+if total_gpu == 0:
+    gpu_errors.append("no GPUs are available in Ray cluster (ray_gpu_count() returned 0)")
+if available_gpu_mem == 0:
+    gpu_errors.append("no GPU memory is available")
+if gpu_errors:
+    error_details = " and ".join(gpu_errors)
+    raise RuntimeError(
+        f"Op[{op._name}] requires GPU but {error_details}. "
+        "Please ensure GPU nodes are properly configured in the Ray cluster."
+    )

