Conversation

@joyang-nv (Member) commented Sep 26, 2025

What does this PR do ?

Add a one line overview of what this PR aims to accomplish.

Issues

List issues that this PR closes (syntax):

Usage

  • You can potentially add a usage example below
# Add a code snippet demonstrating how to use this

Before your PR is "Ready for review"

Pre checks:

  • Make sure you read and followed Contributor guidelines
  • Did you write any new necessary tests?
  • Did you run the unit tests and functional tests locally? Visit our Testing Guide for how to run tests
  • Did you add or update any necessary documentation? Visit our Document Development Guide for how to write, build and test the docs.

Additional Information

  • ...

Summary by CodeRabbit

  • Bug Fixes

    • Resolved inconsistent masking during training and log-prob computation across packed and non-packed sequences, improving accuracy and stability, including for multimodal inputs.
  • Performance

    • Streamlined masking logic to reduce overhead and improve efficiency in high-throughput attention paths.
  • Reliability

    • Unified masking behavior by delegating it to downstream components, reducing edge-case failures and ensuring consistent results across configurations.

@joyang-nv joyang-nv requested a review from a team as a code owner September 26, 2025 13:09

⚠️ File Consistency Check

Check based on commit: bfc8f67 (PR #1213 from joyang/nemotron_custom_plan)

⚠️ DTensor Policy Worker Synchronization Warning

The file nemo_rl/models/policy/dtensor_policy_worker_v2.py was modified in this PR, but nemo_rl/models/policy/dtensor_policy_worker.py was not updated.

Why this matters:
These files contain related DTensor policy worker implementations that should be kept synchronized to ensure consistency across different versions.

Action required:

  • Please review if the changes in nemo_rl/models/policy/dtensor_policy_worker_v2.py should also be applied to nemo_rl/models/policy/dtensor_policy_worker.py
  • Update nemo_rl/models/policy/dtensor_policy_worker.py if necessary to maintain consistency
  • If the files are intentionally different, please add a comment in the PR explaining why

Files to check:

  • Modified: nemo_rl/models/policy/dtensor_policy_worker_v2.py
  • Not modified: nemo_rl/models/policy/dtensor_policy_worker.py

This check ensures that related file implementations remain synchronized across the codebase. If you believe this warning is incorrect or the files should intentionally differ, please add a comment explaining the reasoning.

@joyang-nv joyang-nv added the CI:L1 Run doctests, unit tests, and functional tests label Sep 26, 2025
coderabbitai bot (Contributor) commented Sep 26, 2025

📝 Walkthrough

Centralizes attention_mask handling in dtensor_policy_worker_v2 by removing per-branch mask construction and always passing attention_mask=None in both train and get_logprobs paths. No public API changes and no control-flow alterations beyond argument preparation.

Changes

  • Attention mask handling consolidation (nemo_rl/models/policy/dtensor_policy_worker_v2.py): Removed creation of explicit attention_mask tensors in train/get_logprobs (sequence-packed and non-packed). Introduced a single assignment setting attention_mask=None and consistently passes None to the model forward call. Minor reorganization without altering overall flow.
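To make the consolidation concrete, here is a minimal sketch of the new call shape, assuming a hypothetical forward_step helper; it is illustrative only and not the actual dtensor_policy_worker_v2.py code:

import torch

def forward_step(model, input_ids: torch.Tensor, position_ids: torch.Tensor):
    # Before: the non-packed branch built an explicit all-ones mask, e.g.
    #   attention_mask = torch.ones_like(input_ids, dtype=torch.bool)
    # while the packed branch already passed None.
    # After: a single assignment; masking is delegated downstream (causal
    # kernel selection, cu_seqlens for packed sequences, and loss/logprob
    # masking applied after the forward pass).
    attention_mask = None
    return model(
        input_ids=input_ids,
        attention_mask=attention_mask,
        position_ids=position_ids,
    )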

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant W as PolicyWorkerV2
  participant M as Model

  rect rgba(230,240,255,0.5)
  Note over W: Old (before)
  W->>W: Prepare inputs
  alt Non-packed
    W->>W: Build attention_mask = ones(...)
  else Packed / certain branches
    W->>W: attention_mask = None
  end
  W->>M: forward(inputs, attention_mask)
  M-->>W: outputs
  end

  rect rgba(235,255,235,0.5)
  Note over W: New (after)
  W->>W: Prepare inputs
  W->>W: attention_mask = None (centralized)
  W->>M: forward(inputs, attention_mask=None)
  M-->>W: outputs
  end

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs

Suggested labels

bug

Suggested reviewers

  • ffrujeri
  • ahmadki
  • terrykong

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)

  • Test Results For Major Changes: ⚠️ Warning
    Explanation: The PR removes all explicit attention_mask construction and now always passes None to downstream model calls in both the training and log-probability paths, which likely impacts numerics and convergence behavior, yet the PR description contains no testing or validation evidence to demonstrate that accuracy or stability are unaffected. Consequently, the requirement that major or numerics-sensitive changes document test results is not satisfied.
    Resolution: Please update the PR description with the relevant test results or convergence checks demonstrating that removing the attention masks does not introduce regressions, and include any necessary context such as datasets, configurations, and metrics so the verification can be re-evaluated.
✅ Passed checks (3 passed)

  • Description Check: ✅ Passed. Check skipped - CodeRabbit’s high-level summary is enabled.
  • Title Check: ✅ Passed. The title succinctly and accurately summarizes the change to default attention_mask handling by always using None, matching the primary updates in the PR.
  • Docstring Coverage: ✅ Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 0

🧹 Nitpick comments (2)
nemo_rl/models/policy/dtensor_policy_worker_v2.py (2)

970-975: Fix the stale comment and the “casual” → “causal” typo.

The comment refers to a prior “all-ones” mask while the code now passes None. Update to reflect the new behavior and correct the typo.

-                # DTensor requires the casual attention kernel to hit,
-                # yet our attention mask above is not always all 1s
-                # this is fine because we mask with the actual attention mask
-                # later, but for input it has to be all 1s
-                attention_mask = None
+                # We deliberately pass attention_mask=None to select the causal kernel path.
+                # Padding is applied later via post_attention_mask when computing token logprobs.
+                attention_mask = None
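For context, a rough sketch of the downstream masking the suggested comment refers to: padding is dropped when gathering per-token log-probabilities rather than via the attention input. The helper name and the post_attention_mask shape are assumptions for illustration, not the worker's actual implementation:

import torch
import torch.nn.functional as F

def masked_token_logprobs(
    logits: torch.Tensor,               # (batch, seq_len, vocab)
    token_ids: torch.Tensor,            # (batch, seq_len), dtype long
    post_attention_mask: torch.Tensor,  # (batch, seq_len) bool, False at padding
) -> torch.Tensor:
    logprobs = F.log_softmax(logits, dim=-1)
    token_logprobs = logprobs.gather(-1, token_ids.unsqueeze(-1)).squeeze(-1)
    # Padding positions are zeroed out here instead of being hidden from the
    # attention inputs, which is why attention_mask=None is acceptable upstream.
    return token_logprobs * post_attention_mask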

1226-1241: Unify score() to also pass attention_mask=None by default.

For consistency with train/get_logprobs and the PR intent, consider dropping the explicit all-ones mask here too.

-                    attention_mask = torch.ones(
-                        (batch_size, seq_len),
-                        dtype=torch.bool,
-                        device=input_ids.device,
-                    )
+                    attention_mask = None
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f521459 and bfc8f67.

📒 Files selected for processing (1)
  • nemo_rl/models/policy/dtensor_policy_worker_v2.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (2)
**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts

Files:

  • nemo_rl/models/policy/dtensor_policy_worker_v2.py
nemo_rl/**/*.py

📄 CodeRabbit inference engine (CODING_GUIDELINES.md)

nemo_rl/**/*.py: Do not set non-None configuration defaults in code; YAML is the single source of truth for defaults
Access required config attributes directly (e.g., policy_cfg["precision"]) and assume presence; do not introduce hidden defaults
Express configuration optionality via TypedDict using typing.NotRequired (see the sketch after this list)
When adding a new config key to a TypedDict subclass, document the key’s purpose, valid values/types, and recommended default in code
For any class or function decorated with @ray.remote, add '# pragma: no cover' on the class/def line (and on remote functions)
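A small sketch of the TypedDict pattern described above, with made-up key names rather than actual NeMo-RL config keys:

from typing import NotRequired, TypedDict

class ExamplePolicyConfig(TypedDict):
    # Required key: accessed directly; no default in code, YAML supplies the value.
    precision: str
    # Optional key (illustrative): maximum packed sequence length used when
    # sequence packing is enabled; recommended default (set in YAML, not here)
    # might be 4096.
    max_packed_seq_len: NotRequired[int]

def configure_worker(policy_cfg: ExamplePolicyConfig) -> str:
    # Assume required keys are present; do not introduce hidden defaults.
    return policy_cfg["precision"]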

Files:

  • nemo_rl/models/policy/dtensor_policy_worker_v2.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build-container / main
  • GitHub Check: Lint check
  • GitHub Check: Lint check
  • GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (1)
nemo_rl/models/policy/dtensor_policy_worker_v2.py (1)

647-648: Passing attention_mask=None in train is reasonable; please verify VLM/back-compat.

Defaulting to None should select the causal kernel path and is correct for right-padded LM training (padding tokens don’t affect earlier tokens; loss masking handles pads). For seq-packing, FA2 cu_seqlens handles masking. Please sanity-check multimodal variants and any legacy models that might have depended on an explicit all-ones mask.
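A quick sanity check of this reasoning, sketched with PyTorch's scaled_dot_product_attention on a toy right-padded batch (illustrative only, unrelated to the worker code): under a causal kernel, hiding the trailing pad keys does not change the outputs at non-pad positions.

import torch
import torch.nn.functional as F

# (batch, heads, seq_len, head_dim); pretend the last two positions are padding.
q = k = v = torch.randn(1, 2, 6, 8)
out_causal = F.scaled_dot_product_attention(q, k, v, is_causal=True)

# Explicit mask that is causal AND hides the two trailing pad key positions.
pad_mask = torch.tensor([True] * 4 + [False] * 2)
causal = torch.tril(torch.ones(6, 6, dtype=torch.bool))
out_masked = F.scaled_dot_product_attention(q, k, v, attn_mask=causal & pad_mask)

# Non-pad query positions (0..3) are identical; only the pad positions differ,
# and those are excluded from loss / logprob computation anyway.
print(torch.allclose(out_causal[..., :4, :], out_masked[..., :4, :], atol=1e-6))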

@joyang-nv joyang-nv marked this pull request as draft September 26, 2025 13:36
@joyang-nv joyang-nv force-pushed the joyang/nemotron_custom_plan branch from bfc8f67 to f3e63c6 Compare September 29, 2025 16:59
⚠️ File Consistency Check: same DTensor Policy Worker Synchronization Warning as above, based on commit f3e63c6 (PR #1213 from joyang/nemotron_custom_plan).

Signed-off-by: Jonas Yang <[email protected]>
⚠️ File Consistency Check: same DTensor Policy Worker Synchronization Warning as above, based on commit 0ca37bf (PR #1213 from joyang/nemotron_custom_plan).

github-actions bot commented Oct 3, 2025

⚠️ File Consistency Check: same DTensor Policy Worker Synchronization Warning as above, based on commit b2195e1 (PR #1213 from joyang/nemotron_custom_plan).

@joyang-nv joyang-nv force-pushed the joyang/nemotron_custom_plan branch from b2195e1 to 2333bbd Compare October 3, 2025 12:36
github-actions bot commented Oct 3, 2025

⚠️ File Consistency Check: same DTensor Policy Worker Synchronization Warning as above, based on commit 2333bbd (PR #1213 from joyang/nemotron_custom_plan).

@joyang-nv joyang-nv closed this Oct 7, 2025
Labels
CI:L1 Run doctests, unit tests, and functional tests