feat: more numerically stable qwen custom plan #1235
base: main
Conversation
Signed-off-by: Terry Kong <[email protected]>
📝 Walkthrough
Introduces a new public configuration variable `qwen_model_tp_plan_stable` in examples/custom_parallel.py that defines a numerically stable tensor-parallel plan with adjusted per-layer layout settings. The existing `custom_parallel_plan` and imports remain unchanged. A note explains the default plan's instability and how to enable the new plan via `policy.dtensor_cfg.custom_parallel_plan`.
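The diff itself is not reproduced on this page, so as a rough sketch only: assuming Hugging Face Qwen2 module names and PyTorch's `ColwiseParallel`/`Replicate` layout types, a plan of this shape could look like the following. The layer keys and layout arguments are illustrative assumptions, not the actual contents of `examples/custom_parallel.py`.

```python
# Sketch only: NOT the plan shipped in examples/custom_parallel.py.
# Module names assume the Hugging Face Qwen2 layout; the point is just the key
# change the review describes, i.e. o_proj and down_proj as ColwiseParallel
# rather than the default RowwiseParallel.
from torch.distributed.tensor import Replicate
from torch.distributed.tensor.parallel import ColwiseParallel

qwen_model_tp_plan_stable = {
    # Attention output projection: Colwise keeps the reduction over hidden
    # features inside each rank's GEMM instead of summing low-precision
    # partial results across ranks in an all-reduce.
    "model.layers.*.self_attn.o_proj": ColwiseParallel(
        input_layouts=Replicate(), output_layouts=Replicate()
    ),
    # MLP down projection: same reasoning as o_proj.
    "model.layers.*.mlp.down_proj": ColwiseParallel(
        input_layouts=Replicate(), output_layouts=Replicate()
    ),
}
```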
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (4 passed)
Actionable comments posted: 0
🧹 Nitpick comments (1)
examples/custom_parallel.py (1)
Lines 30-39: Clarify what "increases" with larger TP.
The phrase "numerically unstable and tend to increase with larger TP" is ambiguous: does the instability increase, or do the errors/variance increase?
Consider rephrasing for clarity:
```diff
-- Default plans that keep attention output proj and mlp downproj RowwiseParallel are numerically
-  unstable and tend to increase with larger TP (e.g., TP >= 4).
+- Default plans that keep attention output proj and mlp downproj RowwiseParallel exhibit numerical
+  instability that worsens with larger TP (e.g., TP >= 4).
```
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
examples/custom_parallel.py (1 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CODING_GUIDELINES.md)
**/*.py: Follow the Google Python Style Guide for all Python code
Target Python 3.12+ for all Python code in NeMo-RL
Indent Python code with 4 spaces; do not use tabs
Python filenames should be snake_case (e.g., some_file.py)
Class names should be PascalCase
Function and method names should be snake_case
Local variable names should be snake_case; if starting with a number, prefix with k (e.g., k_99th_percentile)
Global variables should be UPPER_SNAKE_CASE and prefixed with G_ (e.g., G_MY_GLOBAL)
Constants should be UPPER_SNAKE_CASE
Avoid shadowing variables declared in an outer scope
Initialize all externally visible members of a class in the constructor
For public interfaces used outside a file, prefer docstrings over comments
Use comments mainly for code within a function or interfaces local to a file
Commented-out code must include a nearby comment explaining usage and why it is commented out; otherwise remove before merging
Use Google-style docstrings for classes and functions (Sphinx-parseable)
Avoid using reflection when functionality can be easily achieved without it
Limit except clauses to the smallest specific set of exceptions possible
For duck-typing via try/except, keep the try body minimal and use else for main logic
Add the NVIDIA copyright header (with current year) at the top of all Python files, excluding tests/ and test-only scripts
Files:
examples/custom_parallel.py
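As a quick, invented illustration of the naming and docstring rules listed above (none of these names come from the repository):

```python
"""Toy module illustrating the naming and docstring conventions listed above."""

G_REQUEST_COUNT = 0  # global: UPPER_SNAKE_CASE with a G_ prefix
MAX_RETRIES = 3      # constant: UPPER_SNAKE_CASE


def percentile_latency_ms(samples_ms: list[float]) -> float:
    """Return the 99th-percentile latency of the given samples.

    Args:
        samples_ms: Latency samples in milliseconds.

    Returns:
        The 99th-percentile latency in milliseconds.
    """
    # Local name would start with a digit, so it gets the k prefix per the guidelines.
    k_99th_percentile = sorted(samples_ms)[int(0.99 * (len(samples_ms) - 1))]
    return k_99th_percentile
```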
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Lint check
- GitHub Check: Post automodel integration comment / Comment on PR
- GitHub Check: Post submodule check comment / Comment on PR
🔇 Additional comments (3)
examples/custom_parallel.py (3)
Lines 1-13: LGTM! Copyright header follows NVIDIA standards with the correct year.
Line 29: LGTM! Good visual separation between the two configuration blocks.
Lines 40-60: Approve tensor-parallel plan configuration.
The change to ColwiseParallel for `o_proj` and `down_proj` addresses numerical stability and restores TP=4 accuracy to the TP=1 baseline. Using snake_case for `qwen_model_tp_plan_stable` aligns with the existing examples (`custom_parallel_plan`) and requires no renaming.
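For context, here is a hedged sketch of how a dict-style plan like this is consumed by PyTorch's tensor-parallel API. NeMo-RL applies the plan internally via `policy.dtensor_cfg.custom_parallel_plan`, so the import path, checkpoint name, and launch details below are assumptions for illustration, and pattern-style plan keys require a recent PyTorch build.

```python
# Illustrative only; run under torchrun with one GPU per tensor-parallel rank.
import os

from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor.parallel import parallelize_module
from transformers import AutoModelForCausalLM

from examples.custom_parallel import qwen_model_tp_plan_stable  # assumed import path

tp_size = int(os.environ.get("WORLD_SIZE", "4"))  # e.g. TP=4, where the default plan degraded
tp_mesh = init_device_mesh("cuda", (tp_size,))

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-1.5B")  # any Qwen2-family checkpoint
parallelize_module(model, tp_mesh, qwen_model_tp_plan_stable)  # shards weights per the plan
```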
Signed-off-by: Terry Kong <[email protected]>
Verified locally. Looks good to me!
closes #1227
Summary by CodeRabbit

New Features
- New numerically stable tensor-parallel plan (`qwen_model_tp_plan_stable`) for model execution. Enable via `policy.dtensor_cfg.custom_parallel_plan`.

Documentation
- A note explaining the default plan's instability and how to enable the new plan.
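A hedged sketch of what the override could look like, assuming an OmegaConf-backed config; the config structure and dotted path are assumptions inferred from the summary above, not verified against NeMo-RL's actual schema.

```python
# Illustrative only: shows the shape of the override described above.
from omegaconf import OmegaConf

cfg = OmegaConf.create({"policy": {"dtensor_cfg": {"custom_parallel_plan": None}}})

# Point the dtensor config at the new, numerically stable plan.
cfg.policy.dtensor_cfg.custom_parallel_plan = (
    "examples.custom_parallel.qwen_model_tp_plan_stable"
)

print(OmegaConf.to_yaml(cfg))
```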