Lower stream-parallelized matmul #5302
Conversation
Review updated until commit 89f3173
!test
NVFUSER_DECLARE_CLONE_AND_CREATE

static ForLoop* createFromIterDomain(Val* index, IterDomain* iter_domain);
I'm on the fence about this. The method is coupled with the ForLoop class, so I moved it here to save some typing. The downside is weaker access control, because createFromIterDomain can now access private fields/methods of ForLoop.
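To illustrate the tradeoff being weighed here, below is a minimal, self-contained sketch. The `Val`, `IterDomain`, and `ForLoop` types are simplified stand-ins, not nvFuser's real classes: a static member factory can reach the class's private state (including a private constructor), whereas a free function would be limited to the public API.

```cpp
#include <cassert>
#include <string>

// Hypothetical stand-ins for nvFuser's IR nodes, for illustration only.
struct Val {
  std::string name;
};

struct IterDomain {
  Val* extent = nullptr;
};

class ForLoop {
 public:
  // Static member factory: convenient, but it can touch private members
  // of ForLoop (weaker encapsulation than a free function would have).
  static ForLoop* createFromIterDomain(Val* index, IterDomain* iter_domain) {
    auto* loop = new ForLoop(index, iter_domain);
    loop->unrolled_ = false;  // allowed: member functions see private fields
    return loop;
  }

  IterDomain* iterDomain() const {
    return iter_domain_;
  }

 private:
  // Private constructor: only reachable from members like the factory above.
  ForLoop(Val* index, IterDomain* iter_domain)
      : index_(index), iter_domain_(iter_domain) {}

  Val* index_;
  IterDomain* iter_domain_;
  bool unrolled_ = false;
};
```

A free function `createFromIterDomain(Val*, IterDomain*)` would keep the factory decoupled from ForLoop's internals, at the cost of needing a public constructor or friend declaration.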
!test

!test
nsarka left a comment
LGTM, I just had a minor question
}

std::vector<Val*> cloned_outs = ir_cloner.clone(group.outputs());
// All expressions in the group are expected to be stream parallelized in
Do we enforce this constraint? If so, is there an assertion somewhere?
We don't, but we should. I'm waiting for an isResharding-like method to do that easily.
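As a sketch of the kind of assertion being discussed, the check could walk every TensorView output of the group and require that at least one of its IterDomains is parallelized on Stream. The types below are hypothetical simplifications, not nvFuser's real IR:

```cpp
#include <cassert>
#include <vector>

// Illustrative stand-ins, not nvFuser's real IR types.
enum class ParallelType { Serial, Stream };

struct IterDomain {
  ParallelType parallel_type = ParallelType::Serial;
};

struct TensorView {
  std::vector<IterDomain*> loop_domain;
};

// True if any IterDomain of the TensorView is stream-parallelized.
bool hasStreamIterDomain(const TensorView* tv) {
  for (const IterDomain* id : tv->loop_domain) {
    if (id->parallel_type == ParallelType::Stream) {
      return true;
    }
  }
  return false;
}

// The missing assertion: every output of the group must be
// stream-parallelized.
void assertAllStreamParallelized(const std::vector<TensorView*>& outs) {
  for (const TensorView* tv : outs) {
    assert(
        hasStreamIterDomain(tv) &&
        "expected every group output to be stream-parallelized");
  }
}
```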
Priya2698 left a comment
Apologies for the delayed review; this fell off my radar.
I have left some initial questions.
I am working on #5309 and hope to have a PR soon, which should unblock this PR.
// Finds the stream IterDomain in the outputs of a segment.
IterDomain* findStreamIterDomain(const std::vector<Val*>& outs) {
  for (auto* out : ir_utils::filterByType<TensorView>(outs)) {
So we are finding the stream ID in any of the outputs of a segment? Why not use the above variation directly with any of the segment outputs, since they must have mapped stream IDs?
Because I'm not sure about CPU-scalar TensorViews from composite ops. But I should probably harden the check to enforce that every TensorView has a Stream IterDomain. Wdyt?
In their blackbox state, it does not look like we can currently support SDPA ops, for example. So adding an assert makes sense to signal that something is wrong. I guess this is something I need to fix in PropagateShardingsPass as well.
> In their blackbox state, it does not look like we can currently support SDPA ops, for example.

Why not? At least batch and/or head can easily be parallelized on streams without changing the implementation of the SDPA op, assuming ShardByStreams are added properly, of course.
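A sketch of that idea: a blackbox op can be stream-parallelized over batch by running the unmodified op on one batch slice per stream iteration. The `Slice` type and `blackbox_op` callback below are illustrative assumptions, not nvFuser's API; in a real lowering each iteration's work would be enqueued on its own CUDA stream rather than run in a serial host loop.

```cpp
#include <cstddef>
#include <functional>
#include <vector>

// Hypothetical: one batch slice of a tensor, flattened to a float vector.
using Slice = std::vector<float>;

// Runs an unmodified blackbox op (e.g. an SDPA-like op) once per batch
// slice. In the lowered fusion, this loop would be a stream-parallelized
// ForLoop, with each slice dispatched on a distinct stream.
std::vector<Slice> runPerBatchSlice(
    const std::vector<Slice>& batches,
    const std::function<Slice(const Slice&)>& blackbox_op) {
  std::vector<Slice> outs;
  outs.reserve(batches.size());
  for (const Slice& b : batches) {
    // The op itself is unchanged; only the extent it sees is sliced.
    outs.push_back(blackbox_op(b));
  }
  return outs;
}
```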
auto* out = ops::newValLike(in, *in->getDataType())->as<TensorView>();

TransformReplay::selfReplay(in->domain(), out->domain());
// This is conservative and suboptimal. Consider reusing the algorithm in
Will this be resolved using #5316?
No. It's one of the cases where out's contiguity ought to differ from in's due to the slicing effect.
Oh okay, got it!
So in such cases the replay may in fact overwrite a correct contiguity, since most users of selfReplay create the new TensorDomain using the ops API, which sets the contiguity correctly. This is something we should consider for #5316.
!test
For #5289