[Kernel] Adding fused bias add to cutlass_scaled_mm_dq kernel #5390
Conversation
  // Hopper
  cutlass_scaled_mm_dq_bias_sm90(c, a, b, a_scales, b_scales, bias);
} else {
  assert(0 && "cutlass_scaled_mm_dq_bias only supports Hopper for now");
Can you please support Ada as well for FP8? Your test looks like it would run on Ada too.
//   KernelSchedule, EpilogueSchedule>>(
//   out, a, b, a_scales, b_scales);
// }
assert(0 && "kInt8 not supported in cutlass_scaled_mm_dq_bias yet");
Use TORCH_CHECK with a message instead of assert
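For reference, a minimal sketch of that suggestion (assuming the PyTorch C++ headers that provide `TORCH_CHECK` are already included in this file, as they are elsewhere in the vLLM extension code):

```cpp
// Sketch of the suggested change: TORCH_CHECK(false, ...) throws a c10::Error
// with a readable message that surfaces as a Python exception, instead of
// aborting the whole process the way assert(0 && "...") does.
TORCH_CHECK(false, "kInt8 not supported in cutlass_scaled_mm_dq_bias yet");
```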
  // Hopper
  cutlass_scaled_mm_dq_bias_sm90(c, a, b, a_scales, b_scales, bias);
} else {
  assert(0 && "cutlass_scaled_mm_dq_bias only supports Hopper for now");
Use TORCH_CHECK with a message instead of assert
Yeah, I agree that's the better way of doing things. This PR is a quick and slightly hacky way to test out the bias-add fusion. @tlrmchlsmth do you think it's better to add the bias add after merging #5391?
Yeah, hopefully we can get #5391 landed quickly so as to not block this one :)
Force-pushed from 99317df to 1fc75ea.
When #5391 is merged, would it make more sense for me to modify the C++ function signature of cutlass_scaled_mm to include bias, and add an if/else condition for calling the kernels with/without bias fusion? That should remove some redundancy at the C++ level.
Yeah, I think that's a great idea. I'm supportive of pushing the dispatching for bias vs no bias down to lower levels. In particular, I think we should try to do it at a lower level than
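For illustration, a rough sketch of that direction, assuming a single public entry point with an optional bias. Only `cutlass_scaled_mm_dq_bias_sm90` appears in this PR; the scales-only `cutlass_scaled_mm_dq_sm90` declaration and the combined dispatcher below are assumptions, not the actual vLLM code:

```cpp
#include <torch/extension.h>

// Assumed to be provided by the CUTLASS kernel translation units.
void cutlass_scaled_mm_dq_sm90(torch::Tensor& c, torch::Tensor const& a,
                               torch::Tensor const& b,
                               torch::Tensor const& a_scales,
                               torch::Tensor const& b_scales);
void cutlass_scaled_mm_dq_bias_sm90(torch::Tensor& c, torch::Tensor const& a,
                                    torch::Tensor const& b,
                                    torch::Tensor const& a_scales,
                                    torch::Tensor const& b_scales,
                                    torch::Tensor const& bias);

// Single entry point: the with/without-bias split happens here rather than in
// a separate *_bias op exposed to Python.
void cutlass_scaled_mm_dq(torch::Tensor& c, torch::Tensor const& a,
                          torch::Tensor const& b,
                          torch::Tensor const& a_scales,
                          torch::Tensor const& b_scales,
                          c10::optional<torch::Tensor> const& bias) {
  if (bias.has_value()) {
    // Epilogue with the fused bias add.
    cutlass_scaled_mm_dq_bias_sm90(c, a, b, a_scales, b_scales, *bias);
  } else {
    // Original scales-only epilogue.
    cutlass_scaled_mm_dq_sm90(c, a, b, a_scales, b_scales);
  }
}
```

Pushing the branch inside one dispatcher would keep a single op registration and avoid duplicating the per-architecture routing for the bias and no-bias cases.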
Force-pushed from b67662f to c44b682.
Rebased onto the newly refactored scaled_mm code and got to a working version. More refactoring remains to be done on the CUTLASS kernel code.
Force-pushed from c44b682 to 4245231.
Hey @cyang49, I've actually been working on adding bias/zero-point support to the epilogues for CUTLASS 2.x as well! I just posted the PR: #5560. I took some inspiration from yours as well (specifically using the
@ProExpertProg thanks, I will take a look. There are a few other things at work I need to attend to early next week. It's likely that #5560 will be merged by the time I get back to finalizing this.
Okay, thanks for letting me know. I'll go ahead with mine first, then help you land this one once you get around to it.
Closing in favor of #5560
This PR adds bias-add support to the `cutlass_scaled_mm_dq` op, which is required for optimized performance of the W8A8 linear layer with bias. I added support for fp8, but not the int8 path yet, as my use case is fp8 only.
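To make the intended semantics concrete, here is a hedged, unfused reference of what the fused kernel is assumed to compute; the shapes (per-row `a_scales`, per-column `b_scales` and `bias`) and the fp16 output dtype are illustrative assumptions, not taken from this PR:

```cpp
#include <torch/torch.h>

// Unfused reference: dequantize the low-precision GEMM result with per-row
// and per-column scales, then add a per-output-channel bias. Assumed shapes:
// a_scales is (M, 1), b_scales is (1, N), bias is (N,).
torch::Tensor scaled_mm_bias_reference(torch::Tensor const& a,
                                       torch::Tensor const& b,
                                       torch::Tensor const& a_scales,
                                       torch::Tensor const& b_scales,
                                       torch::Tensor const& bias) {
  auto accum = torch::matmul(a.to(torch::kFloat32), b.to(torch::kFloat32));
  return (a_scales * b_scales * accum + bias).to(torch::kFloat16);
}
```

Fusing the bias add into the CUTLASS epilogue avoids launching a separate elementwise kernel and an extra read/write pass over the output tensor.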
Microbenchmarking results show that it performs similarly to `cutlass_scaled_mm_dq`. Times are in microseconds (us). Columns represent different m (batch_size * seq_len). The kernels being compared are:

- `torch_mm:f16`: fp16 PyTorch (using cuBLAS)
- `cutlass_scaled_mm_bias0:f8e4`: original `cutlass_scaled_mm_dq` fp8 without bias-add support
- `cutlass_scaled_mm_bias1:f8e4`: `cutlass_scaled_mm_dq` fp8 with bias-add support

cc @njhill
It would be helpful if the vLLM team could provide feedback on getting this PR accepted. Thanks!
PR Checklist
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:

- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Model]` for adding a new model or improving an existing model. Model name should appear in the title.
- `[Frontend]` for changes on the vLLM frontend (e.g., OpenAI API server, `LLM` class, etc.)
- `[Kernel]` for changes affecting CUDA kernels or other compute kernels.
- `[Core]` for changes in the core vLLM logic (e.g., `LLMEngine`, `AsyncLLMEngine`, `Scheduler`, etc.)
- `[Hardware][Vendor]` for hardware-specific changes. Vendor name should appear in the prefix (e.g., `[Hardware][AMD]`).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:

- Use `format.sh` to format your code.
- Add documentation to `docs/source/` if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with `rfc-required` and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

- The reviewer will put an `action-required` label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!