Fix CI test failures #55
`.github/workflows/pr.yaml`:

```diff
@@ -3,6 +3,7 @@ on:
   pull_request:
     paths:
       - .ci/*
+      - test/test_gpu/*
       - tritonbench/*
       - .github/workflows/pr.yaml
   push:
```
The flash attention benchmark operator:

```diff
@@ -10,7 +10,7 @@
 import torch
 
-from tritonbench.kernels.triton_fused_attention import attention as triton_attention
+from tritonbench.kernels.triton_fused_attention import attention_opt as triton_attention
 from tritonbench.utils.triton_op import (
     BenchmarkOperator,
     BenchmarkOperatorMetrics,
```
```diff
@@ -110,7 +110,7 @@ def triton_flash_v2(
         triton_q, triton_k, triton_v = self.triton_preprocess(q, k, v)
         # full fp8 will be enabled if type of q,k,v is fp8
         return lambda: triton_attention(
-            triton_q, triton_k, triton_v, False, self.sm_scale
+            triton_q, triton_k, triton_v, False, self.sm_scale, "base"
         )
 
     def get_x_val(self, _example_inputs) -> Tuple[int, int, int, int]:
```

Review thread on this change:

> cc @manman-ren

> Sorry for the breakage. What is the error message?

> @manman-ren here is the error message: https://github.com/pytorch-labs/tritonbench/actions/runs/11903695593/job/33171153429?pr=55. By default we are using the PyTorch built-in Triton in the CI.
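The diff shows that the kernel entry point was renamed from `attention` to `attention_opt` and gained a trailing variant selector (`"base"`), so call sites importing the old name break. A shim like the following would keep one call site working against either version of the kernel; it is a sketch that assumes the leading parameters are identical between the two functions, not code from this PR:

```python
# Sketch: normalize both kernel entry points behind one wrapper.
try:
    # Newer kernels expose attention_opt, which takes a trailing
    # variant string ("base" selects the default schedule).
    from tritonbench.kernels.triton_fused_attention import attention_opt as _attention

    def triton_attention(q, k, v, causal, sm_scale, variant="base"):
        return _attention(q, k, v, causal, sm_scale, variant)

except ImportError:
    # Older kernels only expose attention, with no variant selector.
    from tritonbench.kernels.triton_fused_attention import attention as _attention

    def triton_attention(q, k, v, causal, sm_scale, variant="base"):
        return _attention(q, k, v, causal, sm_scale)
```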
Review comment:

> Liger kernels require the `triton` package rather than `pytorch-triton`. I assume `triton` does not conflict with `pytorch-triton`, because `pytorch-triton` doesn't cover `import triton`. I tested it in a local environment and it works well, but I'm not sure whether this is a safe way to do it.
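For context, both the upstream `triton` wheel and `pytorch-triton` install a module that is imported as `triton`, which is why installing one next to the other can clash. A quick way to see which distributions a given environment actually has, using only the standard library (a minimal sketch, not part of this PR):

```python
import importlib.metadata as md

# pytorch-triton is the pinned build shipped with PyTorch nightlies;
# upstream Triton main is published on PyPI as "triton". Both provide
# the same importable "triton" module, which is where a conflict arises.
for dist in ("triton", "pytorch-triton"):
    try:
        print(f"{dist}: {md.version(dist)}")
    except md.PackageNotFoundError:
        print(f"{dist}: not installed")
```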
Reply:

> We are planning to have separate tests for Triton main and pytorch-triton. Our Docker image has two conda environments, `pytorch` and `triton-main`, so that both can be tested in the same image. Right now we are only deploying tests against pytorch-triton; we will set up the Triton main config as `skip_tests_h100_triton_main.yaml`.
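The schema of `skip_tests_h100_triton_main.yaml` is not shown in this thread. As an illustration only, assuming a flat YAML list of test names to skip (a hypothetical layout, not the repository's actual format), a test harness could consume such a file like this:

```python
import yaml  # PyYAML

# Hypothetical schema: a flat YAML list of test names to skip.
with open("skip_tests_h100_triton_main.yaml") as f:
    skipped = set(yaml.safe_load(f) or [])

# Placeholder test names, for illustration only.
all_tests = ["flash_attention", "fp8_gemm", "softmax"]
selected = [t for t in all_tests if t not in skipped]
print("running:", selected)
```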