disabling colfax gemm for CUDA 12.4
Summary:
TSIA: disable the colfax_cutlass_matmul benchmark when PyTorch reports CUDA 12.4.

based on T203273285

Reviewed By: aakhundov, SamGinzburg

Differential Revision: D67057508

fbshipit-source-id: d4bd26cdeb76c09c915d55a2b64be0467d0ec667
adamomainz authored and facebook-github-bot committed Dec 11, 2024
1 parent e20a1df commit 8fabcf4
Showing 1 changed file with 1 addition and 1 deletion.
tritonbench/operators/gemm/operator.py (1 addition, 1 deletion)
@@ -235,7 +235,7 @@ def hstu_triton_matmul(self, a, b, bias) -> Callable:
         else:
             return lambda: hstu_triton_matmul_kernel(a, b)
 
-    @register_benchmark(enabled=bool(colfax_gemm))
+    @register_benchmark(enabled=bool(colfax_gemm) and torch.version.cuda != "12.4")
     def colfax_cutlass_matmul(self, a, b, bias) -> Callable:
         assert colfax_gemm, f"colfax_gemm operator is not available."
         if not bias == None:
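For context, a minimal sketch of the gating pattern the change relies on: a registration decorator whose `enabled` flag is evaluated once at import time, so the benchmark is simply never collected on CUDA 12.4 builds. This is an assumed stand-in, not tritonbench's actual `register_benchmark` implementation, and the `colfax_gemm` placeholder below substitutes for the optional colfax CUTLASS extension.

# Minimal sketch (assumption, not tritonbench's real code) of an
# import-time `enabled` gate like the one used in the commit.
from typing import Callable, Dict

import torch

_BENCHMARKS: Dict[str, Callable] = {}

def register_benchmark(enabled: bool = True):
    def decorator(fn: Callable) -> Callable:
        if enabled:
            _BENCHMARKS[fn.__name__] = fn  # collected only when enabled
        return fn  # the function itself stays importable either way
    return decorator

colfax_gemm = None  # placeholder for the optional colfax CUTLASS extension

# torch.version.cuda is a string such as "12.4" (or None on CPU builds),
# so the comparison happens once when the module is imported.
@register_benchmark(enabled=bool(colfax_gemm) and torch.version.cuda != "12.4")
def colfax_cutlass_matmul(a, b, bias=None):
    ...

With this gating, `_BENCHMARKS` omits `colfax_cutlass_matmul` entirely under CUDA 12.4, so no per-run skip logic is needed.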
