Disable donated buffer when benchmarking
xuzhao9 committed Dec 2, 2024
1 parent c509d84 commit afa8e31
Showing 1 changed file with 5 additions and 0 deletions.
5 changes: 5 additions & 0 deletions tritonbench/operators/layer_norm/operator.py
@@ -34,6 +34,11 @@ def torch_layer_norm(self, *args):
 
     @register_benchmark()
     def torch_compile_layer_norm(self, *args):
+        # We need to run backward multiple times for proper benchmarking,
+        # so the donated buffer has to be disabled.
+        if self.mode == Mode.BWD or self.mode == Mode.FWD_BWD:
+            import torch._functorch.config
+            torch._functorch.config.donated_buffer = False
         @torch.compile
         def inner(*args):
             return F.layer_norm(*args)
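For context, a minimal standalone sketch of why the flag matters when benchmarking the backward pass: the harness replays backward on the same compiled graph several times, and with donated buffers enabled the saved forward buffers may be donated and freed after the first call. This is not tritonbench code; the shapes, device selection, and loop below are illustrative assumptions.

    # Minimal sketch (assumed shapes and a generic replay loop, not the tritonbench harness).
    import torch
    import torch.nn.functional as F
    import torch._functorch.config

    # Disable donated buffers so the compiled backward can be replayed repeatedly.
    torch._functorch.config.donated_buffer = False

    @torch.compile
    def compiled_layer_norm(x, normalized_shape):
        return F.layer_norm(x, normalized_shape)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    x = torch.randn(32, 4096, device=device, requires_grad=True)
    y = compiled_layer_norm(x, (4096,))
    grad = torch.randn_like(y)

    # Replay backward multiple times, as a benchmark loop would. With the flag
    # left at its default, a second backward call can fail because the saved
    # forward buffers may already have been donated and freed.
    for _ in range(10):
        y.backward(grad, retain_graph=True)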
