
Commit 7a56a7c

angelayi authored and facebook-github-bot committed
Change check to use torch.compiler.is_compiling()
Summary: Fixes D82792378, where we were creating an op in the graph while unflattening the JaggedTensor. I think we should change all of the is_torchdynamo_compiling checks to torch.compiler.is_compiling, since they gate similar behavior/assumptions in PT2.

Reviewed By: TroyGarden

Differential Revision: D82837368

fbshipit-source-id: 63fa1aff39207b8e3dc1d548b9165f595bfccf65
Parent: eb21763 · Commit: 7a56a7c

1 file changed: +1 −1 lines changed

torchrec/sparse/jagged_tensor.py

Lines changed: 1 addition & 1 deletion
@@ -1066,7 +1066,7 @@ def _jt_flatten_spec(t: JaggedTensor, spec: TreeSpec) -> List[Optional[torch.Ten
 def _assert_tensor_has_no_elements_or_has_integers(
     tensor: Optional[torch.Tensor], tensor_name: str
 ) -> None:
-    if is_torchdynamo_compiling() or tensor is None:
+    if torch.compiler.is_compiling() or tensor is None:
         # Skipping the check tensor.numel() == 0 to not guard on pt2 symbolic shapes.
         # TODO(ivankobzarev): Use guard_size_oblivious to pass tensor.numel() == 0 once it is torch scriptable.
         return
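The gating pattern the patched function follows can be sketched without PyTorch. This is a minimal stand-in, not the real torchrec code: `is_compiling()` here is a hypothetical stub for `torch.compiler.is_compiling()`, and a plain list stands in for `torch.Tensor` so the sketch runs anywhere.

```python
from typing import Optional


def is_compiling() -> bool:
    # Stand-in for torch.compiler.is_compiling(); in real code you would
    # call torch.compiler.is_compiling() directly. Returns True when the
    # function is being traced/compiled under PT2.
    return False


def assert_has_no_elements_or_has_integers(
    tensor: Optional[list], tensor_name: str
) -> None:
    # Under PT2 compilation the element-count check is skipped entirely,
    # so no guard on symbolic shapes (and no extra op) lands in the graph.
    if is_compiling() or tensor is None:
        return
    assert len(tensor) == 0 or all(isinstance(x, int) for x in tensor), (
        f"{tensor_name} must be empty or contain only integers"
    )
```

In eager mode the assertion fires as usual; under compilation the early return keeps the validation out of the traced graph, which is the behavior this commit preserves while switching to the public `torch.compiler.is_compiling()` API.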

0 commit comments