chunk converter validator #3120
Conversation
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2024-08-27 02:04:48.041542+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2024-08-27 02:05:14.002501+00:00
@@ -908,15 +908,20 @@
meta_data = node.args[0].meta.get("tensor_meta")
if meta_data is None:
return False
shape = meta_data.shape
dynamic_shape = has_dynamic_shape(shape)
- if (dynamic_shape):
+ if dynamic_shape:
return False
return True
-@dynamo_tensorrt_converter(torch.ops.aten.chunk.default, supports_dynamic_shapes=True, capability_validator=chunk_validator,)
+
+@dynamo_tensorrt_converter(
+ torch.ops.aten.chunk.default,
+ supports_dynamic_shapes=True,
+ capability_validator=chunk_validator,
+)
@enforce_tensor_types(
{
0: (TRTTensor,),
}
)
@@ -903,7 +904,17 @@ def aten_ops_slice(
)


@dynamo_tensorrt_converter(torch.ops.aten.chunk.default)
def chunk_validator(node: Node) -> bool:
    meta_data = node.args[0].meta.get("tensor_meta")
Also check for the key val for the shape first, else check tensor_meta, and if neither is present return False.
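A minimal sketch of that suggestion, assuming the standard torch.fx node meta layout (a "val" entry holding a FakeTensor and a "tensor_meta" entry holding a TensorMetadata, both exposing .shape) and the imports already present in this module:

def chunk_validator(node: Node) -> bool:
    # Prefer the "val" key for the shape; fall back to "tensor_meta".
    meta = node.args[0].meta.get("val", node.args[0].meta.get("tensor_meta"))
    # If neither key is present, reject the node.
    if meta is None:
        return False
    # Accept only static input shapes.
    return not has_dynamic_shape(meta.shape)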
    dynamic_shape = has_dynamic_shape(shape)
    if (dynamic_shape):
Consider using has_dynamic_shape(shape) directly in the if condition.
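Applied to the snippet above, the intermediate variable would drop out, e.g.:

    # Test the helper directly instead of storing its result first.
    if has_dynamic_shape(shape):
        return False
    return True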
@@ -51,6 +52,7 @@ def forward(self, input):
        self.run_test(
            TestChunk(),
            input,
            enable_passes=True,
Any reason we need the passes for this test?
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2024-08-30 08:21:14.119560+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py 2024-08-30 08:21:35.568075+00:00
@@ -690,11 +690,13 @@
)
@dynamo_tensorrt_converter(
torch.ops.aten.split.Tensor,
- capability_validator= (has_static_shapes_in_args([0]) and has_static_shapes_in_args([1])),
+ capability_validator=(
+ has_static_shapes_in_args([0]) and has_static_shapes_in_args([1])
+ ),
supports_dynamic_shapes=True,
)
@dynamo_tensorrt_converter(
torch.ops.aten.split.sizes,
capability_validator=has_static_shapes_in_args([1]),
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py 2024-08-30 08:21:14.143560+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py 2024-08-30 08:21:36.876204+00:00
@@ -79,101 +79,102 @@
TestChunk(),
input,
use_dynamo_tracer=True,
)
+
#######################Dynamic cases################
- # @parameterized.expand(
- # [
- # ((1,), (1,), (3,), 3, 0),
- # ((3,), (3,), (4,), 3, 0),
- # ((4,), (4,), (6,), 3, 0),
- # ((6,), (6,), (9,), 3, 0),
- # ((3,), (3,), (4,), 1, -1),
- # ((3,), (3,), (4,), 3, -1),
- # ((3,), (3,), (4,), 4, -1),
- # ]
- # )
- # def test_chunk_1D(self, min_shape, opt_shape, max_shape, chunks, dim):
- # class TestChunk(torch.nn.Module):
- # def forward(self, input):
- # out = torch.ops.aten.chunk.default(input, chunks, dim)
- # return out
+# @parameterized.expand(
+# [
+# ((1,), (1,), (3,), 3, 0),
+# ((3,), (3,), (4,), 3, 0),
+# ((4,), (4,), (6,), 3, 0),
+# ((6,), (6,), (9,), 3, 0),
+# ((3,), (3,), (4,), 1, -1),
+# ((3,), (3,), (4,), 3, -1),
+# ((3,), (3,), (4,), 4, -1),
+# ]
+# )
+# def test_chunk_1D(self, min_shape, opt_shape, max_shape, chunks, dim):
+# class TestChunk(torch.nn.Module):
+# def forward(self, input):
+# out = torch.ops.aten.chunk.default(input, chunks, dim)
+# return out
- # input_specs = [
- # Input(
- # min_shape=min_shape,
- # opt_shape=opt_shape,
- # max_shape=max_shape,
- # ),
- # ]
- # self.run_test_with_dynamic_shape(
- # TestChunk(),
- # input_specs,
- # use_dynamo_tracer = True,
- # )
+# input_specs = [
+# Input(
+# min_shape=min_shape,
+# opt_shape=opt_shape,
+# max_shape=max_shape,
+# ),
+# ]
+# self.run_test_with_dynamic_shape(
+# TestChunk(),
+# input_specs,
+# use_dynamo_tracer = True,
+# )
- # @parameterized.expand(
- # [
- # ((3, 4), (3, 4), (4, 4), 1, 0),
- # ((3, 4), (3, 4), (4, 4), 3, 0),
- # ((3, 4), (3, 4), (4, 4), 4, 0),
- # ((3, 4), (3, 4), (4, 4), 2, -2),
- # ((3, 4), (3, 4), (4, 4), 6, -2),
- # ((3, 4), (3, 4), (4, 4), 3, 1),
- # ((3, 4), (3, 4), (4, 4), 4, 1),
- # ((3, 4), (3, 4), (4, 4), 5, -1),
- # ]
- # )
- # def test_chunk_2D(self, min_shape, opt_shape, max_shape, chunks, dim):
- # class TestChunk(torch.nn.Module):
- # def forward(self, input):
- # out = torch.ops.aten.chunk.default(input, chunks, dim)
- # return out
+# @parameterized.expand(
+# [
+# ((3, 4), (3, 4), (4, 4), 1, 0),
+# ((3, 4), (3, 4), (4, 4), 3, 0),
+# ((3, 4), (3, 4), (4, 4), 4, 0),
+# ((3, 4), (3, 4), (4, 4), 2, -2),
+# ((3, 4), (3, 4), (4, 4), 6, -2),
+# ((3, 4), (3, 4), (4, 4), 3, 1),
+# ((3, 4), (3, 4), (4, 4), 4, 1),
+# ((3, 4), (3, 4), (4, 4), 5, -1),
+# ]
+# )
+# def test_chunk_2D(self, min_shape, opt_shape, max_shape, chunks, dim):
+# class TestChunk(torch.nn.Module):
+# def forward(self, input):
+# out = torch.ops.aten.chunk.default(input, chunks, dim)
+# return out
- # input_specs = [
- # Input(
- # min_shape=min_shape,
- # opt_shape=opt_shape,
- # max_shape=max_shape,
- # ),
- # ]
- # self.run_test_with_dynamic_shape(
- # TestChunk(),
- # input_specs,
- # use_dynamo_tracer = True,
- # )
+# input_specs = [
+# Input(
+# min_shape=min_shape,
+# opt_shape=opt_shape,
+# max_shape=max_shape,
+# ),
+# ]
+# self.run_test_with_dynamic_shape(
+# TestChunk(),
+# input_specs,
+# use_dynamo_tracer = True,
+# )
- # @parameterized.expand(
- # [
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 0),
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -3),
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, 1),
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, 1),
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 6, -2),
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 2),
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -1),
- # ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, -1),
- # ]
- # )
- # def test_chunk_3D(self, min_shape, opt_shape, max_shape, chunks, dim):
- # class TestChunk(torch.nn.Module):
- # def forward(self, input):
- # out = torch.ops.aten.chunk.default(input, chunks, dim)
- # return out
+# @parameterized.expand(
+# [
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 0),
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -3),
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, 1),
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, 1),
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 6, -2),
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 2),
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -1),
+# ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, -1),
+# ]
+# )
+# def test_chunk_3D(self, min_shape, opt_shape, max_shape, chunks, dim):
+# class TestChunk(torch.nn.Module):
+# def forward(self, input):
+# out = torch.ops.aten.chunk.default(input, chunks, dim)
+# return out
- # input_specs = [
- # Input(
- # min_shape=min_shape,
- # opt_shape=opt_shape,
- # max_shape=max_shape,
- # ),
- # ]
- # self.run_test_with_dynamic_shape(
- # TestChunk(),
- # input_specs,
- # use_dynamo_tracer = True,
- # )
+# input_specs = [
+# Input(
+# min_shape=min_shape,
+# opt_shape=opt_shape,
+# max_shape=max_shape,
+# ),
+# ]
+# self.run_test_with_dynamic_shape(
+# TestChunk(),
+# input_specs,
+# use_dynamo_tracer = True,
+# )
if __name__ == "__main__":
run_tests()
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Since this supports only static shapes now, would it be better to remove supports_dynamic_shapes=True?
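If the flag were dropped, the registration would presumably read something like this (a sketch, not the merged code):

@dynamo_tensorrt_converter(
    torch.ops.aten.chunk.default,
    capability_validator=chunk_validator,
)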
#######################Dynamic cases################

# @parameterized.expand(
# [
# ((1,), (1,), (3,), 3, 0),
# ((3,), (3,), (4,), 3, 0),
# ((4,), (4,), (6,), 3, 0),
# ((6,), (6,), (9,), 3, 0),
# ((3,), (3,), (4,), 1, -1),
# ((3,), (3,), (4,), 3, -1),
# ((3,), (3,), (4,), 4, -1),
# ]
# )
# def test_chunk_1D(self, min_shape, opt_shape, max_shape, chunks, dim):
Probably it's better to add a unittest.skip, similar to
@unittest.skip("Pending aten.split converter. Currently tested by E2E")
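Roughly how that skip could look on the dynamic-shape tests; the class name TestChunkDynamicConverter is taken from the later style-bot diff in this thread, and unittest and DispatchTestCase are assumed to be imported by the test file already:

@unittest.skip("Pending aten.split converter. Currently tested by E2E")
class TestChunkDynamicConverter(DispatchTestCase):
    ...  # dynamic-shape chunk tests, disabled for now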
There are some changes that do not conform to Python style guidelines:
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py 2024-09-04 21:57:10.847489+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py 2024-09-04 21:57:33.075805+00:00
@@ -83,11 +83,11 @@
use_dynamo_tracer=True,
)
#######################Dynamic cases#######################
-#The tests are skipped for now. Will be addressed once https://github.com/pytorch/pytorch/issues/134663 is addressed
+# The tests are skipped for now. Will be addressed once https://github.com/pytorch/pytorch/issues/134663 is addressed
@unittest.skip("Pending aten.split converter. Currently tested by E2E")
class TestChunkDynamicConverter(DispatchTestCase):
@parameterized.expand(
[
((1,), (1,), (3,), 3, 0),
@@ -692,7 +692,9 @@ def aten_ops_softmax(

 @dynamo_tensorrt_converter(
     torch.ops.aten.split.Tensor,
-    capability_validator=has_static_shapes_in_args([1]),
+    capability_validator=(
+        has_static_shapes_in_args([0]) and has_static_shapes_in_args([1])
has_static_shapes_in_args([0]) - is this only necessary for chunk? How was it passing before?
args[0] is the input and args[1] is split_size_or_sections. Initially I thought we should add this since dynamic input tensor split is not supported, but you are right: the error comes in torch.export before the interpreter stage, so having this validator does not make sense. We are anyway skipping the test so that CI does not fail, so I will remove this.
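With the input-shape check removed, the registration would presumably revert to validating only args[1], along the lines of:

@dynamo_tensorrt_converter(
    torch.ops.aten.split.Tensor,
    capability_validator=has_static_shapes_in_args([1]),
    supports_dynamic_shapes=True,
)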
#######################Dynamic cases#######################
#The tests are skipped for now. Will be addressed once https://github.com/pytorch/pytorch/issues/134663 is addressed
@unittest.skip("Pending aten.split converter. Currently tested by E2E")
What does this message imply?
Oh, I think it was a misinterpretation: in an earlier comment I thought you wanted me to add this message. I will change it to @unittest.skip("Pending aten.split dynamic input torch.export guard bug. Issue- pytorch/pytorch#134663").
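The updated decorator would then read:

@unittest.skip(
    "Pending aten.split dynamic input torch.export guard bug. Issue- pytorch/pytorch#134663"
)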
…mic shape check of input in validator
This validator is for the purpose of dynamic shape support.