
chunk converter validator #3120

Merged
merged 2 commits into main from chunk_dynamic_dim_validator
Sep 6, 2024

Conversation

@apbose apbose commented Aug 27, 2024

This validator is added as part of dynamic shape support: it rejects chunk inputs with dynamic shapes so that they fall back to PyTorch.

@github-actions github-actions bot added component: tests Issues re: Tests component: conversion Issues re: Conversion stage component: api [Python] Issues re: Python API component: dynamo Issues relating to the `torch.compile` or `torch._dynamo.export` paths labels Aug 27, 2024
@github-actions github-actions bot requested a review from peri044 August 27, 2024 02:04
@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2024-08-27 02:04:48.041542+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2024-08-27 02:05:14.002501+00:00
@@ -908,15 +908,20 @@
    meta_data = node.args[0].meta.get("tensor_meta")
    if meta_data is None:
        return False
    shape = meta_data.shape
    dynamic_shape = has_dynamic_shape(shape)
-    if (dynamic_shape):
+    if dynamic_shape:
        return False
    return True

-@dynamo_tensorrt_converter(torch.ops.aten.chunk.default, supports_dynamic_shapes=True, capability_validator=chunk_validator,)
+
+@dynamo_tensorrt_converter(
+    torch.ops.aten.chunk.default,
+    supports_dynamic_shapes=True,
+    capability_validator=chunk_validator,
+)
@enforce_tensor_types(
    {
        0: (TRTTensor,),
    }
)

@@ -903,7 +904,17 @@ def aten_ops_slice(
)


@dynamo_tensorrt_converter(torch.ops.aten.chunk.default)
def chunk_validator(node: Node) -> bool:
meta_data = node.args[0].meta.get("tensor_meta")
Collaborator
Also check the `val` key for the shape first; if that is absent, fall back to `tensor_meta`, and if neither is present return False.

Comment on lines 912 to 913
dynamic_shape = has_dynamic_shape(shape)
if (dynamic_shape):
Collaborator
Consider using `if has_dynamic_shape(shape):` directly.

@@ -51,6 +52,7 @@ def forward(self, input):
self.run_test(
TestChunk(),
input,
enable_passes=True,
Collaborator
Any reason we need the passes for this test?

@apbose apbose force-pushed the chunk_dynamic_dim_validator branch 3 times, most recently from 16c4e8a to 958f1d1 Compare August 30, 2024 08:21
@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2024-08-30 08:21:14.119560+00:00
+++ /home/runner/work/TensorRT/TensorRT/py/torch_tensorrt/dynamo/conversion/aten_ops_converters.py	2024-08-30 08:21:35.568075+00:00
@@ -690,11 +690,13 @@
    )


@dynamo_tensorrt_converter(
    torch.ops.aten.split.Tensor,
-    capability_validator= (has_static_shapes_in_args([0]) and has_static_shapes_in_args([1])),
+    capability_validator=(
+        has_static_shapes_in_args([0]) and has_static_shapes_in_args([1])
+    ),
    supports_dynamic_shapes=True,
)
@dynamo_tensorrt_converter(
    torch.ops.aten.split.sizes,
    capability_validator=has_static_shapes_in_args([1]),
--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py	2024-08-30 08:21:14.143560+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py	2024-08-30 08:21:36.876204+00:00
@@ -79,101 +79,102 @@
            TestChunk(),
            input,
            use_dynamo_tracer=True,
        )

+
#######################Dynamic cases################

-    # @parameterized.expand(
-    #     [
-    #         ((1,), (1,), (3,), 3, 0),
-    #         ((3,), (3,), (4,), 3, 0),
-    #         ((4,), (4,), (6,), 3, 0),
-    #         ((6,), (6,), (9,), 3, 0),
-    #         ((3,), (3,), (4,), 1, -1),
-    #         ((3,), (3,), (4,), 3, -1),
-    #         ((3,), (3,), (4,), 4, -1),
-    #     ]
-    # )
-    # def test_chunk_1D(self, min_shape, opt_shape, max_shape, chunks, dim):
-    #     class TestChunk(torch.nn.Module):
-    #         def forward(self, input):
-    #             out = torch.ops.aten.chunk.default(input, chunks, dim)
-    #             return out
+# @parameterized.expand(
+#     [
+#         ((1,), (1,), (3,), 3, 0),
+#         ((3,), (3,), (4,), 3, 0),
+#         ((4,), (4,), (6,), 3, 0),
+#         ((6,), (6,), (9,), 3, 0),
+#         ((3,), (3,), (4,), 1, -1),
+#         ((3,), (3,), (4,), 3, -1),
+#         ((3,), (3,), (4,), 4, -1),
+#     ]
+# )
+# def test_chunk_1D(self, min_shape, opt_shape, max_shape, chunks, dim):
+#     class TestChunk(torch.nn.Module):
+#         def forward(self, input):
+#             out = torch.ops.aten.chunk.default(input, chunks, dim)
+#             return out

-    #     input_specs = [
-    #         Input(
-    #             min_shape=min_shape,
-    #             opt_shape=opt_shape,
-    #             max_shape=max_shape,
-    #         ),
-    #     ]
-    #     self.run_test_with_dynamic_shape(
-    #         TestChunk(),
-    #         input_specs,
-    #         use_dynamo_tracer = True,
-    #     )
+#     input_specs = [
+#         Input(
+#             min_shape=min_shape,
+#             opt_shape=opt_shape,
+#             max_shape=max_shape,
+#         ),
+#     ]
+#     self.run_test_with_dynamic_shape(
+#         TestChunk(),
+#         input_specs,
+#         use_dynamo_tracer = True,
+#     )

-    # @parameterized.expand(
-    #     [
-    #         ((3, 4), (3, 4), (4, 4), 1, 0),
-    #         ((3, 4), (3, 4), (4, 4), 3, 0),
-    #         ((3, 4), (3, 4), (4, 4), 4, 0),
-    #         ((3, 4), (3, 4), (4, 4), 2, -2),
-    #         ((3, 4), (3, 4), (4, 4), 6, -2),
-    #         ((3, 4), (3, 4), (4, 4), 3, 1),
-    #         ((3, 4), (3, 4), (4, 4), 4, 1),
-    #         ((3, 4), (3, 4), (4, 4), 5, -1),
-    #     ]
-    # )
-    # def test_chunk_2D(self, min_shape, opt_shape, max_shape, chunks, dim):
-    #     class TestChunk(torch.nn.Module):
-    #         def forward(self, input):
-    #             out = torch.ops.aten.chunk.default(input, chunks, dim)
-    #             return out
+# @parameterized.expand(
+#     [
+#         ((3, 4), (3, 4), (4, 4), 1, 0),
+#         ((3, 4), (3, 4), (4, 4), 3, 0),
+#         ((3, 4), (3, 4), (4, 4), 4, 0),
+#         ((3, 4), (3, 4), (4, 4), 2, -2),
+#         ((3, 4), (3, 4), (4, 4), 6, -2),
+#         ((3, 4), (3, 4), (4, 4), 3, 1),
+#         ((3, 4), (3, 4), (4, 4), 4, 1),
+#         ((3, 4), (3, 4), (4, 4), 5, -1),
+#     ]
+# )
+# def test_chunk_2D(self, min_shape, opt_shape, max_shape, chunks, dim):
+#     class TestChunk(torch.nn.Module):
+#         def forward(self, input):
+#             out = torch.ops.aten.chunk.default(input, chunks, dim)
+#             return out

-    #     input_specs = [
-    #         Input(
-    #             min_shape=min_shape,
-    #             opt_shape=opt_shape,
-    #             max_shape=max_shape,
-    #         ),
-    #     ]
-    #     self.run_test_with_dynamic_shape(
-    #         TestChunk(),
-    #         input_specs,
-    #         use_dynamo_tracer = True,
-    #     )
+#     input_specs = [
+#         Input(
+#             min_shape=min_shape,
+#             opt_shape=opt_shape,
+#             max_shape=max_shape,
+#         ),
+#     ]
+#     self.run_test_with_dynamic_shape(
+#         TestChunk(),
+#         input_specs,
+#         use_dynamo_tracer = True,
+#     )

-    # @parameterized.expand(
-    #     [
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 0),
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -3),
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, 1),
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, 1),
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 6, -2),
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 2),
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -1),
-    #         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, -1),
-    #     ]
-    # )
-    # def test_chunk_3D(self, min_shape, opt_shape, max_shape, chunks, dim):
-    #     class TestChunk(torch.nn.Module):
-    #         def forward(self, input):
-    #             out = torch.ops.aten.chunk.default(input, chunks, dim)
-    #             return out
+# @parameterized.expand(
+#     [
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 0),
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -3),
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, 1),
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, 1),
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 6, -2),
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 1, 2),
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 3, -1),
+#         ((3, 4, 2), (3, 4, 2), (4, 4, 2), 4, -1),
+#     ]
+# )
+# def test_chunk_3D(self, min_shape, opt_shape, max_shape, chunks, dim):
+#     class TestChunk(torch.nn.Module):
+#         def forward(self, input):
+#             out = torch.ops.aten.chunk.default(input, chunks, dim)
+#             return out

-    #     input_specs = [
-    #         Input(
-    #             min_shape=min_shape,
-    #             opt_shape=opt_shape,
-    #             max_shape=max_shape,
-    #         ),
-    #     ]
-    #     self.run_test_with_dynamic_shape(
-    #         TestChunk(),
-    #         input_specs,
-    #         use_dynamo_tracer = True,
-    #     )
+#     input_specs = [
+#         Input(
+#             min_shape=min_shape,
+#             opt_shape=opt_shape,
+#             max_shape=max_shape,
+#         ),
+#     ]
+#     self.run_test_with_dynamic_shape(
+#         TestChunk(),
+#         input_specs,
+#         use_dynamo_tracer = True,
+#     )

if __name__ == "__main__":
    run_tests()

@peri044 peri044 left a comment

Since this supports only static shapes now, is it better if we remove `supports_dynamic_shapes=True`?

Comment on lines 84 to 97
#######################Dynamic cases################

# @parameterized.expand(
# [
# ((1,), (1,), (3,), 3, 0),
# ((3,), (3,), (4,), 3, 0),
# ((4,), (4,), (6,), 3, 0),
# ((6,), (6,), (9,), 3, 0),
# ((3,), (3,), (4,), 1, -1),
# ((3,), (3,), (4,), 3, -1),
# ((3,), (3,), (4,), 4, -1),
# ]
# )
# def test_chunk_1D(self, min_shape, opt_shape, max_shape, chunks, dim):
Collaborator
It's probably better to add a `unittest.skip`, similar to

@unittest.skip("Pending aten.split converter. Currently tested by E2E")

and add a link to the issue you created.
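For reference, the suggested skip pattern looks like this in plain `unittest` (the class and test names mirror the ones in this PR; the body is illustrative, and the skip message uses the upstream issue discussed later in the thread):

```python
import unittest

# Class-level skip whose message links the blocking upstream issue,
# as suggested in the review above.
@unittest.skip(
    "Pending aten.split dynamic input torch.export guard bug. "
    "Issue - https://github.com/pytorch/pytorch/issues/134663"
)
class TestChunkDynamicConverter(unittest.TestCase):
    def test_chunk_1D(self):
        # Never runs while the class-level skip is in place.
        raise AssertionError("should have been skipped")
```

`unittest.skip` marks the class via `__unittest_skip__`, so the runner reports every test in it as skipped (with the reason shown) instead of failing CI.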

@apbose apbose force-pushed the chunk_dynamic_dim_validator branch 2 times, most recently from 994693c to cd5815b Compare September 4, 2024 21:54
@apbose apbose force-pushed the chunk_dynamic_dim_validator branch from cd5815b to 885046d Compare September 4, 2024 21:56
@github-actions github-actions bot left a comment

There are some changes that do not conform to Python style guidelines:

--- /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py	2024-09-04 21:57:10.847489+00:00
+++ /home/runner/work/TensorRT/TensorRT/tests/py/dynamo/conversion/test_chunk_aten.py	2024-09-04 21:57:33.075805+00:00
@@ -83,11 +83,11 @@
            use_dynamo_tracer=True,
        )


#######################Dynamic cases#######################
-#The tests are skipped for now. Will be addressed once https://github.com/pytorch/pytorch/issues/134663 is addressed
+# The tests are skipped for now. Will be addressed once https://github.com/pytorch/pytorch/issues/134663 is addressed
@unittest.skip("Pending aten.split converter. Currently tested by E2E")
class TestChunkDynamicConverter(DispatchTestCase):
    @parameterized.expand(
        [
            ((1,), (1,), (3,), 3, 0),

@@ -692,7 +692,9 @@ def aten_ops_softmax(

@dynamo_tensorrt_converter(
torch.ops.aten.split.Tensor,
capability_validator=has_static_shapes_in_args([1]),
capability_validator=(
has_static_shapes_in_args([0]) and has_static_shapes_in_args([1])
Collaborator
`has_static_shapes_in_args([0])` - Is this only necessary for chunk? How was it passing before?

Collaborator Author
args[0] is the input and args[1] is split_size_or_sections. Initially I thought we should add this since dynamic input tensor split is not supported, but you are right: the error comes in torch.export before the interpreter stage, so having this validator does not make sense. We are skipping the test anyway so that CI does not fail, so I will remove this.
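A side note on the expression being removed here: if `has_static_shapes_in_args(...)` returns a validator callable (as the decorator usage suggests), then joining two of them with `and` does not compose them, because `and` between two truthy objects simply evaluates to the second one. The sketch below uses illustrative stand-in names to show the pitfall and an explicit composition:

```python
def static_in_arg(idx):
    # Stand-in for a validator factory like has_static_shapes_in_args
    # (name and node representation are illustrative).
    def validator(node) -> bool:
        return node["static"][idx]
    return validator

check_arg0 = static_in_arg(0)
check_arg1 = static_in_arg(1)

# Pitfall: `and` between two truthy callables evaluates to the second
# callable, so check_arg0 is silently dropped.
combined_wrong = check_arg0 and check_arg1

# Explicit composition actually evaluates both validators on the node.
combined = lambda node: check_arg0(node) and check_arg1(node)
```

Here `combined_wrong is check_arg1`, so a node whose arg 0 is dynamic would still pass it, while `combined` rejects it.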


#######################Dynamic cases#######################
#The tests are skipped for now. Will be addressed once https://github.com/pytorch/pytorch/issues/134663 is addressed
@unittest.skip("Pending aten.split converter. Currently tested by E2E")
Collaborator
What does this message imply?

Collaborator Author
Oh, I think it was a misinterpretation: in an earlier comment I thought you wanted me to add this message. I will change this to @unittest.skip("Pending aten.split dynamic input torch.export guard bug. Issue - pytorch/pytorch#134663")

@apbose apbose force-pushed the chunk_dynamic_dim_validator branch 2 times, most recently from b6851bd to d92a6f5 Compare September 4, 2024 23:06
@apbose apbose force-pushed the chunk_dynamic_dim_validator branch from d92a6f5 to 988cabe Compare September 4, 2024 23:10
@github-actions github-actions bot added the component: converters Issues re: Specific op converters label Sep 4, 2024
@peri044 peri044 merged commit b4b22c3 into main Sep 6, 2024
66 of 67 checks passed
Labels: cla signed, component: api [Python], component: conversion, component: converters, component: dynamo, component: tests
3 participants