
feat: exclude refit sensitive ops from TRT compilation #3159

Merged · 8 commits · Sep 17, 2024
Conversation

peri044 (Collaborator) commented Sep 12, 2024

Description

  1. Fix refit errors on certain ops by forcing them to fall back to PyTorch. The global default for make_refitable remains False after this change.
  2. Adds a capability validator for cumsum and embedding bag so they fall back to PyTorch when the user builds refittable engines (see the sketch below).
  3. Expose user compilation settings to ConverterRegistry.
  4. Added a test case to verify the fallback.
  5. The converter test cases for cumsum and embedding bag are intentionally marked with make_refitable=False: we plan to default make_refitable to True in the future, which would make these tests fail, so marking them explicitly now avoids those errors.

Fixes https://github.com/pytorch/TensorRT/actions/runs/10637055400/job/29490744653?pr=3131
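As a rough illustration of item 2: a capability validator in this style just returns False when the user asked for a refittable engine, so the partitioner leaves the op to PyTorch. This is a minimal sketch, not the exact converter code; it assumes the settings-aware (node, settings) validator signature discussed in the review below and the make_refitable field name used in the diff.

from torch.fx.node import Node
from torch_tensorrt.dynamo._settings import CompilationSettings

# Sketch: reject TRT conversion whenever a refittable engine was requested,
# forcing refit-sensitive ops (cumsum, embedding_bag) to fall back to PyTorch.
def refit_validator(node: Node, settings: CompilationSettings) -> bool:
    return not settings.make_refitable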

Type of change

Please delete options that are not relevant and/or add your own.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

Checklist:

  • My code follows the style guidelines of this project (You can use the linters)
  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas and hacks
  • I have made corresponding changes to the documentation
  • I have added tests to verify my fix or my feature
  • New and existing unit tests pass locally with my changes
  • I have added the relevant labels to my PR so that relevant reviewers are notified

github-actions bot added the component: tests, component: api [Python], and component: dynamo labels on Sep 12, 2024
@@ -317,6 +318,10 @@ def compile_module(
# Assume converters support dynamic shapes and disable validation
CONVERTERS.set_dynamic_shape_support(settings.assume_dynamic_shape_support)

# Force refit-sensitive ops to run in PyTorch by adding them to torch_executed_ops.
if settings.make_refitable:
settings.torch_executed_ops.update(REFIT_SENSITIVE_OPS)
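For context, the PR description names cumsum and embedding bag as the refit-sensitive ops, so the global set consumed by this diff plausibly looks like the sketch below. The contents are illustrative, and this list-based approach is replaced by validators over the course of the review that follows.

# Hypothetical contents of the global set used in the diff above;
# torch_executed_ops accepts qualified aten op names as strings.
REFIT_SENSITIVE_OPS = {
    "torch.ops.aten.cumsum.default",
    "torch.ops.aten._embedding_bag.default",
}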
narendasan (Collaborator) commented Sep 12, 2024:
I think it's better to do this through validators than maintaining another list and modifying user settings.

peri044 (Collaborator, Author) replied Sep 12, 2024:

  1. Currently, the capability validator doesn't have access to user settings, so making the compilation settings in _compiler.py (the compile call) a global variable and passing them here would work: https://github.com/pytorch/TensorRT/blob/main/py/torch_tensorrt/dynamo/conversion/_ConverterRegistry.py#L439

  2. The other alternative, if we don't want to modify user settings, is:

CONVERTERS.set_disallowed_targets(settings.REFIT_SENSITIVE_OPS)

Which would be preferred?

Collaborator replied:

Don't see a drawback to letting validators see user settings; it could be relevant later, e.g. if a certain converter only works for one data type.

Collaborator replied:

So it would involve updating the converter registry to take the settings as an arg in addition to the node, plus some adjustment in the decorator. But overall I think it's the best design, since we shouldn't be spreading this info across a bunch of global lists; a converter should be able to tell you what it can and can't do on its own.

I don't think we need to make the settings global; we could make a reference a member of the registry, like we do with the dynamic shape setting:

assume_dynamic_shape_support: bool = False,
and then it gets injected into the call to the validator
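A hedged sketch of that suggestion: keep a reference to the user's settings on the registry, next to the existing dynamic-shape flag, and hand it to each validator at query time. Class and method names here are illustrative, not the exact upstream code.

from dataclasses import dataclass

# Illustrative registry fragment: the compilation settings become a member,
# mirroring assume_dynamic_shape_support, instead of a module-level global.
@dataclass
class ConverterRegistrySketch:
    assume_dynamic_shape_support: bool = False
    compilation_settings: object = None

    def set_compilation_settings(self, settings) -> None:
        # Called once from compile(); validators receive it at dispatch time.
        self.compilation_settings = settings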

Collaborator replied:

Just an additional arg here:

if candidate.capability_validator(node) and (
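Completing that truncated quote under the proposed design, the check would simply pass the stored settings through as a second argument. A self-contained toy version, with all names hypothetical and the trailing dynamic-shape clause elided:

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Settings:
    make_refitable: bool = True

@dataclass
class Candidate:
    capability_validator: Callable  # now takes (node, settings)

def get_converters(candidates: List[Candidate], node, settings: Settings) -> List[Candidate]:
    # The quoted `if candidate.capability_validator(node) and (` gains a
    # settings argument; everything after the `and` is unchanged by the PR.
    return [c for c in candidates if c.capability_validator(node, settings)]

cumsum_like = Candidate(capability_validator=lambda n, s: not s.make_refitable)
assert get_converters([cumsum_like], node=None, settings=Settings()) == []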

peri044 (Collaborator, Author) replied:

Got it. Modified it now.

github-actions bot added the component: conversion label on Sep 16, 2024
narendasan (Collaborator) left a review comment:

LGTM, make sure to mark needs cherrypick and ping @lanluo-nvidia

peri044 merged commit 1e9aefe into main on Sep 17, 2024. 67 of 68 checks passed.
Labels: cla signed, component: api [Python], component: conversion, component: dynamo, component: tests, needs-release-cherrypick