Adjust cpp torch trt logging level with compiler option #3181
Conversation
py/torch_tensorrt/dynamo/utils.py
Outdated
else:
    raise AssertionError(f"{level} is not valid log level")

torch.ops.tensorrt.set_logging_level(int(log_level))
What happens in the case that it is a python only build? You might want to use enabled features to gate access to this code to only when the C++ runtime is available.
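The reviewer's suggestion can be sketched as follows. This is not the actual torch-tensorrt implementation; the `CPP_RUNTIME_AVAILABLE` flag is a hypothetical stand-in for the project's enabled-features check, used here to show how a Python-only build can skip the `torch.ops.tensorrt` call instead of hitting an `AttributeError`:

```python
import warnings

# Hypothetical feature flag; in a real build this would be detected at import
# time (e.g. via torch-tensorrt's enabled-features mechanism).
CPP_RUNTIME_AVAILABLE = False

def set_native_log_level(log_level: int) -> None:
    if CPP_RUNTIME_AVAILABLE:
        import torch  # only needed when the C++ extension exists
        # Forward the level to the native TRT logger.
        torch.ops.tensorrt.set_logging_level(int(log_level))
    else:
        # Python-only build: warn and return instead of raising AttributeError.
        warnings.warn("C++ TRT runtime not available; log level unchanged")
```

With the gate in place, calling `set_native_log_level` on a Python-only build emits a warning rather than crashing.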
You're right, I see the problem. Updated the change.
AttributeError: '_OpNamespace' 'tensorrt' object has no attribute 'set_logging_level'
LGTM
Description
The log levels of the Python logger and the native TRT logger are currently set independently. Driving both from the same compiler option makes it easier to work with a development build of the torch-TRT module. In addition, the profiling option of the C++ TRT runtime module is enabled at debug-level logging, which helps measure and evaluate performance with a development build of the torch-TRT module.
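The level translation described above can be sketched like this. The integer values are an assumption for illustration, chosen to follow the ordering of TensorRT's `ILogger::Severity` enum (INTERNAL_ERROR through VERBOSE); the PR's actual mapping may differ:

```python
import logging

# Assumed mapping from Python logging levels to the integer severity passed
# to the native TRT logger (ordering follows TensorRT's ILogger::Severity).
_PY_TO_TRT_LEVEL = {
    logging.CRITICAL: 0,  # INTERNAL_ERROR
    logging.ERROR: 1,     # ERROR
    logging.WARNING: 2,   # WARNING
    logging.INFO: 3,      # INFO
    logging.DEBUG: 4,     # VERBOSE (per this PR, also enables runtime profiling)
}

def to_trt_log_level(level: int) -> int:
    """Translate a Python logging level to a native TRT severity integer."""
    if level not in _PY_TO_TRT_LEVEL:
        raise AssertionError(f"{level} is not valid log level")
    return _PY_TO_TRT_LEVEL[level]
```

This keeps a single compiler-facing option in control: the Python logger is configured with the `logging` level directly, and the same level is translated once before being handed to the C++ runtime.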
Fixes # (issue)
Type of change
Please delete options that are not relevant and/or add your own.
Checklist: