
Small perf improvements #1128

Closed

Conversation

@Ch0ronomato (Contributor) commented Dec 15, 2024

Description

There was a warning about copying a tensor; this change removes it. In addition, until we decide whether to keep torch autograd enabled, I disabled it here to free up a bit of memory at runtime. This recovered about 80 µs from the mean time in the best case.
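For context, a minimal sketch of the kind of change described above, assuming the linker wraps the compiled torch callable; the helper name `wrap_no_grad` and the callable `fn` are illustrative, not the PR's actual code:

```python
import torch

def wrap_no_grad(fn):
    """Hypothetical helper: run a compiled callable with autograd disabled."""
    def wrapper(*inputs):
        # torch.no_grad() stops autograd from recording operations on these
        # tensors, so no graph bookkeeping memory is held during execution.
        with torch.no_grad():
            return fn(*inputs)
    return wrapper
```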

Related Issue

Checklist

Type of change

  • New feature / enhancement
  • Bug fix
  • Documentation
  • Maintenance
  • Other (please specify):

📚 Documentation preview 📚: https://pytensor--1128.org.readthedocs.build/en/1128/


codecov bot commented Dec 15, 2024

Codecov Report

Attention: Patch coverage is 83.33333% with 1 line in your changes missing coverage. Please review.

Project coverage is 82.10%. Comparing base (231a977) to head (8a3bf6a).
Report is 1 commits behind head on main.

Files with missing lines | Patch % | Lines
pytensor/link/pytorch/dispatch/basic.py | 0.00% | 1 Missing ⚠️
Additional details and impacted files


@@           Coverage Diff           @@
##             main    #1128   +/-   ##
=======================================
  Coverage   82.10%   82.10%           
=======================================
  Files         185      185           
  Lines       48130    48133    +3     
  Branches     8669     8669           
=======================================
+ Hits        39519    39522    +3     
  Misses       6444     6444           
  Partials     2167     2167           
Files with missing lines | Coverage Δ
pytensor/link/pytorch/linker.py | 100.00% <100.00%> (ø)
pytensor/link/pytorch/dispatch/basic.py | 94.49% <0.00%> (ø)

@ricardoV94 (Member) commented

Torch autograd is on by default :O?

@ricardoV94 (Member) commented on the diff:

@@ -123,7 +123,10 @@ def arange(start, stop, step):
 def pytorch_funcify_Join(op, **kwargs):
     def join(axis, *tensors):
         # tensors could also be tuples, and in this case they don't have a ndim
-        tensors = [torch.tensor(tensor) for tensor in tensors]
+        tensors = [

I don't think this comprehension is needed at all, based on what I tested in my PR.
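For reference, a hedged sketch of what dropping the conversion could look like, assuming the inputs reaching `join` are already torch tensors; the function mirrors the diff above but is illustrative, not the PR's code:

```python
import torch

def join(axis, *tensors):
    # torch.cat accepts a sequence of tensors directly, so no per-element
    # torch.tensor(...) conversion (and its copy warning) is needed when the
    # inputs are already tensors.
    return torch.cat(tensors, dim=int(axis))
```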


@Ch0ronomato (Contributor, Author) commented

Is it though? https://dev-discuss.pytorch.org/t/how-does-torch-compile-work-with-autograd/1621/14

Ah, no, it's not. When I did the autograd experiment, I had to turn it on for the tensors we cared about. We could even close this up.
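A small illustration of that point in plain PyTorch (not PR code): tensors do not track gradients unless requires_grad is enabled explicitly.

```python
import torch

x = torch.ones(3)                      # requires_grad defaults to False
print(x.requires_grad)                 # False: autograd is not tracking x

y = torch.ones(3, requires_grad=True)  # must opt in per tensor
z = (y * 2).sum()
z.backward()                           # gradients flow only for opted-in tensors
print(y.grad)                          # tensor([2., 2., 2.])
```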

@Ch0ronomato (Contributor, Author) commented

I'm going to close this; I don't think there is much being changed, given torch autograd is off by default in our case.

Labels: None yet
Projects: None yet
Participants: 2