Implement Dot and BatchedDot in PyTorch #878 (Merged)

Commits (13; the diff below shows changes from 1 commit):
- ffad937 (HangenYuu) Added PyTorch link and unit tests for normal dot
- 5121a85 (HangenYuu) Changed implementation of dot. Renamed tests
- 2721c5a (HangenYuu) Changed dot implementation
- 03bb3a8 (HangenYuu) Reverted logic to correct scope for math.dot
- 2cf0ed2 (HangenYuu) Reverted folder structure and added BatchedDot
- fcb3b79 (HangenYuu) Merge branch 'main' into torch_dot
- 307a3fb (HangenYuu) Fixed minor typo in test naming
- e2500bf (HangenYuu) Merge branch 'torch_dot' of github.com:HangenYuu/pytensor into torch_dot
- 143a75a (HangenYuu) Fixed __init__.py file for tests to run
- 2d74b31 (HangenYuu) Rewrite test to reuse pytorch function
- 4deea70 (HangenYuu) Removed get_test_value
- cab9db8 (HangenYuu) Changed variable names
- f459866 (HangenYuu) Merge branch 'main' into torch_dot
The first file adds a PyTorch dispatch for the `Dot` `Op` (new file, 12 lines):

```python
import torch

from pytensor.link.pytorch.dispatch import pytorch_funcify
from pytensor.tensor.math import Dot


@pytorch_funcify.register(Dot)
def pytorch_funcify_Dot(op, **kwargs):
    def dot(x, y):
        # torch.matmul covers the vector-vector, matrix-vector, and
        # matrix-matrix cases that pytensor.tensor.math.Dot accepts.
        return torch.matmul(x, y)

    return dot
```

ricardoV94 marked a review conversation on this file as resolved (the conversation appears below).
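This commit covers only `Dot`; the commit "Reverted folder structure and added BatchedDot" adds `BatchedDot` later in the PR, and that code is not shown in this diff. As a rough sketch of what such a registration could look like, assuming `BatchedDot` is imported from `pytensor.tensor.blas` and maps onto `torch.bmm` (an illustration, not the PR's actual implementation):

```python
import torch

from pytensor.link.pytorch.dispatch import pytorch_funcify
from pytensor.tensor.blas import BatchedDot


@pytorch_funcify.register(BatchedDot)
def pytorch_funcify_BatchedDot(op, **kwargs):
    def batched_dot(a, b):
        # torch.bmm multiplies batches of matrices:
        # (B, n, m) @ (B, m, p) -> (B, n, p)
        if a.shape[0] != b.shape[0]:
            raise TypeError("Shapes of first dimensions do not match")
        return torch.bmm(a, b)

    return batched_dot
```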
The second file adds a test for `Dot` (new file, 30 lines):

```python
import numpy as np

from pytensor.configdefaults import config
from pytensor.graph.fg import FunctionGraph
from pytensor.graph.op import get_test_value
from pytensor.tensor.type import matrix, scalar, vector
from tests.link.pytorch.test_basic import compare_pytorch_and_py


def test_tensor_basics():
    y = vector("y")
    y.tag.test_value = np.r_[1.0, 2.0].astype(config.floatX)
    x = vector("x")
    x.tag.test_value = np.r_[3.0, 4.0].astype(config.floatX)
    A = matrix("A")
    A.tag.test_value = np.array([[6, 3], [3, 0]], dtype=config.floatX)
    alpha = scalar("alpha")
    alpha.tag.test_value = np.array(3.0, dtype=config.floatX)
    beta = scalar("beta")
    beta.tag.test_value = np.array(5.0, dtype=config.floatX)

    # 1D * 2D * 1D
    out = y.dot(alpha * A).dot(x) + beta * y
    fgraph = FunctionGraph([y, x, A, alpha, beta], [out])
    compare_pytorch_and_py(fgraph, [get_test_value(i) for i in fgraph.inputs])

    # 2D * 2D
    out = A.dot(A * alpha) + beta * A
    fgraph = FunctionGraph([A, alpha, beta], [out])
    compare_pytorch_and_py(fgraph, [get_test_value(i) for i in fgraph.inputs])
```
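`compare_pytorch_and_py` comes from the existing PyTorch test helpers and is not part of this diff. A minimal sketch of what such a helper typically does, assuming a `PytorchLinker` class and that the compiled outputs are NumPy-compatible (the function name, mode construction, and tolerance here are assumptions, not the helper's actual signature): compile the same graph once through the PyTorch linker and once with a plain Python backend, run both on the same inputs, and assert the outputs match.

```python
import numpy as np

import pytensor
from pytensor.compile.mode import Mode
from pytensor.link.pytorch.linker import PytorchLinker


def compare_pytorch_and_py_sketch(fgraph, test_inputs, rtol=1e-4):
    # Compile the same FunctionGraph twice: once with the PyTorch
    # linker, once with a plain Python backend, then compare outputs.
    pytorch_mode = Mode(linker=PytorchLinker())
    fn_torch = pytensor.function(fgraph.inputs, fgraph.outputs, mode=pytorch_mode)
    fn_py = pytensor.function(fgraph.inputs, fgraph.outputs, mode="FAST_COMPILE")
    for torch_res, py_res in zip(fn_torch(*test_inputs), fn_py(*test_inputs)):
        np.testing.assert_allclose(torch_res, py_res, rtol=rtol)
```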
Review conversation:

ricardoV94: You have to import this file from `pytorch.dispatch.__init__` for it to be registered (the test is failing in the CI). But `Dot` is not defined in `nlinalg`, so we should put it in `dispatch/math.py`? Same for the test.
HangenYuu: I based it off the JAX link. If you take a look at `pytensor/link/jax/dispatch/nlinalg.py` you will see `Max`, `Argmax`, and `Dot` `Op`s from math in there. Do you want me to separate them out for JAX too?
HangenYuu: I can also put the `Argmax` I am implementing in `pytorch/dispatch/math.py`.
ricardoV94: Yeah, in general we want to keep it more or less mirrored with the file structure where the `Op`s are defined. Although our `tensor/basic.py` and `tensor/math.py` are in need of being split up, as they have way too many lines.
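As an aside on the registration mechanics in the first review comment: `@pytorch_funcify.register(Dot)` only runs when its module is imported, so the dispatch package must import the new file for its side effect. A minimal sketch of that pattern, assuming the package mirrors the JAX layout with a `basic` module defining `pytorch_funcify` and a `math` module holding the `Dot` dispatch (hypothetical module names, not the PR's exact file):

```python
# pytensor/link/pytorch/dispatch/__init__.py (hypothetical sketch)
# isort: off
from pytensor.link.pytorch.dispatch.basic import pytorch_funcify

# Imported only for its side effect: loading the module executes the
# @pytorch_funcify.register(...) decorators, registering Dot.
import pytensor.link.pytorch.dispatch.math

# isort: on
```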