Fix include_self for scatter_reduce #2090
base: main
Conversation
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

@@            Coverage Diff             @@
##             main    #2090      +/-   ##
==========================================
+ Coverage   72.28%   72.33%   +0.04%
==========================================
  Files         217      217
  Lines       29097    29116      +19
  Branches     3455     3460       +5
==========================================
+ Hits        21034    21060      +26
+ Misses       6935     6927       -8
- Partials     1128     1129      +1

☔ View full report in Codecov by Sentry.
@@ -14,6 +14,9 @@
import math
from typing import Any, Optional, Sequence, Tuple, Union

import numpy as np
import onnx.numpy_helper as onh
import onnx.numpy_helper as onh
onnx helpers tend to be protobuf-y and slow. Consider using ir.tensor, ir.DataType, and related methods instead.
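For illustration, a minimal sketch of the two approaches; the ir.tensor call follows the onnxscript IR API as used in the suggestion further down, so treat the exact import path and signature as assumptions rather than verified here:

    import numpy as np
    import onnx.numpy_helper as onh
    from onnxscript import ir

    # protobuf-based helper: materializes a TensorProto that later has to be parsed again
    proto_tensor = onh.from_array(np.array([0.0], dtype=np.float32))

    # IR-based helper: wraps the data directly in an ir.Tensor, no protobuf round-trip
    ir_tensor = ir.tensor([0.0], dtype=ir.DataType.FLOAT)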
In that case, it's just a tensor with one element. Can you tell me how to replace it with ir.Value then? I think I should use ir.tensor... The documentation should include an example where you build an onnx model from scratch with the IR. That would be very helpful.
"documentation"
I agree. Need to find time to work on that.

"how to replace"
Commented below. LMK if I can provide more info, thanks!
value = onh.from_array(
    np.array([np.finfo(src.dtype.numpy()).min], dtype=src.dtype.numpy())
)
reduction_init = "min"
Suggested change:
- value = onh.from_array(
-     np.array([np.finfo(src.dtype.numpy()).min], dtype=src.dtype.numpy())
- )
- reduction_init = "min"
+ value = ir.tensor([np.finfo(src.dtype.numpy()).min], dtype=src.dtype)
+ reduction_init = "min"
here and elsewhere
@@ -7584,7 +7587,33 @@ def aten_scatter_reduce(
    self = op.Reshape(self, neg_1)
    index = op.Reshape(index, neg_1)
    src = op.Reshape(src, neg_1)

    if not include_self:
Could you add a comment to clarify the logic here since I don't find it immediately apparent? Thanks!
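To make the logic concrete, here is a small runnable sketch of what include_self=False means for an amax reduction and of the trick the diff appears to use; the emulation at the end is my reading of the PR, not its literal code:

    import torch

    self_t = torch.tensor([10.0, 10.0, 10.0])
    index = torch.tensor([0, 0, 2])
    src = torch.tensor([1.0, 2.0, 3.0])

    # include_self=True: the existing values of self take part in the max.
    self_t.scatter_reduce(0, index, src, reduce="amax", include_self=True)
    # -> tensor([10., 10., 10.])

    # include_self=False: only src values are reduced at the scattered positions.
    self_t.scatter_reduce(0, index, src, reduce="amax", include_self=False)
    # -> tensor([2., 10., 3.])

    # Emulation: first overwrite the scattered positions with the dtype's lowest
    # value (the identity element for max), then scatter-reduce src with
    # include_self=True; the original self values can no longer win the max.
    neutral = torch.full_like(src, torch.finfo(self_t.dtype).min)
    self_t.scatter(0, index, neutral).scatter_reduce(
        0, index, src, reduce="amax", include_self=True
    )
    # -> tensor([2., 10., 3.])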
    value = onh.from_array(np.array([1], dtype=src.dtype.numpy()))
    reduction_init = "none"
else:
    value = 0
Does this need to be a tensor too?
It should. No tests are validating that line.
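For reference, one way that branch could be written so it matches the other branches; this uses ir.tensor per the suggestion above and assumes src.dtype is an ir.DataType, so treat it as a sketch rather than the PR's final code:

    # assumption: make the neutral value a one-element tensor like the other branches
    value = ir.tensor([0], dtype=src.dtype)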
    value = onh.from_array(np.array([0], dtype=src.dtype.numpy()))
    reduction_init = "none"
elif onnx_reduce == "mul":
    value = onh.from_array(np.array([1], dtype=src.dtype.numpy()))
Suggested change:
- value = onh.from_array(np.array([1], dtype=src.dtype.numpy()))
+ value = ir.tensor([1], dtype=src.dtype)
if not include_self:
    if onnx_reduce == "max":
        value = onh.from_array(
            np.array([np.finfo(src.dtype.numpy()).min], dtype=src.dtype.numpy())
E.g. could you add a comment on why we needed to use np.finfo min/max for this reduction type, for future readers?
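A possible wording for such a comment, as my reading of the intent rather than the author's own explanation:

    # With include_self=False and reduce="amax", the values already stored in
    # `self` at the scattered indices must not participate in the max, so they
    # are first replaced with the smallest representable value of the dtype
    # (np.finfo(dtype).min), the identity element for max; "amin" symmetrically
    # uses np.finfo(dtype).max.
    value = onh.from_array(
        np.array([np.finfo(src.dtype.numpy()).min], dtype=src.dtype.numpy())
    )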
    # ONNX has no include_self parameter; its default behavior matches include_self=True
    matcher=lambda sample: sample.kwargs.get("include_self") is False,
    reason="ONNX doesn't support include_self=False option",
)
.xfail(
    variant_name="amax",
I wonder if we should set dtypes=(torch.float16,) as well.
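A sketch of what that could look like, reusing the arguments shown in the diff above; whether this xfail helper accepts dtypes= in this spot is an assumption based on the reviewer's suggestion, not something verified in this PR:

    .xfail(
        variant_name="amax",
        dtypes=(torch.float16,),  # suggested: limit the expected failure to fp16
        reason="ONNX doesn't support include_self=False option",
        matcher=lambda sample: sample.kwargs.get("include_self") is False,
    )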