Remove MaxAndArgmax Op #874

aerubanov wants to merge 3 commits into aesara-devs:main from aerubanov:maxandargmax_refactoring

Conversation
brandonwillard
left a comment
Looks like this is making good progress; much appreciated!
```python
def test_MaxAndArgmax(x, axes, exc):
    g = aem.MaxAndArgmax(axes)(x)

    if isinstance(g, list):
        g_fg = FunctionGraph(outputs=g)
    else:
        g_fg = FunctionGraph(outputs=[g])

    cm = contextlib.suppress() if exc is None else pytest.warns(exc)
    with cm:
        compare_numba_and_py(
            g_fg,
            [
                i.tag.test_value
                for i in g_fg.inputs
                if not isinstance(i, (SharedVariable, Constant))
            ],
        )
```
We can add a new pytest.mark.parametrize and parameter that cycles through the two now independent Ops and runs the same tests.
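The suggested pattern might look something like the sketch below. Note that `np.max` and `np.argmax` stand in here for the Aesara `Max` and `Argmax` Ops so the example is self-contained; the real test body would build a `FunctionGraph` and call `compare_numba_and_py` as in the snippet above.

```python
import numpy as np
import pytest


# One parametrized test that cycles through the two now-independent
# reductions (NumPy stand-ins for the Aesara Ops).
@pytest.mark.parametrize("op", [np.max, np.argmax])
@pytest.mark.parametrize("axis", [None, 0, 1])
def test_reduction_ops(op, axis):
    x = np.array([[0.0, 3.0], [2.0, 1.0]])
    result = op(x, axis=axis)
    # Both reductions drop the reduced axis.
    expected_shape = () if axis is None else (2,)
    assert np.shape(result) == expected_shape
```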
```python
    assert softmax_grad_legacy not in ops


def test_argmax_pushdown():
```
It should be possible to refactor these so that they work with each Op separately.
```python
    f([[0, 1, 2]])


class TestMaxAndArgmax:
```
These tests need to be converted to work with the two Ops , as well.
```python
from tests.link.test_link import make_function


class TestMaxAndArgmax:
```
@brandonwillard, thanks for the review! I will work on the requested changes.
Hi @brandonwillard! I have the test that fails because …
I just made some small fixes and rebased onto main. The errors I'm seeing now are due to the unfinished Numba implementations. Is that older, more cryptic error still present? (Nevermind, I see it in tests.tensor.test_math.TestMinMax.test_uint. I'll look into it.)
Since these commits need to be restructured/squashed anyway, it would be good to reorganize them so that the Max and ArgMax updates and their corresponding tests are added first (e.g. in a single commit or one for each Op along with their JAX and Numba implementations), then the removal/replacement of MaxAndArgmax in another commit.
@brandonwillard Thanks for the review! I'll be working on those changes.
Codecov Report

```
@@            Coverage Diff             @@
##             main     #874      +/-   ##
==========================================
- Coverage   79.25%   79.20%   -0.05%
==========================================
  Files         152      152
  Lines       47882    47882
  Branches    10909    10906       -3
==========================================
- Hits        37949    37926      -23
- Misses       7436     7454      +18
- Partials    2497     2502       +5
```
Hi @brandonwillard! I seem to be able to fix the errors that occurred after removing …
```python
    # at_max,
    # at_min,
```
Why have the max and min functions been removed from TestLocalReduce?
@brandonwillard I moved the MaxAndArgmax functionality to Max, and now Max is implemented as a COp, so the tests for CAReduce will not work with Max.
```python
    def R_op(self, inputs, eval_points):
        raise ValueError("Argmax is not a differentiable operation")
```

Suggested change: delete these lines.
There's already a grad implementation—albeit effectively non-functional—so I don't know if it helps to have an R_op like this. Also, I don't think a ValueError would be the best here.
If a gradient is undefined, I believe we should return an aesara.gradient.grad_undefined-generated NullType. Same with unimplemented gradients: we should use aesara.gradient.grad_not_implemented.
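The distinction the review draws can be sketched with pure-Python stand-ins (these classes are illustrations, not the real Aesara types): an undefined gradient should be represented by a marker value the graph machinery can detect, rather than by raising at graph-construction time.

```python
class NullGrad:
    """Stand-in for the NullType variable that
    aesara.gradient.grad_undefined produces in the real code base."""

    def __init__(self, why):
        self.why = why


def grad_undefined(op_name, why):
    # Mirrors the role of aesara.gradient.grad_undefined: return a marker
    # object instead of raising, so graph construction succeeds and the
    # error only surfaces if the gradient is actually consumed.
    return NullGrad(f"{op_name}: {why}")


class Argmax:
    def grad(self, inputs, output_grads):
        # argmax returns integer indices, so its gradient is undefined.
        return [grad_undefined("Argmax", "returns integer indices")]


(g,) = Argmax().grad(["x"], ["dy"])
assert isinstance(g, NullGrad)
```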
modify Max

Co-authored-by: Brandon T. Willard <971601+brandonwillard@users.noreply.github.com>
@brandonwillard I removed …
@brandonwillard, just a soft reminder :)
brandonwillard
left a comment
Looks like one of the commit messages got scrambled.
```diff
-class Max(NonZeroCAReduce):
-    nfunc_spec = ("max", 1, 1)
+class Max(COp):
```
Why aren't we using the NonZeroCAReduce base class for this?
@brandonwillard Thank you for the review. To fix the problem with Max described in #874 (comment), I copied the code from MaxAndArgmax and made it a COp. Do you think that could cause some problems?
It could interfere with existing code that assumes Max is a subclass of NonZeroCAReduce (e.g. rewrites, our JAX and/or Numba implementations, etc.).
Also, we generally want to make use of existing code. In this case, subclassing NonZeroCAReduce could make adding a C implementation much easier than it otherwise would be.
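The reuse argument can be illustrated with a minimal pure-Python sketch (these classes are simplified stand-ins, not the real Aesara hierarchy): the shared machinery, represented here by a toy `c_code` method, lives once in the base class, so a subclass only declares what differs.

```python
class CAReduce:
    """Toy stand-in for Aesara's CAReduce base class."""

    def __init__(self, scalar_op, axis=None):
        self.scalar_op = scalar_op
        self.axis = axis

    def c_code(self):
        # The shared C-implementation machinery lives once, in the base.
        return f"reduce<{self.scalar_op}>(axis={self.axis})"


class NonZeroCAReduce(CAReduce):
    # In the real code base this layer adds handling for empty inputs.
    pass


class Max(NonZeroCAReduce):
    nfunc_spec = ("max", 1, 1)

    def __init__(self, axis=None):
        super().__init__("maximum", axis)


# The subclass inherits the base machinery for free.
assert Max(axis=0).c_code() == "reduce<maximum>(axis=0)"
```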
```python
MaxAndArgmax.debug = 0
Argmax.debug = 0
Max.debug = 0
```
Hm, it looks like a piece of old code. I think I should remove it.
Closes #765