Break MaxandArgmax Op to separate TensorMax Op and Argmax Op #731
Conversation
Scalar problem solved. Finalise changes to separate MaxAndArgmax Op.
You can mark the test with
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@ Coverage Diff @@
## main #731 +/- ##
==========================================
+ Coverage 80.83% 80.87% +0.03%
==========================================
Files 162 163 +1
Lines 46862 46847 -15
Branches 11465 11463 -2
==========================================
+ Hits 37881 37887 +6
- Misses 6733 6747 +14
+ Partials 2248 2213 -35
Almost there, just some cleanup needed
Small notes. Reverting the whitespace changes is useful, because git diff will then only show the files that have meaningful changes from this PR.
pytensor/tensor/math.py
Outdated
def __getattr__(name):
    if name == "MaxandArgmax":
        warnings.warn(
            "The class `MaxandArgmax` has been deprecated. "
            "Call `Max` and `Argmax` separately as an alternative.",
            FutureWarning,
        )
This has to raise some sort of error because MaxAndArgmax no longer exists. It can't simply be a warning. I think it's better to just remove it.
Should we raise an AttributeError here?
Whatever error would be raised if this helper were not here. AttributeError sounds about right, but confirm by trying it without the special code.
The helper also needs to raise the standard error for things other than MaxandArgmax
Confirmed that AttributeError is raised.
This is how we implement it in pymc for reference: https://github.com/pymc-devs/pymc/blob/f44071bdda363f743548187d3e124a027adfdb77/pymc/distributions/transforms.py#L54-L67
Although according to the PEP it should probably have a !r in the standard AttributeError message: https://peps.python.org/pep-0562/#rationale
In our case, we won't have to give a warning, right?
Yes, in our case we just raise a more informative AttributeError instead of the default one.
In your last change you made this useless; it's the same as if we didn't implement __getattr__ at all. The point of adding it is to have a custom message for the deprecated Op, but keep the default behavior for everything else.
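For reference, a minimal sketch of the module-level __getattr__ pattern being discussed (PEP 562). The exact error wording is an assumption; the point is that the removed Op gets a custom AttributeError while every other missing attribute falls through to the standard message:

def __getattr__(name):
    # Custom, more informative error for the removed Op only.
    if name == "MaxAndArgmax":
        raise AttributeError(
            "`MaxAndArgmax` has been removed; use `Max` and `Argmax` separately."
        )
    # Default behavior for everything else, with !r-quoted names as PEP 562 suggests.
    raise AttributeError(f"module {__name__!r} has no attribute {name!r}")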
Rectified this
Almost there, looks great. Just 3 unresolved comments above.
tests/tensor/test_math.py
Outdated
@@ -1404,6 +1422,11 @@ def test_bool(self):
        assert np.all(i)


def test_MaxAndArgmax_deprecated():
    with pytest.raises(AttributeError):
Add the specific message we expect when trying to access MaxAndArgmax.
-    with pytest.raises(AttributeError):
+    with pytest.raises(AttributeError, match=...):
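A sketch of what the finished test could look like, assuming the deprecated name is looked up on pytensor.tensor.math and that the error message mentions MaxAndArgmax (the match pattern below is illustrative, not the PR's exact wording):

import pytest

import pytensor.tensor.math as ptm


def test_MaxAndArgmax_deprecated():
    # Accessing the removed Op should raise an AttributeError whose message
    # points users to Max and Argmax; the regex is an assumed example.
    with pytest.raises(AttributeError, match="MaxAndArgmax"):
        getattr(ptm, "MaxAndArgmax")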
added this
Awesome!
Some tests are failing; it seems there are still some internal imports of
My bad. I rectified this. JAX tests are skipped locally, so I could not test them anywhere other than in CI. Hopefully, they'll pass now.
Thanks @Dhruvanshu-Joshi!
Description
The MaxandArgmax Op calculates both the maximum and the argmax together. With this PR, we aim to have separate Ops for the two operations.
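A short usage sketch of the intended split, assuming the existing pytensor.tensor.max and pytensor.tensor.argmax helpers dispatch to the new separate Ops:

import numpy as np

import pytensor
import pytensor.tensor as pt

x = pt.matrix("x")

# After this PR, the maximum and the argmax are computed by separate Ops
# instead of a single combined MaxandArgmax Op.
f = pytensor.function([x], [pt.max(x, axis=0), pt.argmax(x, axis=0)])

max_val, argmax_val = f(np.array([[1.0, 5.0], [3.0, 2.0]]))
print(max_val, argmax_val)  # [3. 5.] [1 0]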
Related Issue
MaxAndArgmax Op #334
Checklist
Type of change