perf: try to cache inner contexts of overloads #19408
base: master
Conversation
Force-pushed from c7da595 to a1f8c77
Now this is what I wanted to see: no behavior changes. This is not ready for code review, but I'm removing draft status to get some feedback on the idea itself. I do not particularly like using IDs as cache keys, though.
I think we need something along these lines, but it may be more complicated than this to maintain 100% compatibility with non-cached behavior. I suspect that we may need to consider the binder and possibly cache generated errors as well. There may be some other relevant state that is kind of implicit in the type checker. I can look into this in more detail and reason through all possible issues, but it will take some effort.
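A hypothetical sketch of the "cache generated errors as well" idea, using invented names rather than mypy's actual internals: record the diagnostics emitted during the first inference and replay them on a cache hit.

```python
# Hypothetical sketch: cache the generated errors together with the inferred
# type, so a cache hit replays the same diagnostics as the original check.
from dataclasses import dataclass, field


@dataclass
class CachedInference:
    inferred_type: str
    errors: list[str] = field(default_factory=list)


class Checker:
    def __init__(self) -> None:
        self._cache: dict[int, CachedInference] = {}
        self.errors: list[str] = []

    def infer_with_error_replay(self, node: object) -> str:
        key = id(node)
        hit = self._cache.get(key)
        if hit is None:
            before = len(self.errors)
            inferred = self._infer(node)
            # Record whatever diagnostics the first inference appended.
            hit = CachedInference(inferred, self.errors[before:])
            self._cache[key] = hit
        else:
            # Replay the diagnostics recorded on the first inference.
            self.errors.extend(hit.errors)
        return hit.inferred_type

    def _infer(self, node: object) -> str:
        # Placeholder: the real inference may emit errors as a side effect.
        return "Any"
```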
That's why I only added caching to `infer_arg_types_in_empty_context`. Binder is indeed an implicit dependency, but it's still OK within a parent overload: re-accepting an expression there should not change the binder state it observes. I'll add the changes mentioned above in a few hrs, but frankly my main "wtf" here is using IDs as cache keys.
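A rough sketch of the scoping described above, again with illustrative names rather than mypy's real internals, assuming the cache is keyed by id() and dropped when the outermost overload check finishes:

```python
# Illustrative sketch: an inference cache that lives only for the duration of
# one top-level overload check, so binder state stays consistent and reused
# id() values cannot poison later lookups.
from contextlib import contextmanager
from typing import Any, Iterator


class ExprChecker:
    def __init__(self) -> None:
        self._overload_depth = 0
        self._arg_type_cache: dict[int, Any] = {}

    @contextmanager
    def overload_context(self) -> Iterator[None]:
        self._overload_depth += 1
        try:
            yield
        finally:
            self._overload_depth -= 1
            if self._overload_depth == 0:
                # Drop the cache together with the outermost overload check.
                self._arg_type_cache.clear()

    def infer_in_empty_context(self, node: Any) -> Any:
        if self._overload_depth == 0:
            return self._really_infer(node)
        key = id(node)  # the uncomfortable part: object identity as cache key
        if key not in self._arg_type_cache:
            self._arg_type_cache[key] = self._really_infer(node)
        return self._arg_type_cache[key]

    def _really_infer(self, node: Any) -> Any:
        ...  # stand-in for the expensive, (almost) pure inference
```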
According to mypy_primer, this change doesn't affect type check results on a corpus of open source code. ✅
Still quite nice in a single sample (benchmark output omitted).
FWIW I am trying (much) higher-level caching, like caching …
Improves #14978 and marginally improves nested overload checking in general. This is not ready for review, but I would appreciate any in-progress feedback on the idea itself.
If I understand correctly, this change should not change any behavior at all: `infer_arg_types_in_empty_context` is almost pure. This change halves `colour` check time.

Another very similar problem arises when checking overloaded binary operations (like in #14978): they do not translate directly into a series of overload checks but instead repeat the whole process for every node. Improving overload checking helps with that, but the result is still unreasonably slow. I'll try to push this further in a separate PR later.
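To make the effect concrete, here is a minimal, self-contained sketch (toy names, not mypy's code) of why memoizing per-argument inference helps: every overload item re-infers the same argument types, so a per-node cache turns items × arguments inferences into one per argument.

```python
# Toy model: checking a call against an overload re-infers every argument
# type once per overload item; memoizing the inference per argument node
# (keyed by identity) collapses the repeated work.
from typing import Callable

InferFn = Callable[[object], str]


def make_cached(infer: InferFn) -> InferFn:
    cache: dict[int, str] = {}  # keyed by the identity of the argument node

    def cached(expr: object) -> str:
        key = id(expr)
        if key not in cache:
            cache[key] = infer(expr)
        return cache[key]

    return cached


class Expr:
    """Toy stand-in for an argument expression node."""


inference_calls = 0


def slow_infer(expr: object) -> str:
    global inference_calls
    inference_calls += 1
    return "int"  # pretend this took real work


def check_overload(items: int, args: list[Expr], infer: InferFn) -> None:
    # Each overload item re-infers every argument type in an empty context.
    for _ in range(items):
        for arg in args:
            infer(arg)


args = [Expr(), Expr(), Expr()]
check_overload(10, args, slow_infer)
print(inference_calls)  # 30 inferences without the cache

inference_calls = 0
check_overload(10, args, make_cached(slow_infer))
print(inference_calls)  # 3 inferences with the cache
```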