perf: replace WeakMap caches with simple single-file cache #308

overlookmotel wants to merge 4 commits into chakra-ui:main

Conversation
Pull request overview
Copilot reviewed 1 out of 1 changed files in this pull request and generated 4 comments.
```ts
get(context: RuleContext<any, any>) {
  if (context.filename === this.currentFilename) {
    return this.currentData!
  }
```
---
Keying the cache only by `context.filename` can return stale/incorrect data when multiple lint runs use the same filename string but different file contents within the same microtask (or when filename is a shared sentinel like `<text>`). Consider extending the cache key to include a content-identity signal such as `context.sourceCode` (object identity) or the `context.sourceCode.ast` reference in addition to (or instead of) filename, so that cache hits only occur for the same parsed source.
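As a hedged sketch, the content-identity keying suggested here could look like the following. The cache shape and names (`ContentKeyedCache`, `ContextLike`) are hypothetical illustrations, not this plugin's actual code; only `filename` and `sourceCode` correspond to ESLint's real `RuleContext` properties.

```typescript
// Sketch: a cache entry is valid only when BOTH the filename and the parsed
// source object are identical. ESLint creates a fresh SourceCode object per
// parse, so two lints of "<text>" with different contents never collide.
interface ContextLike {
  filename: string;
  sourceCode: object; // new object identity per parse
}

class ContentKeyedCache<T> {
  private filename: string | undefined;
  private sourceCode: object | undefined;
  private data: T | undefined;

  get(context: ContextLike, compute: () => T): T {
    if (context.filename === this.filename && context.sourceCode === this.sourceCode) {
      return this.data as T; // hit only for the exact same parse
    }
    this.filename = context.filename;
    this.sourceCode = context.sourceCode;
    this.data = compute();
    return this.data;
  }
}

// Two lint runs with the sentinel filename but different parses:
const cache = new ContentKeyedCache<number>();
let computes = 0;
const run1 = { filename: '<text>', sourceCode: {} };
const run2 = { filename: '<text>', sourceCode: {} };
cache.get(run1, () => ++computes);
cache.get(run1, () => ++computes); // hit: same parse
cache.get(run2, () => ++computes); // miss: same filename, new parse
```

The extra `sourceCode` identity check is what prevents the stale-data scenario the comment describes, at the cost of holding a reference to the last parse until the next lint.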
---
This is quite complicated.
In the case of this plugin, I don't believe the situation described can happen.
ESLint CLI lints files one at a time. That process is entirely synchronous, but each file has a different filename, so that invalidates the cache when you move on to the next file.
The language server case is more complicated. This is addressed in the PR description above:

> To make sure this plugin also works correctly in an IDE/language server context, where the same file can be linted over and over again as the user types or saves the file, the cache is also cleared after a micro-tick. So even if the file is edited and linted again, the 2nd lint run won't get stale data from the previous run.
This same point also applies when `filename` is a sentinel like `<text>`. ESLint CLI doesn't do this; only (to my knowledge) language servers do.
From my research (with help from Claude), all ESLint language servers that I'm aware of always have a microtick between linting different files.
- vscode-eslint (VS Code) - async handler with `maxParallelism: 1`. Always an async boundary between lint runs.
- vscode-langservers-extracted (Neovim, Helix, Zed, others) - This is the vscode-eslint server extracted into a standalone npm package. Same code, same behavior.
- coc-eslint (Neovim via coc.nvim) - Forked from vscode-eslint. Same architecture.
- Emacs lsp-mode - Uses the vscode-eslint language server. Same code.
- efm-langserver (general purpose, Go) - Runs ESLint as an external process (often via `eslint_d`). Each lint is a separate process invocation, so ESLint's plugin state is completely fresh each time. No shared state between runs.
- eslint_d (daemon used by efm-langserver, null-ls, etc.) - Uses `await eslint.execute()` per connection. Async boundary between requests. Safe.
---
There is one circumstance in which linting the same file multiple times synchronously can occur. When applying autofixes, `verifyAndFix()` lints synchronously in a loop, up to 10 times for the same file (but with different source code each time). In this case, the cache would be stale on the 2nd pass, because there's no microtick in between passes.
But this plugin doesn't provide auto-fixes, only suggestions, so this circumstance doesn't arise.
---
#309 is an alternative implementation which you may find preferable, and which would support this plugin offering auto-fixes in future, should it choose to.
## The problem

This plugin caches 2 sets of information for each file, in 2 `WeakMap`s:

- `importsCache` - keyed by the `context` object passed to `rule.create`.
- `scopeAnalysisCache` - keyed by `context.sourceCode`.

This presents 2 opportunities for optimization:

Firstly, `WeakMap`s in JS are fairly slow. It's ideal to avoid them if possible.

Secondly (and more significantly), the `context` object passed to each rule is different, so `importsCache` only caches per-rule per-file, not per-file. It has a separate cache entry per rule - the cache is not shared between rules.

17 rules use `isRecipeVariant`, which calls `getImports`, which uses the cache. If the user has all 17 rules enabled, calculating the imports may happen 17 times per file, when it only needs to happen once.

## This PR
This PR replaces these 2 `WeakMap`s with a simpler cache that holds only one piece of data at a time. It utilizes the fact that ESLint only lints one file at a time, and uses `filename` as the cache key.

To make sure this plugin also works correctly in an IDE/language server context, where the same file can be linted over and over again as the user types or saves the file, the cache is also cleared after a micro-tick. So even if the file is edited and linted again, the 2nd lint run won't get stale data from the previous run.
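The mechanism described above can be sketched as follows. This is a minimal illustration under the stated assumptions (one file linted at a time, microtask boundary between edits of the same file); the class and method names are illustrative, not the PR's actual code.

```typescript
// Minimal sketch of a single-file cache keyed by filename and cleared on a
// microtask. Names (SingleFileCache, getImports) are illustrative assumptions.
class SingleFileCache<T> {
  private currentFilename: string | undefined;
  private currentData: T | undefined;
  private resetScheduled = false;

  get(filename: string, compute: () => T): T {
    // Cache hit: same file as the previous call in this synchronous run,
    // so all rules share one computation per file
    if (filename === this.currentFilename) {
      return this.currentData as T;
    }
    // Cache miss: recompute and remember for subsequent rules on this file
    const data = compute();
    this.currentFilename = filename;
    this.currentData = data;
    // Clear after a micro-tick, so a language server re-linting the same
    // filename with new contents never sees stale data
    if (!this.resetScheduled) {
      this.resetScheduled = true;
      queueMicrotask(() => {
        this.currentFilename = undefined;
        this.currentData = undefined;
        this.resetScheduled = false;
      });
    }
    return data;
  }
}

// Usage: many rules share one cache, so imports are computed once per file
const importsCache = new SingleFileCache<string[]>();
let computeCount = 0;
const getImports = (filename: string) =>
  importsCache.get(filename, () => { computeCount++; return ['css', 'cva']; });

getImports('a.ts'); // computed
getImports('a.ts'); // cached - same file, same synchronous run
getImports('b.ts'); // recomputed - filename changed invalidates the cache
```

Because the ESLint CLI lints files synchronously one after another, the scheduled reset only fires once the whole run completes; in a language server, the async boundary between lints fires it between every run.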
In ESLint run as a CLI, the entire linting process runs synchronously, so `resetCache` will only run once, at the very end of the entire linting process.

## My ulterior motive
I'm not a Panda user myself, and you may wonder why I'm here.
I'm one of the maintainers of Oxlint, and we received a bug report saying that this plugin does not work in Oxlint. Obviously we wanted to rectify that - we would like Oxlint to support Panda users!
The root cause of the problem using this plugin in Oxlint is that Oxlint re-uses the same `context` object over and over for each file. This is a performance optimization - we use object pooling wherever we can, as it reduces garbage collector pressure, leading to significantly faster lint times.

However, this makes `context` objects unsuitable for use as `WeakMap` keys, because the key is the same for every file. So it breaks the cache in this plugin.

We'd prefer, if at all possible, to keep the object pooling optimization in Oxlint, so I thought I'd submit this PR to see if you'd be willing to consider altering the plugin to be Oxlint-compatible. As it happens, this change should also be a nice speed-up for the reasons discussed above (in both ESLint and Oxlint).
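The incompatibility can be reproduced in a few lines. This is an illustrative sketch: the pooled context object below models Oxlint's behavior as described above, not its real implementation.

```typescript
// Sketch of why a pooled (reused) context object breaks a WeakMap-keyed
// per-file cache: the WeakMap key has the same identity for every file.
const importsCache = new WeakMap<object, string[]>();

function getImports(context: { filename: string }, parse: () => string[]): string[] {
  let imports = importsCache.get(context);
  if (imports === undefined) {
    imports = parse();
    importsCache.set(context, imports);
  }
  return imports;
}

// Oxlint-style pooling: one context object, mutated and reused per file
const pooledContext = { filename: '' };

pooledContext.filename = 'a.ts';
const a = getImports(pooledContext, () => ['css']);

pooledContext.filename = 'b.ts';
const b = getImports(pooledContext, () => ['cva']);

// BUG: b.ts receives a.ts's cached imports, because the WeakMap key
// (the pooled context object) is identical for both files
console.log(a === b); // → true
```

A filename-keyed cache like the one in this PR avoids the problem, because the key changes per file even when the `context` object's identity does not.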
Here is the Oxlint issue, in case you're interested: oxc-project/oxc#19986. Please note that much of the root cause analysis provided in that issue by the OP appears to be erroneous / AI confusion - we believe the real explanation is the `WeakMap` caches.