perf: replace WeakMap caches with faster mechanism #309

overlookmotel wants to merge 2 commits into chakra-ui:main

Conversation
Pull request overview
This PR replaces per-context WeakMap caches used during rule execution with single-entry caches that are shared across all rules, and adds lifecycle hooks to clear those caches once all rules finish linting a file (aimed at improving performance and compatibility with runners that reuse context objects).
Changes:
- Wraps `createRule` to call `ruleStarted()` at `create()` and `ruleFinished()` at `Program:exit`, tracking active rules per file and triggering cache resets.
- Replaces the `importsCache` and `scopeAnalysisCache` WeakMaps with module-level single-entry caches cleared after linting each file (plus a microtask safety net).
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| plugin/src/utils/index.ts | Wraps the base rule creator to inject rule start/finish tracking into all rules. |
| plugin/src/utils/helpers.ts | Implements rule tracking and replaces WeakMap caches with single-entry caches reset after file linting. |
```ts
rule.create = (context) => {
  ruleStarted()
  const visitor = originalCreate(context)

  const existingExit = visitor['Program:exit']
  if (existingExit) {
    visitor['Program:exit'] = (node) => {
      existingExit(node)
      ruleFinished()
    }
  } else {
    visitor['Program:exit'] = (_node) => ruleFinished()
```
`rule.create` is replaced with an arrow function, and `originalCreate` is invoked as `originalCreate(context)`. If ESLint (or another engine) calls `create` as a method (`rule.create(context)`), the original implementation would have received `this === rule`, but the current wrapper loses that binding. To preserve behavior/compatibility, use a normal function and call `originalCreate` with the same receiver (e.g. `originalCreate.call(this, context)` or `originalCreate.call(rule, context)`).
Suggested change:

```diff
-rule.create = (context) => {
-  ruleStarted()
-  const visitor = originalCreate(context)
-  const existingExit = visitor['Program:exit']
-  if (existingExit) {
-    visitor['Program:exit'] = (node) => {
-      existingExit(node)
-      ruleFinished()
-    }
-  } else {
-    visitor['Program:exit'] = (_node) => ruleFinished()
+rule.create = function (context) {
+  ruleStarted()
+  const visitor = originalCreate.call(this, context)
+  const existingExit = visitor['Program:exit']
+  if (existingExit) {
+    visitor['Program:exit'] = function (node) {
+      existingExit.call(this, node)
+      ruleFinished()
+    }
+  } else {
+    visitor['Program:exit'] = function (_node) {
+      ruleFinished()
+    }
```
Rules are already wrapped using `ESLintUtils.RuleCreator`, which does not pass on `this`, so IMO there's little gain in passing it on here.
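For anyone following along, a minimal sketch of the receiver-binding concern itself (a hypothetical rule object, not the plugin's code):

```ts
// Hypothetical rule whose create() inspects its receiver.
const rule = {
  name: 'example-rule',
  create(this: any): string | undefined {
    // When invoked as rule.create(), `this` is the rule object.
    return this === rule ? this.name : undefined
  },
}

const originalCreate = rule.create

// Arrow wrapper: the receiver is lost, so originalCreate no longer
// sees the rule object as `this`.
rule.create = () => originalCreate()
const arrowResult = rule.create() // receiver lost → undefined

// Function wrapper forwarding the receiver with .call(this, ...):
rule.create = function (this: any) {
  return originalCreate.call(this)
}
const fnResult = rule.create() // receiver preserved → 'example-rule'
```

Whether this matters in practice depends on whether the engine ever calls `create` as a method relying on `this`, which, per the comment above, `ESLintUtils.RuleCreator`-wrapped rules do not.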
plugin/src/utils/helpers.ts
Outdated
```ts
export function ruleStarted() {
  numRulesRunning++
  if (!resetMicrotaskScheduled) {
    queueMicrotask(resetCachesMicrotask)
    resetMicrotaskScheduled = true
  }
}

export function ruleFinished() {
  numRulesRunning--
  if (numRulesRunning === 0) {
    resetCaches()
  }
}

function resetCaches() {
  cachedImports = null
  cachedScopeAnalysis = null
}

function resetCachesMicrotask() {
  resetCaches()
  numRulesRunning = 0
  resetMicrotaskScheduled = false
}
```
The new global, single-entry caches + rule lifecycle tracking are a cross-cutting behavior change (shared across all rules, with reset timing that depends on the `createRule` instrumentation). There are lots of rule tests in `plugin/tests`, but nothing appears to assert cache sharing/reset behavior across multiple rules/files or across repeated lint runs. Adding a focused regression test would help prevent subtle stale-cache bugs (especially in alternative runners like Oxlint).
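As a hedged illustration of the behavior such a regression test could target (a simplified stand-in for the helpers above, not the plugin's actual code): the microtask safety net should clear state even when a rule starts a file but never reaches `Program:exit`, e.g. because an earlier rule threw.

```ts
// Simplified stand-in for the PR's helpers (illustrative only).
let numRulesRunning = 0
let resetMicrotaskScheduled = false
let cachedImports: string[] | null = null

function resetCachesMicrotask(): void {
  cachedImports = null
  numRulesRunning = 0
  resetMicrotaskScheduled = false
}

function ruleStarted(): void {
  numRulesRunning++
  if (!resetMicrotaskScheduled) {
    // Safety net: runs after the current synchronous lint run,
    // even if ruleFinished() is never called.
    queueMicrotask(resetCachesMicrotask)
    resetMicrotaskScheduled = true
  }
}

// Simulate a rule that populated the cache and then threw before
// Program:exit, so ruleFinished() never ran for it.
ruleStarted()
cachedImports = ['stale-import']
// Once microtasks run, cachedImports is null and the counter is 0 again.
```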
Apologies for raising two PRs doing the same thing! Of course, I understand if you don't have time to get into this. I hope you can understand my motivation: we're feeling out how to support a broad range of existing ESLint plugins in Oxlint, hopefully without compromising too much on the performance which differentiates us from ESLint. I've addressed Copilot's feedback, but the code could be cleaner. I'd be happy to clean it up, but I wonder if you could indicate at this stage whether you'd be willing in principle to consider either PR?
Alternative to #308.
Same as #308, this removes the WeakMap caches `importsCache` and `scopeAnalysisCache`, and lets `importsCache` be shared across all rules (perf improvement). This PR has the same goal, but uses a different mechanism.
It alters `createRule`, which is used to wrap every rule, to:

- Add `ruleStarted()` to the start of `create` (start of linting a file).
- Add `ruleFinished()` at end of linting in `Program:exit` (end of linting a file).

`ruleStarted()` increments a `numRulesRunning` counter, `ruleFinished()` decrements it. When the counter reaches zero, all rules have finished linting the file, and the caches are cleared.

This version more robustly answers the critique posed in #308 (comment). It doesn't rely on the assumption that language servers lint each file in a separate micro-tick (even though that's a safe assumption), and it will continue to work if this plugin decides to add auto-fixes to any rules in future.
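The counter mechanism described above can be sketched as follows (illustrative names, simulating one file being linted by three rules):

```ts
// Counter-based lifecycle tracking: each rule increments on create and
// decrements at Program:exit; when the count hits zero, every rule has
// finished the file and the shared caches can be safely cleared.
let numRulesRunning = 0
let cacheResets = 0

function ruleStarted(): void {
  numRulesRunning++
}

function ruleFinished(): void {
  numRulesRunning--
  if (numRulesRunning === 0) {
    cacheResets++ // stands in for resetCaches()
  }
}

// Simulate linting one file with three rules.
ruleStarted()
ruleStarted()
ruleStarted()
ruleFinished()
ruleFinished() // caches still warm: one rule is still mid-file
ruleFinished() // count hits 0 → caches cleared exactly once
```

The key property is that the caches survive as long as any rule is still mid-file, and are cleared exactly once per file regardless of how many rules ran.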
#308 is a bit more self-contained, but this version IMO is easier to reason about (which is usually a good thing!)