104 changes: 88 additions & 16 deletions plugin/src/utils/helpers.ts
@@ -65,6 +65,79 @@ export const isPandaConfigFunction = (context: RuleContext<any, any>, name: stri
return imports.some(({ alias, mod }) => alias === name && mod === '@pandacss/dev')
}

/**
* Cache for data which is expensive to compute and can be reused while linting a file.
* This data can be shared across multiple rules.
*
* Only 1 file is linted at a time, so there's no need to store data for multiple files at the same time.
* This simple cache just holds the data for the current file. If `get` is called with `Context` of a different file,
* data for the new file will be computed and cached.
*
* For situations where the process is long-running (e.g. a language server), the cache will be cleared
* on the next micro-tick, to free the cached data and allow it to be garbage collected.
* This also ensures if the same file is linted again, it doesn't get stale data from the last run.
*/
class Cache<Data> {
// Filename of file for which data is cached
currentFilename: string | null
// Data for file whose filename is `currentFilename`
currentData: Data | null

// Whether a timer for resetting cache has been set
resetTimerSet: boolean

// Function to compute data for a file
compute: (context: RuleContext<any, any>) => Data

/**
* Create cache.
* @param compute - Function to compute data for a file
*/
constructor(compute: (context: RuleContext<any, any>) => Data) {
this.currentFilename = null
this.currentData = null
this.resetTimerSet = false

this.compute = compute
}

/**
* Get data for the file currently being linted.
* If data for this file is already cached, return it.
* Otherwise, compute data and cache it.
* @param context - ESLint Context object
* @returns Data for the file
*/
get(context: RuleContext<any, any>) {
if (context.filename === this.currentFilename) {
return this.currentData!
}
Comment on lines +111 to +114

Copilot AI Mar 5, 2026
Keying the cache only by context.filename can return stale/incorrect data when multiple lint runs use the same filename string but different file contents within the same microtask (or when filename is a shared sentinel like <text>). Consider extending the cache key to include a content-identity signal such as context.sourceCode (object identity) or context.sourceCode.ast reference in addition to (or instead of) filename so that cache hits only occur for the same parsed source.

@overlookmotel (Author) Mar 5, 2026
This is quite complicated.

In the case of this plugin, I don't believe the situation described can happen.

ESLint CLI lints files one at a time. That process is entirely synchronous, but each file has a different filename, so that invalidates the cache when you move on to the next file.

Language server case is more complicated. This is addressed in the PR description above:

To make sure this plugin also works correctly in an IDE/language server context, where the same file can be linted over and over again as the user types or saves the file, the cache is also cleared after a micro-tick. So even if the file is edited and linted again, the 2nd lint run won't get stale data from the previous run.

This same point also applies for filename being a sentinel like <text>. ESLint CLI doesn't do this, only (to my knowledge) language servers do.

From my research (with help from Claude), all ESLint language servers that I'm aware of always have a micro-tick between linting different files.

  1. vscode-eslint (VS Code) - async handler with maxParallelism: 1. Always an async boundary between lint runs.
  2. vscode-langservers-extracted (Neovim, Helix, Zed, others) - This is vscode-eslint server extracted into a standalone npm package. Same code, same behavior.
  3. coc-eslint (Neovim via coc.nvim) - Forked from vscode-eslint. Same architecture.
  4. Emacs lsp-mode - Uses the vscode-eslint language server. Same code.
  5. efm-langserver (general purpose, Go) - Runs ESLint as an external process (often via eslint_d). Each lint is a separate process invocation, so ESLint's plugin state is completely fresh each time. No shared state between runs.
  6. eslint_d (daemon used by efm-langserver, null-ls, etc.) - Uses await eslint.execute() per connection. Async boundary between requests. Safe.

--

There is one circumstance in which linting the same file multiple times synchronously can occur. When applying autofixes, verifyAndFix() lints synchronously in a loop up to 10 times for the same file (but with different source code each time). In this case, the cache would be stale on the 2nd pass, because there's no micro-tick between passes.

But this plugin doesn't provide auto-fixes, only suggestions, so this circumstance doesn't arise.
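The timing argument above can be checked with a small standalone sketch. All names here are illustrative, not from the plugin: `lint` stands in for a whole synchronous lint run, and the string cache stands in for the real per-file data.

```typescript
// Sketch of the timing described above: a cache filled during a synchronous
// lint run is cleared by queueMicrotask, which fires before the next run
// because the runs are separated by an async boundary (as in vscode-eslint).
let cached: string | null = null
let computeCount = 0

function lint(source: string): string {
  if (cached === null) {
    computeCount++
    cached = source.toUpperCase() // stand-in for an expensive computation
    queueMicrotask(() => {
      cached = null // freed once the current synchronous run is done
    })
  }
  return cached
}

const done = (async () => {
  lint('foo') // computes
  lint('foo') // same synchronous run: cache hit
  await Promise.resolve() // async boundary between lint runs
  lint('bar') // the micro-tick already cleared the cache, so this recomputes
  console.log(computeCount) // 2
})()
```

The reset microtask is queued before the `await` continuation, so it always runs first; that is why the second lint run never sees the first run's data.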

@overlookmotel (Author)
#309 is an alternative implementation which you may find preferable, and which would support this plugin offering auto-fixes in future if it chose to.


// Set timer to free data on next micro-tick, after this file has been linted
if (!this.resetTimerSet) {
queueMicrotask(resetCache.bind(null, this))
this.resetTimerSet = true
}

const data = this.compute(context)
this.currentFilename = context.filename
this.currentData = data
return data
}
}

/**
* Reset cache.
* This function is a pure function, defined outside the Cache class, to avoid it acting as a closure
* and hanging on to data which could otherwise be garbage collected.
* @param cache - Cache to reset
*/
function resetCache<T>(cache: Cache<T>) {
cache.currentFilename = null
cache.currentData = null
cache.resetTimerSet = false
}

const _getImports = (context: RuleContext<any, any>) => {
const specifiers = getImportSpecifiers(context)

@@ -78,16 +151,15 @@ const _getImports = (context: RuleContext<any, any>) => {
}

// Caching imports per context to avoid redundant computations
const importsCache = new WeakMap<RuleContext<any, any>, ImportResult[]>()
const importsCache = new Cache(_getFilteredImports)

export const getImports = (context: RuleContext<any, any>) => {
if (importsCache.has(context)) {
return importsCache.get(context)!
}
function _getFilteredImports(context: RuleContext<any, any>): ImportResult[] {
const imports = _getImports(context)
const filteredImports = imports.filter((imp) => syncAction('matchImports', getSyncOpts(context), imp))
importsCache.set(context, filteredImports)
return filteredImports
return imports.filter((imp) => syncAction('matchImports', getSyncOpts(context), imp))
}

export const getImports = (context: RuleContext<any, any>) => {
return importsCache.get(context)
}

const isValidStyledProp = <T extends Node>(node: T, context: RuleContext<any, any>) => {
@@ -106,7 +178,13 @@ export const isPandaIsh = (name: string, context: RuleContext<any, any>) => {
return syncAction('matchFile', getSyncOpts(context), name, imports)
}

const scopeAnalysisCache = new WeakMap<object, ReturnType<typeof analyze>>()
const scopeAnalysisCache = new Cache(_analyseScope)

function _analyseScope(context: RuleContext<any, any>): ReturnType<typeof analyze> {
return analyze(context.sourceCode.ast as TSESTree.Node, {
sourceType: 'module',
})
}

const findDeclaration = (name: string, context: RuleContext<any, any>) => {
try {
@@ -117,13 +195,7 @@ const findDeclaration = (name: string, context: RuleContext<any, any>) => {
return undefined
}

let scope = scopeAnalysisCache.get(src)
if (!scope) {
scope = analyze(src.ast as TSESTree.Node, {
sourceType: 'module',
})
scopeAnalysisCache.set(src, scope)
}
const scope = scopeAnalysisCache.get(context)

const decl = scope.variables
.find((v) => v.name === name)
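For reference outside the diff view, the pattern the PR adds can be re-sketched in isolation. The simplified `Context` type and all names below are mine, not the plugin's, and for brevity this sketch resets via an arrow closure where the PR deliberately uses a bound standalone function to avoid the closure retaining otherwise-collectable data.

```typescript
// Per-file cache: holds data for one filename at a time and frees it on the
// next micro-tick, so a long-running process never keeps stale data.
type Context = { filename: string; sourceText: string }

class FileCache<Data> {
  private filename: string | null = null
  private data: Data | null = null
  private resetQueued = false

  constructor(private compute: (context: Context) => Data) {}

  get(context: Context): Data {
    // Cache hit: same file within the same synchronous lint run.
    if (context.filename === this.filename) return this.data as Data

    if (!this.resetQueued) {
      // Free the data once the current synchronous run finishes.
      queueMicrotask(() => {
        this.filename = null
        this.data = null
        this.resetQueued = false
      })
      this.resetQueued = true
    }

    this.data = this.compute(context)
    this.filename = context.filename
    return this.data
  }
}

// Usage: multiple rules share one expensive per-file computation.
let computations = 0
const wordCountCache = new FileCache((ctx: Context) => {
  computations++
  return ctx.sourceText.split(/\s+/).length
})

const ctx = { filename: 'a.ts', sourceText: 'const a = 1' }
wordCountCache.get(ctx) // computes
wordCountCache.get(ctx) // cache hit within the same synchronous run
console.log(computations) // 1
```

As in the PR, keying only on `filename` is safe here precisely because of the micro-tick reset: two runs over the same filename with different contents are always separated by an async boundary in the hosts the author surveyed.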