@bjnewman commented on Feb 13, 2026

Description

Hey, thanks for all the work on Vite — really appreciate it.

I was looking into #20084 and the stalled #20085, and wanted to pick up where @hanford left off. @sapphi-red asked for concrete benchmark numbers on that PR, so I focused on producing those alongside a minimal fix.

The problem

isCircularImport performs a DFS traversal on every call to cachedRequest, including for modules that have already been fully evaluated. Since cachedRequest is called once per import edge, the total cost of these traversals grows superlinearly with module count — our profiling shows roughly quadratic scaling behavior in practice. At 25K modules this adds up to tens of millions of graph traversals, most of which are unnecessary.
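To make the scaling argument concrete: a DFS that runs once per import edge does O(V + E) work each time, so the total across all edges is O(E · (V + E)). The sketch below (illustrative names only, not Vite's actual internals) counts DFS node visits on a simple chain-shaped graph; doubling the module count roughly quadruples the work.

```typescript
type Graph = Map<string, string[]>

// Build a chain graph: module i imports module i+1.
function buildChain(n: number): Graph {
  const g: Graph = new Map()
  for (let i = 0; i < n; i++) {
    g.set(`m${i}`, i + 1 < n ? [`m${i + 1}`] : [])
  }
  return g
}

// Iterative DFS from `start`, counting how many nodes it visits.
function dfsVisits(g: Graph, start: string): number {
  const seen = new Set<string>()
  const stack = [start]
  let visits = 0
  while (stack.length) {
    const id = stack.pop()!
    if (seen.has(id)) continue
    seen.add(id)
    visits++
    for (const dep of g.get(id) ?? []) stack.push(dep)
  }
  return visits
}

// Total DFS work when a circular-import check runs once per import edge,
// as it does in the baseline.
function totalWork(n: number): number {
  const g = buildChain(n)
  let total = 0
  for (const [id, deps] of g) {
    for (const _ of deps) total += dfsVisits(g, id)
  }
  return total
}
```

On this chain, `totalWork(n)` is n(n+1)/2 − 1: quadratic in n, which matches the superlinear growth seen in the benchmark numbers below.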

The fix

A 4-line early return: if a module is already evaluated (mod.evaluated && mod.promise), skip the circular check and return the cached result directly. Resolved promises can't deadlock, so the circular dependency check is unnecessary for fully-evaluated modules.
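The shape of the change, sketched with simplified names (this is not Vite's exact code; `ModNode`, the injected `isCircularImport` callback, and `directRequest` stand in for the real internals):

```typescript
interface ModNode {
  evaluated?: boolean
  promise?: Promise<unknown>
}

async function cachedRequest(
  mod: ModNode,
  isCircularImport: () => void, // stands in for the DFS cycle check
  directRequest: () => Promise<unknown>,
): Promise<unknown> {
  // The early return: a fully evaluated module has a settled promise,
  // so awaiting it cannot deadlock and the DFS check is skipped.
  if (mod.evaluated && mod.promise) {
    return mod.promise
  }
  isCircularImport() // still runs during the module's first evaluation
  if (!mod.promise) {
    mod.promise = (async () => {
      try {
        return await directRequest()
      } finally {
        mod.evaluated = true // set only after directRequest completes
      }
    })()
  }
  return mod.promise
}
```

On a second request for the same module, the function returns the cached promise without touching the graph at all.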

CPU profile (baseline, before fix)

[Screenshot 2026-02-12: speedscope flame graph of the baseline CPU profile]

CPU profiling with --cpu-prof shows isCircularImport consuming the majority of JavaScript execution time in the baseline. The flame graph above (via speedscope, Left Heavy view) shows the recursive DFS calls stacking up across the entire profile.

Benchmark results

Synthetic module graph benchmark (configurable N, avg 3 imports/module, 5% cycle density). We instrumented cachedRequest and isCircularImport to get exact call counts, then measured wall-clock time across multiple scales:

| Modules | Before | After | Speedup | isCircularImport calls |
| ------: | -----: | ----: | ------: | ---------------------: |
| 100 | 3.9ms | 3.0ms | 1.3x | 12.4K → 5.0K |
| 500 | 33.4ms | 18.5ms | 1.8x | 324K → 125K |
| 1,000 | 123.8ms | 56.4ms | 2.2x | 1.45M → 500K |
| 2,000 | 420.4ms | 187.3ms | 2.2x | 5.4M → 2.0M |
| 5,000 | 2,650ms | 1,065ms | 2.5x | 31.9M → 12.5M |

Scaling is superlinear in both cases (doubling N gives 3-4x time), but the fix cuts absolute time by 2-2.5x across all scales. At the issue reporter's 25K module scale, this should meaningfully reduce the 60-90s overhead they reported.
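The benchmark script itself isn't reproduced in this description, but a generator with the stated shape (N modules, ~3 imports per module, ~5% cycle density) could look roughly like this; the seeded PRNG keeps runs reproducible, and all names here are hypothetical:

```typescript
type ModuleGraph = Map<number, number[]>

// Small deterministic PRNG (mulberry32) so graph generation is reproducible.
function mulberry32(seed: number): () => number {
  return () => {
    seed |= 0
    seed = (seed + 0x6d2b79f5) | 0
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed)
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296
  }
}

function buildGraph(
  n: number,
  avgImports = 3,
  cycleDensity = 0.05,
  seed = 1,
): ModuleGraph {
  const rand = mulberry32(seed)
  const g: ModuleGraph = new Map()
  for (let i = 0; i < n; i++) g.set(i, [])
  for (let i = 1; i < n; i++) {
    const deps = g.get(i)!
    // Forward edges to earlier modules keep the baseline acyclic.
    for (let k = 0; k < avgImports; k++) {
      deps.push(Math.floor(rand() * i))
    }
    // With probability cycleDensity, add a back-edge from an earlier
    // module to this one, which can close a cycle.
    if (rand() < cycleDensity) {
      const ancestor = Math.floor(rand() * i)
      g.get(ancestor)!.push(i)
    }
  }
  return g
}
```

Each generated module would then be emitted as a source file importing its dependencies, and the whole graph loaded through the module runner under test.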

Why this is safe

  • mod.evaluated = true is set in the finally block of cachedRequest only after directRequest completes
  • A resolved promise returns on the next microtick — no deadlock possible
  • The circular dependency check still runs for every module during its first evaluation (when cycles can actually cause deadlocks)
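The second bullet can be demonstrated directly: awaiting an already-settled promise just yields one microtask turn and resumes with the cached value; no module code re-runs and nothing can block. (The `cached` promise below stands in for mod.promise.)

```typescript
async function awaitSettled(): Promise<string[]> {
  const order: string[] = []
  const cached = Promise.resolve('cached result') // stands in for mod.promise
  order.push('before await')
  const value = await cached // resumes on the next microtask tick
  order.push(`after await: ${value}`)
  return order
}
```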

Test results

All 37 server-runtime tests pass (including all circular/cyclic import scenarios). Full unit, serve, and build test suites pass with zero failures.

Further optimization

The remaining superlinear behavior comes from the fact that each module's first evaluation still triggers a DFS traversal. Incremental cycle detection or memoized results could address this in a follow-up, but this minimal change provides meaningful improvement with very low risk.
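One possible shape for that follow-up, sketched with hypothetical names (Vite's real data structures differ, and any real version would need cache invalidation when the module graph changes, e.g. on HMR):

```typescript
// Memoize cycle-check results per import edge so the first-evaluation
// DFS runs at most once per edge.
const cycleCache = new Map<string, boolean>()

function memoizedIsCircular(
  importer: string,
  imported: string,
  dfsCheck: (importer: string, imported: string) => boolean,
): boolean {
  const key = `${importer}\u0000${imported}`
  const cached = cycleCache.get(key)
  if (cached !== undefined) return cached
  const result = dfsCheck(importer, imported)
  cycleCache.set(key, result)
  return result
}
```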

Happy to help with anything else on this issue.

Fixes #20084

@sapphi-red added the p2-nice-to-have (Not breaking anything but nice to have) and feat: ssr labels on Feb 13, 2026
@sapphi-red (Member) commented:

Awesome!

> Synthetic module graph benchmark (configurable N, avg 3 imports/module, 5% cycle density).

Could you share the script that creates this benchmark so that I can try it out and ensure the results don't degrade in the future?


Linked issue: isCircularImport complexity impacting Vite performance