Conversation

@Karavil (Contributor) commented Jan 6, 2026

Problem

When using a "self-fetching components" pattern where each component fetches its own data via Zero, a page load can trigger 100-150+ queries simultaneously. These all get batched and sent to the server in one changeDesiredQueries message, which overwhelms the view-syncer.
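To make the batching behavior concrete, here is an illustrative sketch (not Zero's actual internals; `QueryBatcher`, `addQuery`, and `sendMessage` are hypothetical names): each mounting component registers its query, and a single flush on the next microtask packs everything into one message.

```typescript
// Hypothetical sketch of microtask-batched query registration.
// When 150 components mount in the same tick, all 150 addQuery calls
// land in ONE flush, i.e. one changeDesiredQueries message.
class QueryBatcher {
  #pending: string[] = [];
  #scheduled = false;

  constructor(private sendMessage: (queries: string[]) => void) {}

  addQuery(queryID: string) {
    this.#pending.push(queryID);
    if (!this.#scheduled) {
      this.#scheduled = true;
      queueMicrotask(() => this.#flush());
    }
  }

  #flush() {
    this.#scheduled = false;
    const batch = this.#pending;
    this.#pending = [];
    this.sendMessage(batch); // one message carrying every pending query
  }
}
```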

How queries back up on the server

The view-syncer processes queries sequentially per clientGroupID due to a lock:

// view-syncer.ts
readonly #lock = new Lock();

async changeDesiredQueries(ctx, msg) {
  await this.#runInLockForClient(ctx, msg, this.#handleConfigUpdate);
}

When the client sends 150 queries in one message, they all get processed under one lock acquisition:

CLIENT                                 SERVER (view-syncer)
──────                                 ────────────────────

Page loads, 150 components mount
        │
        ▼
  ┌───────────┐       ONE message
  │ flushBatch│─────────────────────────►  lock acquired ONCE
  └───────────┘       (150 queries)              │
                                                 ▼
                                           ┌───────────┐
                                           │ Q1   50ms │
                                           │ Q2   50ms │
                                           │ ...       │  LOCK HELD
                                           │ Q150 50ms │  ENTIRE TIME
                                           └───────────┘
                                                 │
                                           ~7.5 seconds
                                                 │
                                           lock released

Our Workaround

This PR sends queries in separate messages so the server acquires/releases the lock for each:

CLIENT                                 SERVER (view-syncer)
──────                                 ────────────────────

  flushBatch adds to waiting queue
        │
        ├──── msg 1 (Q1) ────────────►  lock, process Q1, _maybe_ release
        ├──── msg 2 (Q2) ────────────►  lock, process Q2, _maybe_ release  
        ├──── msg 3 (Q3) ────────────►  lock, process Q3, _maybe_ release
        ├──── msg 4 (Q4) ────────────►  lock, process Q4, _maybe_ release
        ├──── msg 5 (Q5) ────────────►  lock, process Q5, *release*
        │     (at max in-flight, wait)
        │                                     │
        │  ◄──────────────────────────────────┘ Q1 responds
        │
        ├──── msg 6 (Q6) ────────────►  lock, process Q6
        ...

  Between each message, server releases lock.
  Pings have a chance to slip through.

Because each query arrives in its own message, the server acquires and releases the lock per query instead of holding it for the entire batch, so other work can interleave.

Not Intended to Merge

This PR is a hacky client-side workaround, not intended for merging. But... what's the best way to prevent the view-syncer from blocking on large query batches?

Some options (not in any particular order):

  1. Server responds incrementally. Instead of waiting until all queries are processed, send data back query-by-query as each one completes. This could look janky, but it works for our use case!

  2. Server-side timeout or "good enough" threshold. After N queries or M seconds, yield data before continuing, even if queries take longer overall.
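Option 2 could look roughly like the sketch below (entirely hypothetical: `processQuery`, `flushPartialResults`, and the lock shape are stand-ins for view-syncer internals, not real APIs). The batch is processed in chunks, and the lock is released after each chunk so pings and other clients can interleave.

```typescript
// Hedged sketch of chunked batch processing with periodic lock release.
async function handleBatchChunked(
  lock: { withLock<T>(fn: () => Promise<T>): Promise<T> },
  queries: string[],
  processQuery: (q: string) => Promise<void>,
  flushPartialResults: () => void,
  chunkSize = 10,
) {
  for (let i = 0; i < queries.length; i += chunkSize) {
    const chunk = queries.slice(i, i + chunkSize);
    await lock.withLock(async () => {
      for (const q of chunk) {
        await processQuery(q);
      }
      flushPartialResults(); // clients see data incrementally
    });
    // Lock is released here; pings and other clients get a turn.
  }
}
```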

@vercel vercel bot commented Jan 6, 2026

@Karavil is attempting to deploy a commit to the Rocicorp Team on Vercel.

A member of the Team first needs to authorize it.

@Karavil Karavil force-pushed the feat/query-flight-control branch from f4967f3 to faedc5c on January 7, 2026 01:02
When self-fetching components mount simultaneously, they can send 100-150+
queries in a single changeDesiredQueries message. The view-syncer processes
this under a single lock, blocking pings for ~7.5s and causing client timeouts.

This patch limits concurrent in-flight queries to 5 and sends one query per
WebSocket message, allowing the server to release its lock between queries.

Key changes:
- Track in-flight queries with 2s timeout fallback
- Queue waiting queries, send as slots open
- Cancel put/del pairs (component unmounted before send)
- Send one query per message so server can process pings between queries
@Karavil Karavil force-pushed the feat/query-flight-control branch from faedc5c to 2640834 on January 7, 2026 01:19