fix(heatmap): cache submission details in sessionStorage + fix prevSubmissionIdsRef bug#69
Open
NianJiuZst wants to merge 3 commits into pinchbench:main
Conversation
Two performance/UX fixes:
1. Submission-level cache (Map<submission_id, ModelTaskData>)
- Each fetched model's full task results are cached permanently.
- When switching categories (which re-renders the parent and passes a new
entries array reference), the cache is checked first — cached models are
applied instantly without any API calls.
2. Skip re-fetch when submission IDs are unchanged
- Added prevSubmissionIdsRef to track the previous entries' submission_id list.
- Before making any requests, compare current IDs vs previous IDs.
- If the model list is the same (same IDs, same order), skip entirely.
- This is the key fix: category chip clicks update the URL, which triggers
a page re-render passing a new entries array — but with the same models.
The effect now detects this and does zero work.
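The two mechanisms above can be sketched outside React as plain functions. This is an illustrative sketch, not the PR's actual code: the `TaskInfo`/`ModelTaskData` shapes and the helper names `partitionBySubmissionId`/`sameIdList` are assumptions.

```typescript
// Hypothetical shapes of a cached model's task results.
interface TaskInfo { score: number }
interface ModelTaskData { tasks: Record<string, TaskInfo> }

// (1) Module-scope cache keyed by submission_id; it survives re-renders
// because it lives outside the component.
const submissionCache = new Map<string, ModelTaskData>();

// Split incoming submission IDs into those already cached (applied
// instantly, no API call) and those that still need a fetch.
function partitionBySubmissionId(ids: string[]): { cached: string[]; toFetch: string[] } {
  const cached: string[] = [];
  const toFetch: string[] = [];
  for (const id of ids) {
    (submissionCache.has(id) ? cached : toFetch).push(id);
  }
  return { cached, toFetch };
}

// (2) Order-sensitive comparison against the previous render's ID list;
// if the lists are identical, the effect can bail out and do zero work.
function sameIdList(prev: string[] | null, current: string[]): boolean {
  return prev !== null &&
    prev.length === current.length &&
    prev.every((id, i) => id === current[i]);
}
```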
Performance impact:
- Category switch (same benchmark version): ~0ms (was ~1000-2000ms)
- Version switch: still needs to fetch new models, but cached models are reused
- Concurrency: batch fetch uses 10 parallel requests (was 5 serial batches)
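A concurrency-limited batch fetch along these lines would match the figure above; the helper name `fetchInBatches` and the generic `fetchOne` callback are hypothetical, and only the limit of 10 comes from the PR:

```typescript
// Fetch all IDs in parallel batches of `limit`. Requests inside a batch
// run concurrently; batches themselves run one after another.
async function fetchInBatches<T>(
  ids: string[],
  fetchOne: (id: string) => Promise<T>,
  limit = 10
): Promise<T[]> {
  const results: T[] = [];
  for (let i = 0; i < ids.length; i += limit) {
    const batch = ids.slice(i, i + limit);
    results.push(...(await Promise.all(batch.map(fetchOne))));
  }
  return results; // results stay in input order
}
```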
UX fix:
- Loading state split into 'initial' (full spinner) and 'incremental' (chips
stay interactive + progress bar) so users can keep clicking while loading
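The split loading state could be modeled as a discriminated union; only the 'initial' and 'incremental' names come from the PR, the 'idle' state and the progress fields are assumptions:

```typescript
// Loading state as a discriminated union: 'initial' shows the full
// spinner, 'incremental' keeps chips interactive with a progress bar.
type LoadingState =
  | { kind: "idle" }
  | { kind: "initial" }
  | { kind: "incremental"; done: number; total: number };

// Only the initial full-page load blocks interaction with the chips.
function chipsInteractive(state: LoadingState): boolean {
  return state.kind !== "initial";
}
```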
- Change `ModelTaskData.tasks` from `Map` to a plain object (serializable)
- Persist the submission cache to `sessionStorage` (key: `pinchbench_heatmap_cache`) so heatmap data survives page refreshes
- Move `prevSubmissionIdsRef.current = currentIds` BEFORE the early-return check (it was only set inside the 'all cached' branch, causing the cache lookup to use stale prevIds on subsequent renders)
- Batch-persist new cache entries to `sessionStorage` after each `CONCURRENCY_LIMIT` batch to avoid excessive serialization cost
Problem
Clicking a category chip on the Task Heatmap triggers 50+ redundant fetch requests, even when the data has already been fetched and cached.
Root Causes
prevSubmissionIdsRef was set too late
The line `prevSubmissionIdsRef.current = currentIds` was only set inside the "all entries are cached" branch. In all other code paths (partial cache, full fetch), the ref was never updated, so on subsequent renders the cache lookup compared against stale previous IDs, causing every render to think nothing was cached.
Cache was in-memory only (not persistent)
`submissionCache` was a JavaScript Map stored in module scope. On page refresh, the cache was completely empty, forcing a full re-fetch of all 50+ models.
tasks was a Map (not serializable)
`ModelTaskData.tasks` used a `Map` type, which cannot be JSON-serialized, making it impossible to persist to `sessionStorage`.
Changes
1. Fixed prevSubmissionIdsRef timing
Moved `prevSubmissionIdsRef.current = currentIds` to before any early returns or cache checks. Previously it was only set in the "all cached" branch, causing the ref to hold stale IDs on every other code path.
2. Changed tasks from Map to plain `Record<string, TaskInfo>`
Maps cannot be serialized to JSON. Replaced with a plain object so the cache can be stored in sessionStorage and survive page refreshes.
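A quick illustration of why this change matters: `JSON.stringify` silently drops a Map's entries but round-trips a plain record intact. The `task-1`/`score` shape here is made up for the demo:

```typescript
// The same data held as a Map and as a plain Record.
const asMap = new Map([["task-1", { score: 0.5 }]]);
const asRecord: Record<string, { score: number }> = { "task-1": { score: 0.5 } };

// A Map has no own enumerable properties, so it serializes to "{}",
// losing all entries; the Record keeps them.
const mapJson = JSON.stringify(asMap);       // "{}"
const recordJson = JSON.stringify(asRecord); // '{"task-1":{"score":0.5}}'
```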
3. Added sessionStorage persistence
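A minimal sketch of what such helpers might look like. The cache key and the function names come from the PR; the injectable `StorageLike` parameter and the cache shape are assumptions added so the logic runs outside a browser (the real code would presumably use `window.sessionStorage` directly):

```typescript
const CACHE_KEY = "pinchbench_heatmap_cache";

// Minimal subset of the Web Storage interface, injected for testability.
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Assumed cache shape: submission_id -> serializable task results.
type Cache = Record<string, { tasks: Record<string, { score: number }> }>;

function loadCacheFromSession(storage: StorageLike): Cache {
  try {
    const raw = storage.getItem(CACHE_KEY);
    return raw ? (JSON.parse(raw) as Cache) : {};
  } catch {
    return {}; // corrupt or unreadable entry: start with an empty cache
  }
}

function saveCacheToSession(storage: StorageLike, cache: Cache): void {
  try {
    storage.setItem(CACHE_KEY, JSON.stringify(cache));
  } catch {
    // quota exceeded or storage unavailable: caching is best-effort
  }
}
```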
- Cache key: `pinchbench_heatmap_cache`
- `loadCacheFromSession()`: page refreshes restore cached data immediately
- `saveCacheToSession()`: called only when the batch yielded new entries
4. Cleanup (post-review)
- Removed the `newCacheEntries` intermediate variable: `currentCache` already accumulates all results incrementally, so there is no need for a separate accumulator
- `saveCacheToSession()` is now only called when `validBatchResults.length > 0`, to avoid unnecessary serialization of an unchanged cache
Result