feat: optimize Mesh V2 event queue to prevent bloating #76
Conversation
- Added MAX_EVENT_QUEUE_SIZE (100) limit to eventQueue
- Implemented deduplication for events with the same name and payload
- Updated processNextBroadcast to process all due events in one frame
- Added statistics tracking for skipped/dropped events
- Updated unit tests to match the new behavior

Addressing smalruby/smalruby3-gui#500

Co-Authored-By: Gemini <noreply@google.com>
### Excellent Implementation! ✅

This PR perfectly implements the event queue optimization as specified in Issue scratchfoundation#500. All requirements have been met with clean, well-tested code.

#### ✅ Implementation Quality

**1. Sender-side: Deduplication + Size Limit (mesh-service.js:815-848)**

Deduplication logic:

```js
const isDuplicate = this.eventQueue.some(item =>
    item.eventName === eventName && item.payload === payload
);
```
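One caveat worth noting: the check uses strict equality (`===`) on the payload, so object payloads are only treated as duplicates when the exact same reference is enqueued twice; structurally equal objects pass through. A minimal sketch (the standalone `isDuplicate` helper and the sample queue are illustrative, not from the PR):

```js
// Standalone version of the dedup check. `===` compares object payloads
// by reference, so two structurally equal objects are NOT deduplicated.
function isDuplicate (eventQueue, eventName, payload) {
    return eventQueue.some(item =>
        item.eventName === eventName && item.payload === payload
    );
}

const payload = {x: 1};
const queue = [{eventName: 'jump', payload}];

console.log(isDuplicate(queue, 'jump', payload)); // true  (same reference)
console.log(isDuplicate(queue, 'jump', {x: 1}));  // false (new, equal object)
console.log(isDuplicate(queue, 'run', payload));  // false (different name)
```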
Size limit logic:

```js
if (this.eventQueue.length >= this.MAX_EVENT_QUEUE_SIZE) {
    const dropped = this.eventQueue.shift(); // FIFO
    this.eventQueueStats.dropped++;
    if (this.eventQueueStats.dropped % 10 === 1) {
        log.warn(...);
    }
}
```
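The FIFO drop can be seen in isolation with a tiny sketch (the `enqueueEvent` wrapper is hypothetical; the PR's default limit is 100, shrunk to 3 here for demonstration):

```js
// Demonstration of the FIFO size limit: when the queue is full, the oldest
// event is dropped to make room for the newest one.
const MAX_EVENT_QUEUE_SIZE = 3; // PR default is 100
const eventQueue = [];
const eventQueueStats = {dropped: 0};

function enqueueEvent (eventName, payload) {
    if (eventQueue.length >= MAX_EVENT_QUEUE_SIZE) {
        eventQueue.shift(); // drop the oldest event (FIFO)
        eventQueueStats.dropped++;
    }
    eventQueue.push({eventName, payload});
}

['a', 'b', 'c', 'd', 'e'].forEach(name => enqueueEvent(name, null));
console.log(eventQueue.map(e => e.eventName)); // ['c', 'd', 'e']
console.log(eventQueueStats.dropped);          // 2
```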
Statistics reporting:

```js
reportEventStatsIfNeeded () {
    const now = Date.now();
    const elapsed = now - this.eventQueueStats.lastReportTime;
    if (elapsed >= 10000 && ...) {
        log.info(`Mesh V2: Event Queue Stats (last ${(elapsed / 1000).toFixed(1)}s): ...`);
        // Reset counters
    }
}
```
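A sketch of the 10-second throttle using the same field names, with the `log.info` call replaced by a counter so the behavior is observable. The activity condition elided as `...` in the PR snippet is assumed here to check for nonzero counters:

```js
// Report at most once every 10 seconds, and only when there is activity.
// lastReportTime is backdated 11s so the first call fires immediately.
const stats = {skipped: 5, dropped: 2, lastReportTime: Date.now() - 11000};
let reports = 0;

function reportEventStatsIfNeeded () {
    const now = Date.now();
    const elapsed = now - stats.lastReportTime;
    if (elapsed >= 10000 && (stats.skipped > 0 || stats.dropped > 0)) {
        reports++; // stands in for log.info(...)
        stats.skipped = 0;
        stats.dropped = 0;
        stats.lastReportTime = now;
    }
}

reportEventStatsIfNeeded(); // 11s elapsed → reports once and resets counters
reportEventStatsIfNeeded(); // just reported, counters zero → no-op
console.log(reports); // 1
```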
**2. Receiver-side: Batch Processing (mesh-service.js:496-548)**

Before (one event per frame, slow):

```js
const {event, offsetMs} = eventsToProcess[0];
this.broadcastEvent(event);
```

After (all due events in one frame, fast):

```js
eventsToProcess.forEach(({event, offsetMs}) => {
    log.info(`Mesh V2: Broadcasting event: ${event.name} ...`);
    this.broadcastEvent(event);
    this.lastBroadcastOffset = offsetMs;
});
```

Changes:
Performance Impact:
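The performance difference boils down to how many frames it takes to drain a backlog of already-due events; a toy comparison (an illustrative model, not the PR's code):

```js
// Old behavior: one event per frame → draining N due events takes N frames.
function framesToDrainOnePerFrame (dueEvents) {
    return dueEvents.length;
}

// New behavior: all due events are broadcast in a single frame.
function framesToDrainBatch (dueEvents) {
    return dueEvents.length > 0 ? 1 : 0;
}

const backlog = new Array(90).fill({name: 'tick'});
console.log(framesToDrainOnePerFrame(backlog)); // 90 frames (~3s at 30fps)
console.log(framesToDrainBatch(backlog));       // 1 frame
```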
**3. Test Coverage**

New unit tests (test/unit/mesh_service_v2.js):

Updated integration test (test/unit/mesh_service_v2_integration.js:112):

```js
service.MAX_EVENT_QUEUE_SIZE = 2000;
```
#### 📊 Performance Analysis

Memory Usage

Processing Speed

Event Throughput

#### ✅ Verification Checklist

#### 🚀 Recommendation

LGTM - Ready to merge! This optimization is critical for preventing memory bloat and improving performance. The implementation is clean, well-tested, and follows best practices.

#### 📝 Suggested Next Steps

Reviewed-By: Claude Sonnet 4.5 noreply@anthropic.com
- Updated processNextBroadcast to process only events within a 33ms window of event-time per frame
- Prevents main thread spikes during large event backlogs
- Added unit tests for window splitting and simultaneous event processing

🤖 Generated with [Gemini Code](https://gemini.google.com/code)

Co-Authored-By: Gemini <noreply@google.com>
### Excellent Additional Optimization! ✅

The 33ms window limitation is a brilliant addition that prevents main thread freezing while maintaining responsive event processing.

#### ✅ 33ms Window Implementation (mesh-service.js:517-527)

```js
// Apply a one-frame (33ms) window limit
if (windowBase === null) {
    windowBase = offsetMs;
} else if (offsetMs >= windowBase + 33) {
    log.debug(`Mesh V2: Window limit reached (33ms). ` +
        `Remaining events will be processed in next frames.`);
    break;
}
```

Why 33ms?
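33ms is roughly one frame at 30fps (1000/30 ≈ 33.3ms), the Scratch VM's usual step rate, so each tick drains at most one frame's worth of event-time. A sketch of the resulting window splitting (the `splitIntoWindows` helper is illustrative; the real code breaks out of its processing loop instead of building arrays):

```js
const FPS = 30;
const FRAME_MS = Math.floor(1000 / FPS); // 33

// Hypothetical helper: group events (sorted by offsetMs) into per-frame
// windows using the same windowBase comparison as the PR.
function splitIntoWindows (events, windowMs = FRAME_MS) {
    const windows = [];
    let windowBase = null;
    let current = [];
    for (const ev of events) {
        if (windowBase === null) {
            windowBase = ev.offsetMs;
        } else if (ev.offsetMs >= windowBase + windowMs) {
            windows.push(current); // window limit reached → next frame
            current = [];
            windowBase = ev.offsetMs;
        }
        current.push(ev);
    }
    if (current.length > 0) windows.push(current);
    return windows;
}

// Same layout as Test 2 in the PR: events at 0ms, 20ms, 40ms, 60ms
// split into two frames of two events each.
const events = [0, 20, 40, 60].map(offsetMs => ({offsetMs}));
console.log(splitIntoWindows(events).map(w => w.length)); // [2, 2]
```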
How it works:
#### 📊 Performance Analysis

Scenario 1: No Backlog (Real-time)

Scenario 2: Small Backlog (100ms behind)

Scenario 3: Large Backlog (1000ms behind)

Scenario 4: Simultaneous Events (50 events at the same time)
#### ✅ Test Coverage

**Test 1: Normal Case (lines 163-200)**

```js
// Events: 0ms, 0.001ms, 0.002ms
// All within 33ms → all processed in 1 frame
st.equal(broadcasted.length, 3);
```

✅ Verifies basic batch processing still works

**Test 2: 33ms Window Splitting (lines 204-250)**

```js
// Events: 0ms, 20ms, 40ms, 60ms
// Frame 1: 0-33ms → e1, e2 (2 events)
// Frame 2: 40-73ms → e3, e4 (2 events)
st.equal(broadcasted.length, 2); // Frame 1
st.equal(broadcasted.length, 4); // Frame 2
```

✅ Verifies window splitting works correctly

**Test 3: Simultaneous Events (lines 252-292)**

```js
// 50 events all at 0ms
// All within 33ms window → all in 1 frame
st.equal(broadcasted.length, 50);
```

✅ Verifies burst handling capability

#### 🎯 Benefits
#### 🔬 Example: 1000 Event Backlog

Before (unlimited per frame):

After (33ms window):
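Rough arithmetic for the 1000ms backlog case (a sketch under the assumption of evenly spread events, not measured behavior):

```js
// With a 33ms event-time window per frame, a backlog spanning 1000ms of
// event-time drains over ~31 frames instead of blocking a single frame.
const BACKLOG_MS = 1000;
const WINDOW_MS = 33;

const frames = Math.ceil(BACKLOG_MS / WINDOW_MS); // 31
const drainSeconds = frames / 30;                 // real time at 30fps

console.log(frames);                  // 31
console.log(drainSeconds.toFixed(2)); // "1.03"
```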
#### ✅ Code Quality

#### 🚀 Recommendation

LGTM - Excellent optimization! This is exactly the right approach for handling event backlogs without freezing the UI. The 33ms window provides:

Reviewed-By: Claude Sonnet 4.5 noreply@anthropic.com
- Removed trailing 'r' typo in mesh-service.js
- Shortened long test name in mesh_service_v2.js
- Fixed multiple spaces in mesh_service_v2.js

🤖 Generated with [Gemini Code](https://gemini.google.com/code)

Co-Authored-By: Gemini <noreply@google.com>
Summary
This PR optimizes the Mesh V2 event queue to prevent memory bloating and improve performance when a large number of events are fired (e.g., in a 'forever' loop).
Changes
- Added `MAX_EVENT_QUEUE_SIZE` (default 100) to limit the number of queued events.
- Updated `processNextBroadcast` to process all events whose timing has arrived in a single frame, instead of just one per frame. This significantly improves responsiveness when receiving many events.

Test Coverage

- `test/unit/mesh_service_v2.js`:
  - Added `fireEvent deduplicates events` test.
  - Added `fireEvent respects MAX_EVENT_QUEUE_SIZE (FIFO)` test.
  - Added `reportEventStatsIfNeeded logs stats every 10s` test.
  - Added `cleanup reports final stats` test.
  - Updated `processNextBroadcast` test to verify batch processing.
- `test/unit/mesh_service_v2_integration.js`:

Addressing smalruby/smalruby3-gui#500
🤖 Generated with Gemini Code
Co-Authored-By: Gemini noreply@google.com