Conversation
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
- Add dual input pins: 'audio' (Opus) and 'video' (VP9), both optional
- Add video track via VideoCodecId::VP9 with configurable width/height
- Multiplex audio and video frames using tokio::select! in receive loop
- Track monotonic timestamps across tracks (clamp to last_written_ns)
- Convert timestamps from microseconds to nanoseconds for webm crate
- Dynamic content-type: video/webm;codecs="vp9,opus" | vp9 | opus
- Extract flush logic into flush_output() helper
- Add video_width/video_height to WebMMuxerConfig
- Add MuxTracks struct and webm_content_type() const helper
- Update node registration description
- Add test: VP9 video-only encode->mux produces parseable WebM
- Add test: no-inputs-connected returns error
- Update existing tests to use new 'audio' pin name

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
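The timestamp handling listed above (microsecond to nanosecond conversion plus clamping to `last_written_ns` so interleaved tracks stay monotonic) can be sketched as below. The function name and signature are illustrative, not the actual muxer code:

```rust
// Hypothetical sketch: convert a packet timestamp from microseconds to the
// nanoseconds the webm crate expects, and clamp it so the muxer never writes
// a timestamp earlier than the last one written on any track.
fn next_timestamp_ns(timestamp_us: u64, last_written_ns: &mut u64) -> u64 {
    let ns = timestamp_us * 1_000; // us -> ns
    let clamped = ns.max(*last_written_ns); // keep timestamps monotonic across tracks
    *last_written_ns = clamped;
    clamped
}
```

A late-arriving audio packet whose converted timestamp is behind the last video write is simply clamped forward instead of producing an out-of-order WebM cluster.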
- YAML compiler: add Needs::Map variant for named pin targeting
- Color Bars Generator: SMPTE I420 source node (video::colorbars)
- MoQ Peer: video input pin, catalog with VP9, track publishing
- Frontend: generalize MSEPlayer for audio/video, ConvertView video support
- Frontend: MoQ video playback via Hang Video.Renderer in StreamView
- Sample pipelines: oneshot (color bars -> VP9 -> WebM) and dynamic (MoQ stream)

Signed-off-by: Devin AI <devin@cognition.ai>
Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
- Detect pipelines without http_input as no-input (hides upload UI)
- Add checkIfVideoPipeline helper for video pipeline detection
- Update output mode label: 'Play Video' for video pipelines
- Derive isVideoPipeline from pipeline YAML via useMemo

Signed-off-by: Devin AI <devin@cognition.ai>
Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…edia-generic UI messages

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…only pipelines

- ColorBarsNode now draws a 4px bright-white vertical bar that sweeps across the frame at 4px/frame, making motion clearly visible.
- extractMoqPeerSettings returns hasInputBroadcast so the UI can infer whether a pipeline expects a publisher.
- handleTemplateSelect auto-sets enablePublish=false for receive-only pipelines (no input_broadcast), skipping microphone access.
- decideConnect respects enablePublish in session mode instead of always forcing shouldPublish=true.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…necessary metadata clones

- Add Vp9EncoderDeadline enum (realtime/good_quality/best_quality) to Vp9EncoderConfig, defaulting to Realtime instead of the previous hard-coded VPX_DL_BEST_QUALITY.
- Store deadline in Vp9Encoder struct and use it in encode_frame/flush.
- Encoder input task: use .take() instead of .clone() on frame metadata since the frame is moved into the channel anyway.
- Decoder decode_packet: peek ahead and only clone metadata when multiple frames are produced; move it on the last iteration.
- Encoder drain_packets: same peek-ahead pattern to avoid cloning metadata on the last (typically only) output packet.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
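The peek-ahead pattern described above can be sketched as follows; the `Metadata` type and `attach_metadata` helper are hypothetical stand-ins for the encoder/decoder internals, but they show the core idea: clone metadata only while more outputs follow, and move it into the last one:

```rust
// Illustrative metadata type; the real struct carries timestamps etc.
#[derive(Clone, Debug, PartialEq)]
struct Metadata {
    sequence: u64,
}

// Clone metadata for all but the last output; move it into the final one.
// In the common single-output case this performs zero clones.
fn attach_metadata(outputs: usize, mut metadata: Option<Metadata>) -> Vec<Option<Metadata>> {
    let mut result = Vec::with_capacity(outputs);
    for i in 0..outputs {
        if i + 1 < outputs {
            result.push(metadata.clone()); // more frames follow: clone
        } else {
            result.push(metadata.take()); // last frame: move, no clone
        }
    }
    result
}
```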
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
- Add verifyVideoPlayback helper for MSEPlayer video element verification
- Add verifyCanvasRendering helper for canvas-based video frame verification
- Add convert view test: select video colorbars template, generate, verify video player
- Add stream view test: create MoQ video session, connect, verify canvas rendering

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…n text in asset mode

- mixing.yml: use 'audio' input pin for webm_muxer instead of default 'in' pin
- ConvertView: show 'Convert File' button text when in asset mode (not 'Generate')
- test-helpers: fix prettier formatting

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…ction

Replace fixed 'audio'/'video' pin names with generic 'in'/'in_1' pins that accept both EncodedAudio(Opus) and EncodedVideo(VP9). The actual media type is detected at runtime by inspecting the first packet's content_type field (video/* → video track, everything else → audio). This makes the muxer future-proof for additional track types (subtitles, data channels, etc.) without requiring pin-name changes.

Pin layout is config-driven:
- Default (no video dimensions): single 'in' pin — fully backward compatible with existing audio-only pipelines.
- With video_width/video_height > 0: two pins 'in' + 'in_1'.

Updated all affected sample pipelines and documentation.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
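The classification rule stated above (a `video/*` content type marks a video track, everything else is audio) is small enough to sketch; the function name is hypothetical:

```rust
// Hypothetical sketch of the runtime classification rule: a pin whose first
// packet advertises a `video/*` content type is treated as a video track;
// everything else, including a missing content type, defaults to audio.
fn is_video_input(content_type: Option<&str>) -> bool {
    content_type.map_or(false, |ct| ct.starts_with("video/"))
}
```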
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…input_types

Replace packet probing with connection-time media type detection. The graph builder now populates NodeContext.input_types with the upstream output's PacketType for each connected pin, so the webm muxer can classify inputs as audio or video without inspecting any packets.

Changes:
- Add input_types: HashMap<String, PacketType> to NodeContext
- Populate input_types in graph_builder (oneshot pipelines)
- Leave empty in dynamic_actor (connections happen after spawn)
- Refactor WebMMuxerNode::run() to use input_types instead of probing
- Remove first-packet buffering logic from receive loop
- Update all NodeContext constructions in test code
- Update docs to reflect connection-time detection

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…lays, and spawn_blocking

Implements the video::compositor node (PR3 from VIDEO_SUPPORT_PLAN.md):
- Dynamic input pins (PinCardinality::Dynamic) for attaching arbitrary raw video inputs at runtime
- RGBA8 output canvas with configurable dimensions (default 1280x720)
- Image overlays: decoded once at init via the `image` crate (PNG/JPEG)
- Text overlays: rasterized once per UpdateParams via `tiny-skia`
- Compositing runs in spawn_blocking to avoid blocking the async runtime
- Nearest-neighbor scaling for MVP (bilinear/GPU follow-up)
- Per-layer opacity and rect positioning
- NodeControlMessage::UpdateParams support for live parameter tuning
- Pool-based buffer allocation via VideoFramePool
- Metadata propagation (timestamp, duration, sequence) from first input

New dependencies:
- image 0.25.9 (MIT/Apache-2.0) — PNG/JPEG decoding, features: png, jpeg
- tiny-skia 0.12.0 (BSD-3-Clause) — 2D rendering, pure Rust
- base64 0.22 (MIT/Apache-2.0) — base64 decoding for image overlay data

14 tests covering compositing helpers, config validation, node integration, metadata preservation, and pool usage.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
- Fix shutdown propagation: add should_stop flag so Shutdown in the non-blocking try_recv loop properly breaks the outer loop instead of falling through to an extra composite pass.
- Fix canvas resize: remove stale canvas_w/canvas_h locals captured once at init; read self.config.width/height directly so UpdateParams dimension changes take effect immediately.
- Fix image overlay re-decode: always re-decode image overlays on UpdateParams, not only when the count changes (content/rect/opacity changes were silently ignored).
- Add video_compositor_demo.yml oneshot sample pipeline: colorbars → compositor (with text overlay) → VP9 → WebM → HTTP output.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
serde_saphyr cannot deserialize YAML with 4+ nesting levels inside params when the top-level type is an untagged enum (UserPipeline). Text/image overlays with nested rect objects trigger this limitation. Removed text_overlays from the static sample YAML. Overlays can still be configured at runtime via UpdateParams (JSON, not serde_saphyr). Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…t pipelines Mirrors the AudioMixerNode pattern: when num_inputs is set in params, pre-create input pins so the graph builder can wire connections at startup. Single input uses pin name 'in' (matching YAML convention), multiple inputs use 'in_0', 'in_1', etc. The sample pipeline now sets num_inputs: 1 so the compositor declares the 'in' pin that the graph builder expects. Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
- Colorbars node: add pixel_format config (i420 default, rgba8 supported) with RGBA8 generation + sweep bar functions
- Compositor: accept both I420 and RGBA8 inputs (auto-converts I420 to RGBA8 internally for compositing via BT.601 conversion)
- Compositor: add output_pixel_format config (rgba8 default, i420 for VP9 encoder compatibility) with RGBA8→I420 output conversion
- Sample pipeline: uses I420 colorbars → compositor (output_pixel_format: i420) → VP9 encoder → WebM muxer → HTTP output

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
The non-blocking try_recv loop was draining all queued frames and keeping only the latest per slot. When spawn_blocking compositing was slower than the producer (colorbars at 90 frames), intermediate frames were dropped, resulting in only 2 output frames. Changed to take at most one frame per slot per loop iteration so every produced frame is composited and forwarded downstream. Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
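The fix described above can be sketched with std channels (the node itself uses tokio channels, so this is illustrative only): each loop iteration takes at most one frame per input slot instead of draining the queue, so slower compositing no longer discards intermediate frames:

```rust
use std::sync::mpsc::Receiver;

// Take at most ONE queued frame per input slot per loop iteration.
// Draining with a `while let Ok(..) = rx.try_recv()` loop here is what
// dropped intermediate frames when compositing lagged behind the producer.
fn take_one_per_slot<T>(slots: &[Receiver<T>]) -> Vec<Option<T>> {
    slots.iter().map(|rx| rx.try_recv().ok()).collect()
}
```

With two frames queued on a slot, two successive iterations yield them one at a time instead of the second overwriting the first.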
- Non-first layers without explicit layers config are auto-positioned as PiP windows (bottom-right corner, 1/3 canvas size, 0.9 opacity)
- Sample pipeline now uses two colorbars sources: 640x480 I420 background + 320x240 RGBA8 PiP overlay, making compositing visually obvious

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
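The auto-positioning rule above (bottom-right, one third of the canvas) works out to a rect like the following sketch; the `Rect` type and field names are illustrative, not the compositor's actual layout types:

```rust
// Hypothetical rect type for the picture-in-picture default layout.
#[derive(Debug, PartialEq)]
struct Rect {
    x: u32,
    y: u32,
    w: u32,
    h: u32,
}

// Default PiP placement for a non-first layer with no explicit layers config:
// a window one third of the canvas size, anchored to the bottom-right corner.
fn default_pip_rect(canvas_w: u32, canvas_h: u32) -> Rect {
    let w = canvas_w / 3;
    let h = canvas_h / 3;
    Rect { x: canvas_w - w, y: canvas_h - h, w, h }
}
```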
Previously I420→RGBA8 (input) and RGBA8→I420 (output) conversions ran on the async runtime, blocking it for ~307K pixel iterations per frame per input. Now all conversions run inside the spawn_blocking task alongside compositing, keeping the async runtime free for channel ops.

- Removed ensure_rgba8() calls from frame receive paths
- Store raw frames (I420 or RGBA8) in InputSlot.latest_frame
- Added pixel_format field to LayerSnapshot
- composite_frame() converts I420→RGBA8 on-the-fly per layer
- RGBA8→I420 output conversion also runs inside spawn_blocking

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
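For context on the per-pixel cost being moved off the async runtime: a single-pixel BT.601 limited-range YUV→RGB conversion looks roughly like this sketch (standard BT.601 coefficients; the real code operates on whole I420 planes, and this helper is not StreamKit's actual implementation):

```rust
// One pixel of BT.601 limited-range YUV -> RGB. Repeated ~307K times per
// 640x480 frame per input, this is exactly the kind of CPU-bound work that
// belongs inside spawn_blocking rather than on the async runtime.
fn bt601_to_rgb(y: u8, u: u8, v: u8) -> (u8, u8, u8) {
    let y = (f32::from(y) - 16.0) * 1.164;
    let u = f32::from(u) - 128.0;
    let v = f32::from(v) - 128.0;
    let r = (y + 1.596 * v).clamp(0.0, 255.0) as u8;
    let g = (y - 0.392 * u - 0.813 * v).clamp(0.0, 255.0) as u8;
    let b = (y + 2.017 * u).clamp(0.0, 255.0) as u8;
    (r, g, b)
}
```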
…its (#159)

* fix: include view data in pipeline responses and skip zero-delta commits

Root cause: two interacting bugs caused stale compositor layout data.

A) Missing initial view data: The compositor server emits resolved layout on startup, and the engine stores it in node_view_data, but pipeline API responses never included this snapshot. New clients fell back to config-parsed positions which could disagree with the server's aspect-fit adjusted layout.

B) Zero-delta click commits: handlePointerUp fired throttledConfigChange unconditionally — even for click-to-select with zero movement. This sent stale config-parsed positions for ALL layers, overwriting the server's correct resolved layout.

Server-side:
- Add Session::get_node_view_data() to expose engine's stored view data
- Add Pipeline.view_data field (runtime-only, like Node.state)
- Populate view_data in both REST and WebSocket get_pipeline handlers

Client-side:
- Extract pipeline.view_data into sessionStore.nodeViewData on receipt
- Guard handlePointerUp: skip config commit when pointer delta is zero
- Regenerate TypeScript types for the new Pipeline.view_data field

* style: format compositorDragResize test file

* fix: preserve server geometry in Monitor view 'sync from props' effect

The 'sync from props' effect re-parses layers from params on every params change. In Monitor view, this overwrites the server's resolved layout (e.g. aspect-fit rects) with config-derived positions (e.g. rect:null → full canvas), causing a visible revert ~2s after dragging.

Fix: add preserveGeometry option to mergeOverlayState. When sessionId is set (Monitor view), existing layer geometry (x, y, width, height) is preserved from current state (set by useServerLayoutSync) instead of being overwritten by parsed params. New/removed layers still sync correctly, and non-geometric fields (opacity, zIndex, visibility, text content) continue to flow through from params.

Also fix test fixture: use camelCase property names matching LayerState interface and include all required fields.

* fix: preserve all server-resolved fields in Monitor view merge

The previous preserveGeometry fix only kept x, y, width, height from existing state while rebuilding the overlay from parsed params. This lost runtime-only fields (measuredTextWidth, measuredTextHeight) and allowed stale param-derived values for opacity, rotation, z-index, and mirror flags to overwrite the server's resolved state.

Now when preserveGeometry is true (Monitor view), mergeOverlayState starts from the full existing overlay and only overlays type-specific config fields (text, fontSize, fontName, color, dataBase64) from the parsed params. This preserves all server-resolved spatial fields AND any runtime measurements applied by useServerLayoutSync.

Adds unit tests for mergeOverlayState covering:
- OverlayBase field preservation from existing state
- Config field updates from parsed params
- Runtime field (measuredTextWidth) retention
- Referential equality when nothing changed
- Image overlay support
- New/removed overlay handling

* test: add integration tests for Monitor view compositor data flow

Seven integration tests that exercise the full data flow between useServerLayoutSync (server-driven layout) and the 'sync from props' effect (config-driven state) in Monitor view:
1. Server-resolved layer positions survive params echo-back
2. Server text measurements (measuredTextWidth) survive echo-back
3. Config changes (text, fontSize) are picked up while preserving server geometry
4. Server-resolved opacity/rotation/zIndex survive echo-back
5. Layer focus changes do not reset video layer positions
6. Layer focus changes do not reset text overlay size/position
7. Design view (control) still uses parsed positions as source of truth

These tests would have caught all four variants of the stale data regression (bugs A–D in #159).

* fix: add hasExtraChanges comparator for image overlay merge in Monitor view

Without the comparator, dataBase64 changes from other clients are silently dropped in Monitor view because the change-detection logic sees no OverlayBase field differences and returns the old array unchanged.

* test: add integration test for image overlay dataBase64 change in Monitor view

Covers the bug where missing hasExtraChanges comparator caused dataBase64 updates from other clients to be silently dropped in Monitor view.

---------

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>
- Fix import order in compositorDragResize.test.ts (react before vitest)
- Fix clippy::ignored_unit_patterns in compositor/mod.rs (_ -> ())
- Add libvpx-dev to E2E workflow for VP9 build support

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…ture

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
…erfiles

VP9 video support requires libvpx. Add the build-time dev package to all builder stages and the runtime shared library to all final images.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
- Rename single-char locals w/h to luma_w/luma_h in make_{i420,nv12}_frame
- Replace unwrap() with expect() in kernel crop tests
- Remove redundant clone in test_crop_no_zoom_returns_full_frame
Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
```
@@ -25,8 +27,10 @@ Muxes Opus audio into a WebM container. Produces streamable WebM/Opus output com
| --- | --- | --- | --- | --- |
| `channels` | `integer (uint32)` | no | `2` | Number of audio channels (1 for mono, 2 for stereo)<br />min: `0` |
| `chunk_size` | `integer (uint)` | no | `65536` | The number of bytes to buffer before flushing to the output. Defaults to 65536.<br />min: `0` |
```
🚩 WebM muxer docs still reference removed chunk_size field
The WebMMuxerConfig struct had its chunk_size field removed in this PR (the streaming mode now flushes incrementally on every frame write). However, the generated docs at docs/src/content/docs/reference/nodes/containers-webm-muxer.md:29 still list chunk_size as a parameter with default 65536. Since these docs appear to be auto-generated by gen-docs-reference.rs, the stale entry likely means the docs were not regenerated after the schema change. This is a documentation staleness issue rather than a code bug — running `just gen-docs` should fix it.
apps/skit/src/state.rs
```rust
pub async fn drain(&self, timeout: std::time::Duration) -> usize {
    let handles: Vec<JoinHandle<()>> = {
        let mut guard = self.handles.lock().await;
        std::mem::take(&mut *guard)
    };
    let count = handles.len();
    if count == 0 {
        return 0;
    }
    tracing::info!(count, "Draining background shutdown tasks");
    let _ = tokio::time::timeout(timeout, futures::future::join_all(handles)).await;
    count
}
```
🚩 ShutdownTracker drain race: tasks spawned during drain are lost
The ShutdownTracker::drain() method at state.rs:48-58 takes all handles out of the vector under the lock, then releases the lock before awaiting them. If a new session is destroyed (spawning a shutdown task via tracker.track()) during the drain await, that new handle will be added to the now-empty vector but never awaited — drain() has already moved past. In the server shutdown sequence (server.rs:3642), drain() is called before graceful_shutdown(), so there's a small window where a concurrent HTTP DELETE could spawn a task that gets orphaned. The 10-second graceful shutdown timeout provides a secondary safety net, but the task would not be explicitly awaited.
Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Loop until no new handles appear so tasks tracked during a previous batch await are not orphaned.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
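The re-check loop in this fix can be sketched with std threads (illustrative only; the real code awaits tokio JoinHandles under a timeout): after waiting on one batch, take the handle vector again and repeat, so tasks tracked while a previous batch was draining are not orphaned:

```rust
use std::sync::Mutex;
use std::thread::JoinHandle;

// Drain tracked shutdown tasks in batches. An empty batch means no new
// handles were tracked during the previous wait, so it is safe to return.
fn drain_all(handles: &Mutex<Vec<JoinHandle<()>>>) -> usize {
    let mut total = 0;
    loop {
        // Take the whole vector under the lock, then release the lock
        // before joining so new tasks can still be tracked concurrently.
        let batch: Vec<JoinHandle<()>> = std::mem::take(&mut *handles.lock().unwrap());
        if batch.is_empty() {
            return total; // nothing appeared during the previous batch: done
        }
        total += batch.len();
        for handle in batch {
            let _ = handle.join();
        }
    }
}
```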
Removes stale chunk_size parameter from WebM muxer docs and updates packet/node docs to reflect current video support. Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
```typescript
const shouldWatch = state.connectionMode === 'session' || state.enableWatch;
// NOTE: Session mode no longer implicitly enables publishing. Publishing is
// now driven entirely by `enablePublish` (which the session setup sets based
// on whether the pipeline needs client-side media inputs). This was a
// deliberate change from the old behaviour where session mode always published.
const shouldPublish = state.enablePublish;
```
🟡 Session mode decideConnect no longer enables publishing, breaking existing audio-only MoQ pipelines
In decideConnect at ui/src/stores/streamStoreHelpers.ts:132, shouldPublish changed from state.connectionMode === 'session' || state.enablePublish to just state.enablePublish. This means session-mode connections will not publish unless enablePublish is explicitly set. The StreamView sets enablePublish via setEnablePublish(moqSettings.hasInputBroadcast) at ui/src/views/StreamView.tsx:458, but only when applyMoqPeerSettings runs — which requires the YAML to be parsed and contain a transport::moq::peer node. Existing audio-only pipelines (like moq_mixing.yml) that relied on session mode implicitly enabling publishing will silently fail to publish because enablePublish defaults to true but applyMoqPeerSettings may set it to false for pipelines without input_broadcast. The comment says this is deliberate, but the MonitorView's handleStartPreview at ui/src/views/MonitorView.tsx:1905 explicitly sets previewSetEnablePublish(false) — showing awareness of this change — while the default enablePublish: true in the store initial state at ui/src/stores/streamStore.ts:132 means the old audio-only template behavior should still work. This is likely correct for receive-only pipelines but is a semantic change that could break existing users who don't update their YAML.
```typescript
private scheduleBatchFlush(): void {
  if (this.batchFlushRafId !== null) return;
  this.batchFlushRafId = requestAnimationFrame(() => this.flushBatchedUpdates());
}

private flushBatchedUpdates(): void {
  this.batchFlushRafId = null;

  // Convert pending Maps to Records and flush everything in a single
  // store mutation via batchUpdateSessionData. This ensures that all
  // WebSocket events from one animation frame produce exactly ONE
  // Zustand set() call, minimising React re-renders.
  const stateUpdates = new Map<string, Record<string, NodeState>>();
  for (const [sessionId, updates] of this.pendingNodeStates) {
    stateUpdates.set(sessionId, Object.fromEntries(updates));
  }
  this.pendingNodeStates.clear();

  const statsUpdates = new Map<string, Record<string, NodeStats>>();
  for (const [sessionId, updates] of this.pendingNodeStats) {
    statsUpdates.set(sessionId, Object.fromEntries(updates));
  }
  this.pendingNodeStats.clear();
```
🚩 WebSocket node-state batching uses requestAnimationFrame which doesn't fire in background tabs
The RAF-based batching in ui/src/services/websocket.ts:257-280 coalesces node-state and node-stats WebSocket events into a single store mutation per animation frame. However, requestAnimationFrame is throttled or suspended by browsers when the tab is in the background. This means that during background execution, the batch buffer will grow unboundedly until the user returns to the tab, at which point a single large flush occurs. For typical session loads (~10 nodes) this is negligible, but for very long background periods with active sessions this could accumulate thousands of entries. The handleSessionDestroyed method at line 214 correctly clears pending buffers for destroyed sessions, mitigating the worst case.
Includes a regression test that verifies tasks tracked during an ongoing drain batch are not orphaned. Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
* refactor(ui): extract utilities and styles from MonitorView

Phase 1 of MonitorView decomposition:
- Extract pipeline diff helpers to utils/pipelineDiff.ts
- Extract pipeline graph helpers to utils/pipelineGraph.ts
- Extract node issue utilities to utils/nodeIssues.ts
- Extract styled components to components/monitor/MonitorView.styles.ts

MonitorView.tsx reduced from 3685 to 2988 lines (~700 lines extracted).

* style: fix formatting in nodeIssues.ts

* refactor(ui): extract component modules from MonitorView

Phase 2 of MonitorView decomposition:
- Extract SessionItem, SessionInfoChip, SessionUptime, InlineCopyButton to components/monitor/SessionItem.tsx
- Extract TopControls to components/monitor/TopControls.tsx
- Extract ConnectionStatus to components/monitor/ConnectionStatus.tsx
- Extract LeftPanel to components/monitor/LeftPanel.tsx

MonitorView.tsx reduced from 2988 to 2347 lines (~640 more lines extracted). Total reduction from original 3685 to 2347 lines (36% smaller).

* style: fix formatting in MonitorView.tsx

* refactor(ui): extract Legend component and useMonitorPreview hook

Phase 2d + Phase 3a of MonitorView decomposition:
- Extract Legend to components/monitor/Legend.tsx
- Extract useMonitorPreview hook to hooks/useMonitorPreview.ts (encapsulates MoQ preview connection, teardown, and pipeline-aware config)

MonitorView.tsx reduced from 2347 to 2206 lines.

* refactor(ui): extract useAutoLayout and useNodeStatesSubscription hooks

* fix: address review items — memo comparator, background ternary, fitView timer cleanup

- TopControls: compare blocking error count (type === 'error') in addition to array length so a warning→error swap at same length triggers re-render
- MonitorView.styles: fix ConnectionStatusContainer background to differ between connected (overlay-medium) and disconnected (danger tint) states
- useAutoLayout: track fitView setTimeout via fitTimerRef and cancel on unmount for proper cleanup

---------

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>
- Reorder @xyflow/react type imports before react in useAutoLayout and useNodeStatesSubscription
- Fix alphabetical import order in MonitorView (Legend after LeftPanel, pipelineDiff/pipelineGraph after logger)

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
- quinn-proto 0.11.13 → 0.11.14 (RUSTSEC-2026-0037: DoS fix)
- wasmtime 41.0.3 → 41.0.4 (RUSTSEC-2026-0020, -0021, -0022)

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
```diff
 updateNodeState: (sessionId, nodeId, state) =>
   set((prev) => {
     const session = prev.sessions.get(sessionId);
-    if (!session) {
-      // Initialize session if it doesn't exist
-      const newSessions = new Map(prev.sessions);
-      newSessions.set(sessionId, {
-        pipeline: null,
-        nodeStates: { [nodeId]: state },
-        nodeStats: {},
-        isConnected: false,
-      });
-      return { sessions: newSessions };
-    }
+    if (!session) return prev; // Ignore updates for unknown/destroyed sessions

     const newSessions = new Map(prev.sessions);
     newSessions.set(sessionId, {
```
🚩 updateNodeState/updateNodeStats now silently ignore unknown sessions
The session store's updateNodeState and updateNodeStats were changed from auto-creating session entries to returning prev (no-op) when the session doesn't exist (ui/src/stores/sessionStore.ts:60,73). A new initSession method was added that must be called first. This is a behavioral change from the previous version where any WebSocket event would auto-create a session entry. The WebSocket service's subscribeToSession now calls initSession (ui/src/services/websocket.ts:425), and the test suite was updated accordingly. However, if any code path sends state/stats updates before initSession is called (e.g., a race between WS event delivery and subscription), those updates will be silently dropped. The RAF-based batching (scheduleBatchFlush) adds another layer where this race could manifest: events buffered before initSession is called would be flushed but silently ignored.
(Refers to lines 57-68)
omitted the Opus codec from the MIME type when video dimensions were set. This caused the oneshot HTTP Content-Type to report only "vp9" for combined audio+video pipelines, breaking MSE consumers that need to initialise an audio SourceBuffer.

Now conservatively assumes audio is always present — advertising "vp9,opus" is safe even for video-only streams (consumers simply won't find an Opus track). Includes regression tests for the static content_type() method.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
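The conservative hint described here can be sketched as follows; the function name echoes the `webm_content_type()` helper mentioned earlier in the PR, but the exact strings and signature are assumptions, not the verified implementation:

```rust
// Static content-type hint, assuming the conservative rule above: once video
// dimensions are configured, advertise BOTH codecs so MSE consumers can
// always initialise an Opus SourceBuffer; a video-only stream simply won't
// contain an Opus track, which consumers tolerate.
fn webm_content_type(has_video: bool) -> &'static str {
    if has_video {
        "video/webm;codecs=\"vp9,opus\""
    } else {
        "audio/webm;codecs=\"opus\""
    }
}
```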
resubscribeToSessions now calls initSession before re-sending the subscribe message. This prevents events from being silently dropped if the session entry was cleared during the disconnect window. Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
The WebM muxer's static content_type() hint now conservatively advertises both vp9+opus when video dims are set (to avoid breaking MSE consumers on combined A+V pipelines). Video-only oneshot pipelines should override via http_output's content_type param. Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Update documentation across the repo to reflect that basic video support (VP9 encode/decode, compositor, WebM muxing) is now implemented on the video branch, rather than being future/roadmap-only.

Changes:
- README.md: update tagline, screenshot caption, add video compositing use case, update media focus from audio-first to audio+video
- ROADMAP.md: strike through shipped video items (packet types, VP9 baseline, compositor MVP)
- docs landing page: update hero tagline, who-is-this-for, add video compositing to what-you-can-build
- docs architecture overview: add video to built-in node categories
- docs performance guide: update audio->audio/video, add VideoFramePool mention
- crates/nodes README: add video::* to node implementation list

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>
In dynamic pipelines NodeContext::input_types is always empty because connections are wired after nodes are spawned. The WebM muxer's input classification loop checked input_types to decide whether a pin carries video or audio. When empty, is_video was always false, so all inputs defaulted to audio — triggering 'multiple audio inputs detected' when a video encoder was connected.

Fix: when input_types is empty, fall back to first-packet inspection. Receive one packet from each channel and classify from the Binary packet's content_type field (e.g. 'video/vp9' → video, None → audio). Inspected packets are buffered and replayed after segment setup, reusing the existing first_video_packet replay mechanism and adding an analogous first_audio_packet replay.

Add two regression tests:
- test_webm_mux_dynamic_pipeline_classifies_inputs_from_packets: A+V with empty input_types — verifies correct classification and output.
- test_webm_mux_dynamic_pipeline_video_only: single video input with empty input_types and dimension auto-detect.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Signed-off-by: StreamKit Devin <devin@streamkit.dev> Co-Authored-By: Claudio Costa <cstcld91@gmail.com>
Summary
This PR adds initial (experimental) video support to StreamKit. It's still quite limited but should be mostly functional: