
Commit 7864251

streamer45, streamkit-devin, and devin-ai-integration[bot] authored
Initial video support (#162)
* chore: update roadmap

* feat(video): update packet types, docs, and compatibility rules

* feat(video): make raw video layout explicit + enforce aligned buffers

* feat(webm): extend muxer with VP9 video track support (PR4)

  - Add dual input pins: 'audio' (Opus) and 'video' (VP9), both optional
  - Add video track via VideoCodecId::VP9 with configurable width/height
  - Multiplex audio and video frames using tokio::select! in receive loop
  - Track monotonic timestamps across tracks (clamp to last_written_ns)
  - Convert timestamps from microseconds to nanoseconds for webm crate
  - Dynamic content-type: video/webm;codecs="vp9,opus" | vp9 | opus
  - Extract flush logic into flush_output() helper
  - Add video_width/video_height to WebMMuxerConfig
  - Add MuxTracks struct and webm_content_type() const helper
  - Update node registration description
  - Add test: VP9 video-only encode->mux produces parseable WebM
  - Add test: no-inputs-connected returns error
  - Update existing tests to use new 'audio' pin name

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat: end-to-end video pipeline support

  - YAML compiler: add Needs::Map variant for named pin targeting
  - Color Bars Generator: SMPTE I420 source node (video::colorbars)
  - MoQ Peer: video input pin, catalog with VP9, track publishing
  - Frontend: generalize MSEPlayer for audio/video, ConvertView video support
  - Frontend: MoQ video playback via Hang Video.Renderer in StreamView
  - Sample pipelines: oneshot (color bars -> VP9 -> WebM) and dynamic (MoQ stream)

  Signed-off-by: Devin AI <devin@cognition.ai>
  Signed-off-by: StreamKit Devin <devin@streamkit.dev>
  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): video-aware ConvertView for no-input pipelines

  - Detect pipelines without http_input as no-input (hides upload UI)
  - Add checkIfVideoPipeline helper for video pipeline detection
  - Update output mode label: 'Play Video' for video pipelines
  - Derive isVideoPipeline from pipeline YAML via useMemo

  Signed-off-by: Devin AI <devin@cognition.ai>
  Signed-off-by: StreamKit Devin <devin@streamkit.dev>
  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(server): allow generator-only oneshot pipelines without http_input

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(engine): allow generator-only oneshot pipelines without file_reader

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(nodes): enable video feature (vp9 + colorbars) in default features

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: generator pipeline start signals, video-only content-type, and media-generic UI messages

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix

* feat: add sweep bar animation to colorbars, skip publish for receive-only pipelines

  - ColorBarsNode now draws a 4px bright-white vertical bar that sweeps across the frame at 4px/frame, making motion clearly visible.
  - extractMoqPeerSettings returns hasInputBroadcast so the UI can infer whether a pipeline expects a publisher.
  - handleTemplateSelect auto-sets enablePublish=false for receive-only pipelines (no input_broadcast), skipping microphone access.
  - decideConnect respects enablePublish in session mode instead of always forcing shouldPublish=true.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(vp9): configurable encoder deadline (default realtime), avoid unnecessary metadata clones

  - Add Vp9EncoderDeadline enum (realtime/good_quality/best_quality) to Vp9EncoderConfig, defaulting to Realtime instead of the previous hard-coded VPX_DL_BEST_QUALITY.
  - Store deadline in Vp9Encoder struct and use it in encode_frame/flush.
  - Encoder input task: use .take() instead of .clone() on frame metadata since the frame is moved into the channel anyway.
  - Decoder decode_packet: peek ahead and only clone metadata when multiple frames are produced; move it on the last iteration.
  - Encoder drain_packets: same peek-ahead pattern to avoid cloning metadata on the last (typically only) output packet.
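A minimal sketch of the peek-ahead pattern the perf(vp9) commit describes: clone shared metadata for every output but the last, which moves it instead. `Meta` and `attach_metadata` are illustrative stand-ins here, not the actual encoder types.

```rust
// Illustrative stand-in for the frame metadata the encoder attaches.
#[derive(Clone, Debug, PartialEq)]
struct Meta {
    sequence: u64,
}

// Attach one metadata value to N outputs: clone while more outputs
// follow, move on the last one so the final clone is avoided.
fn attach_metadata(frames: Vec<&str>, meta: Meta) -> Vec<(String, Meta)> {
    let total = frames.len();
    let mut meta = Some(meta);
    let mut out = Vec::with_capacity(total);
    for (i, frame) in frames.into_iter().enumerate() {
        let m = if i + 1 < total {
            meta.clone().expect("metadata still present") // more outputs follow: clone
        } else {
            meta.take().expect("metadata still present") // last output: move, no clone
        };
        out.push((frame.to_string(), m));
    }
    out
}

fn main() {
    let tagged = attach_metadata(vec!["f0", "f1"], Meta { sequence: 7 });
    assert_eq!(tagged.len(), 2);
    assert!(tagged.iter().all(|(_, m)| m.sequence == 7));
}
```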
  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: cargo fmt

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* test(e2e): add video pipeline tests for convert and MoQ stream views

  - Add verifyVideoPlayback helper for MSEPlayer video element verification
  - Add verifyCanvasRendering helper for canvas-based video frame verification
  - Add convert view test: select video colorbars template, generate, verify video player
  - Add stream view test: create MoQ video session, connect, verify canvas rendering

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: correct webm_muxer pin name in mixing pipeline and convert button text in asset mode

  - mixing.yml: use 'audio' input pin for webm_muxer instead of default 'in' pin
  - ConvertView: show 'Convert File' button text when in asset mode (not 'Generate')
  - test-helpers: fix prettier formatting

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* refactor(webm-muxer): generic input pins with runtime media type detection

  Replace fixed 'audio'/'video' pin names with generic 'in'/'in_1' pins that accept both EncodedAudio(Opus) and EncodedVideo(VP9). The actual media type is detected at runtime by inspecting the first packet's content_type field (video/* → video track, everything else → audio). This makes the muxer future-proof for additional track types (subtitles, data channels, etc.) without requiring pin-name changes.

  Pin layout is config-driven:

  - Default (no video dimensions): single 'in' pin — fully backward compatible with existing audio-only pipelines.
  - With video_width/video_height > 0: two pins, 'in' + 'in_1'.

  Updated all affected sample pipelines and documentation.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: cargo fmt

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* refactor(webm-muxer): connection-time type detection via NodeContext.input_types

  Replace packet probing with connection-time media type detection. The graph builder now populates NodeContext.input_types with the upstream output's PacketType for each connected pin, so the webm muxer can classify inputs as audio or video without inspecting any packets.

  Changes:

  - Add input_types: HashMap<String, PacketType> to NodeContext
  - Populate input_types in graph_builder (oneshot pipelines)
  - Leave empty in dynamic_actor (connections happen after spawn)
  - Refactor WebMMuxerNode::run() to use input_types instead of probing
  - Remove first-packet buffering logic from receive loop
  - Update all NodeContext constructions in test code
  - Update docs to reflect connection-time detection

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(compositor): add video compositor node with dynamic inputs, overlays, and spawn_blocking

  Implements the video::compositor node (PR3 from VIDEO_SUPPORT_PLAN.md):

  - Dynamic input pins (PinCardinality::Dynamic) for attaching arbitrary raw video inputs at runtime
  - RGBA8 output canvas with configurable dimensions (default 1280x720)
  - Image overlays: decoded once at init via the `image` crate (PNG/JPEG)
  - Text overlays: rasterized once per UpdateParams via `tiny-skia`
  - Compositing runs in spawn_blocking to avoid blocking the async runtime
  - Nearest-neighbor scaling for MVP (bilinear/GPU follow-up)
  - Per-layer opacity and rect positioning
  - NodeControlMessage::UpdateParams support for live parameter tuning
  - Pool-based buffer allocation via VideoFramePool
  - Metadata propagation (timestamp, duration, sequence) from first input

  New dependencies:

  - image 0.25.9 (MIT/Apache-2.0) — PNG/JPEG decoding, features: png, jpeg
  - tiny-skia 0.12.0 (BSD-3-Clause) — 2D rendering, pure Rust
  - base64 0.22 (MIT/Apache-2.0) — base64 decoding for image overlay data

  14 tests covering compositing helpers, config validation, node integration, metadata preservation, and pool usage.
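The connection-time classification from the webm-muxer refactor above reduces to a small mapping over `input_types`. This is a hedged miniature: `PacketType`, `TrackKind`, and `classify_inputs` are stand-ins for the codebase's actual types, which carry more variants and payloads.

```rust
use std::collections::HashMap;

// Stand-in for the engine's PacketType, populated per connected pin.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum PacketType {
    EncodedAudio,
    EncodedVideo,
}

#[derive(Debug, PartialEq)]
enum TrackKind {
    Audio,
    Video,
}

// Classify each connected input pin at connection time, without
// probing any packets: video packet types become a video track,
// everything else becomes an audio track.
fn classify_inputs(input_types: &HashMap<String, PacketType>) -> HashMap<String, TrackKind> {
    input_types
        .iter()
        .map(|(pin, ty)| {
            let kind = match ty {
                PacketType::EncodedVideo => TrackKind::Video,
                _ => TrackKind::Audio,
            };
            (pin.clone(), kind)
        })
        .collect()
}

fn main() {
    let mut types = HashMap::new();
    types.insert("in".to_string(), PacketType::EncodedAudio);
    types.insert("in_1".to_string(), PacketType::EncodedVideo);
    let kinds = classify_inputs(&types);
    assert_eq!(kinds["in"], TrackKind::Audio);
    assert_eq!(kinds["in_1"], TrackKind::Video);
}
```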
  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: cargo fmt

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): address review findings and add sample pipeline

  - Fix shutdown propagation: add should_stop flag so Shutdown in the non-blocking try_recv loop properly breaks the outer loop instead of falling through to an extra composite pass.
  - Fix canvas resize: remove stale canvas_w/canvas_h locals captured once at init; read self.config.width/height directly so UpdateParams dimension changes take effect immediately.
  - Fix image overlay re-decode: always re-decode image overlays on UpdateParams, not only when the count changes (content/rect/opacity changes were silently ignored).
  - Add video_compositor_demo.yml oneshot sample pipeline: colorbars → compositor (with text overlay) → VP9 → WebM → HTTP output.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): use single needs variant in sample pipeline YAML

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): remove deeply nested params from sample YAML

  serde_saphyr cannot deserialize YAML with 4+ nesting levels inside params when the top-level type is an untagged enum (UserPipeline). Text/image overlays with nested rect objects trigger this limitation. Removed text_overlays from the static sample YAML. Overlays can still be configured at runtime via UpdateParams (JSON, not serde_saphyr).

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): add num_inputs for static pin pre-creation in oneshot pipelines

  Mirrors the AudioMixerNode pattern: when num_inputs is set in params, pre-create input pins so the graph builder can wire connections at startup. Single input uses pin name 'in' (matching YAML convention), multiple inputs use 'in_0', 'in_1', etc. The sample pipeline now sets num_inputs: 1 so the compositor declares the 'in' pin that the graph builder expects.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(compositor): accept I420 inputs and configurable output format

  - Colorbars node: add pixel_format config (i420 default, rgba8 supported) with RGBA8 generation + sweep bar functions
  - Compositor: accept both I420 and RGBA8 inputs (auto-converts I420 to RGBA8 internally for compositing via BT.601 conversion)
  - Compositor: add output_pixel_format config (rgba8 default, i420 for VP9 encoder compatibility) with RGBA8→I420 output conversion
  - Sample pipeline: uses I420 colorbars → compositor (output_pixel_format: i420) → VP9 encoder → WebM muxer → HTTP output

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): process every frame instead of draining to latest

  The non-blocking try_recv loop was draining all queued frames and keeping only the latest per slot. When spawn_blocking compositing was slower than the producer (colorbars at 90 frames), intermediate frames were dropped, resulting in only 2 output frames. Changed to take at most one frame per slot per loop iteration so every produced frame is composited and forwarded downstream.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(compositor): auto-PiP positioning and two-input sample pipeline

  - Non-first layers without explicit layers config are auto-positioned as PiP windows (bottom-right corner, 1/3 canvas size, 0.9 opacity)
  - Sample pipeline now uses two colorbars sources: 640x480 I420 background + 320x240 RGBA8 PiP overlay, making compositing visually obvious

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): move all pixel format conversions into spawn_blocking

  Previously I420→RGBA8 (input) and RGBA8→I420 (output) conversions ran on the async runtime, blocking it for ~307K pixel iterations per frame per input. Now all conversions run inside the spawn_blocking task alongside compositing, keeping the async runtime free for channel ops.
  - Removed ensure_rgba8() calls from frame receive paths
  - Store raw frames (I420 or RGBA8) in InputSlot.latest_frame
  - Added pixel_format field to LayerSnapshot
  - composite_frame() converts I420→RGBA8 on-the-fly per layer
  - RGBA8→I420 output conversion also runs inside spawn_blocking

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): parallelize with rayon and use persistent blocking thread

  - Add rayon as optional dependency gated on compositor feature
  - Parallelize scale_blit_rgba() across rows using rayon::par_chunks_mut
  - Split blit into blit_row_opaque (no alpha multiply) and blit_row_alpha
  - Parallelize i420_to_rgba8() and rgba8_to_i420() row processing
  - Replace per-frame spawn_blocking with persistent blocking thread via channels
  - Add CompositeWorkItem/CompositeResult types for channel communication

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* refactor(compositor): modularize into config, overlay, pixel_ops, and kernel sub-modules

  Split the 1700+ line compositor.rs into focused sub-modules:

  - config.rs: configuration types, validation, pixel format parsing
  - overlay.rs: DecodedOverlay, image decoding, text rasterization
  - pixel_ops.rs: scale_blit_rgba, blit_row*, blit_overlay, i420/rgba8 conversion
  - kernel.rs: LayerSnapshot, CompositeWorkItem/Result, composite_frame
  - mod.rs: CompositorNode, run loop, registration, tests

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): 5 high-impact video compositing optimizations

  1. Pool intermediate color conversion buffers: i420_to_rgba8_buf and rgba8_to_i420_buf write into caller-provided buffers instead of allocating fresh Vecs every frame (~34 MB/s allocation churn eliminated). Persistent scratch buffers are reused across frames in the compositing thread.
  2. I420 pass-through: when a single I420 layer fills the full canvas with no overlays and output is I420, skip the entire I420→RGBA8→I420 round-trip.
  3. Vectorize inner loops: process 4 pixels at a time in color conversion loops with hoisted row bases to help LLVM auto-vectorize.
  4. Arc overlays: wrap DecodedOverlay in Arc so per-frame clones into the CompositeWorkItem are cheap reference-count bumps instead of deep copies.
  5. Integer-only alpha blending: replace f32 blend math in blit_row_opaque and blit_row_alpha with fixed-point integer arithmetic using the ((val + (val >> 8)) >> 8) fast approximation of division by 255.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt formatting

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): fix regression — replace broken chunking with slice iterators

  The previous 4-pixel chunking approach (for chunk in 0..chunks { for i in 0..4 }) added MORE Range::next overhead instead of helping vectorization. Fixes:

  - i420_to_rgba8_buf: use chunks_exact_mut(4) on output + sub-sliced input planes to eliminate Range::next calls AND bounds checks entirely
  - rgba8_to_i420_buf Y plane: use chunks_exact(4) on input RGBA row with enumerate() instead of range-based indexing
  - I420 passthrough: return layer index instead of Arc, copy data into pooled buffer directly (Arc::try_unwrap always failed since the original frame still holds a ref, causing a wasteful .to_vec())

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt formatting

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): revert chunks_exact to simple for-loops

  chunks_exact(4).enumerate() added MORE overhead than Range::next:

  - The ChunksExact::next -> split_at_checked -> split_at_unchecked -> from_raw_parts chain consumed ~33% CPU vs the original ~14% from Range::next.
  - Enumerate::next alone was 15.33% of total CPU.

  Revert to simple 'for col in 0..w' with pre-computed row bases. The buffer pooling (optimization #1) is confirmed working well via DHAT: ~1GB alloc churn eliminated.
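The fixed-point blend from optimization #5 is easy to sketch in isolation. This is a hedged single-channel version; the real `blit_row_opaque`/`blit_row_alpha` operate over whole rows, but the arithmetic is the same `((x + (x >> 8)) >> 8)` approximation of division by 255.

```rust
// src-over-dst for one 8-bit channel using the fast fixed-point
// approximation of x / 255. Note the approximation can be off by one
// at the extremes (e.g. x = 255 * k); adding a rounding term
// ((x + (x >> 8) + 1) >> 8) makes it exact if that matters.
fn blend_channel(src: u8, dst: u8, alpha: u8) -> u8 {
    let a = alpha as u32;
    let x = src as u32 * a + dst as u32 * (255 - a);
    ((x + (x >> 8)) >> 8) as u8
}

fn main() {
    // Blending black over black stays black at any alpha.
    assert_eq!(blend_channel(0, 0, 128), 0);
    // floor((10*128 + 20*127) / 255) = floor(3820 / 255) = 14.
    assert_eq!(blend_channel(10, 20, 128), 14);
}
```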
  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): eliminate double-copy in I420 output path

  Write rgba8_to_i420_buf directly into the pooled output buffer instead of going through an intermediate scratch buffer + copy_from_slice. This removes a full extra memcpy of the I420 data every frame.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* bench: add compositor pipeline benchmark for profiling

  Adds a standalone benchmark binary that runs the compositing oneshot pipeline (colorbars → compositor → vp9 → webm → http_output) and reports wall-clock time, throughput (fps), per-frame latency, and output bytes. Supports CLI args for profiling flexibility: --width, --height, --fps, --frames, --iterations

  Usage:
    cargo bench -p streamkit-engine --bench compositor_pipeline
    cargo bench -p streamkit-engine --bench compositor_pipeline -- --frames 300 --width 1280 --height 720

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: resolve clippy lint errors in video nodes

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: resolve remaining clippy lint errors in video nodes

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: make lint pass after metadata updates

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* chore: update native plugin lockfiles

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(webm): skip intermediate flushes in File mode to prevent finalize failure

  In File mode, the SharedPacketBuffer was being drained during the mux loop via flush_output(). When segment.finalize() subsequently tried to seek backward to backpatch the EBML header (duration, cues), those bytes had already been moved out of the buffer, causing finalize to fail. Fix: guard flush_output calls with an is_file_mode flag so the entire buffer remains intact until finalize() completes. The post-finalize flush already handles emitting the complete finalized bytes.

  Also adds libvpx-dev to the CI runner's apt packages (lint, test, build jobs) so the vp9 feature compiles on GitHub Actions.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(webm): use Live mode for VP9 mux test to avoid unbounded memory

  The previous fix kept the entire WebM buffer in memory during File mode to allow finalize() backward seeks. This would cause unbounded memory growth for long streams. Instead, switch the test to Live mode (the default and intended streaming use case). Live mode uses a non-seek writer with zero-copy streaming drain, keeping memory bounded. The test assertions (EBML header, content type) don't require File mode. Reverts the is_file_mode flush guard from the previous commit since it's no longer needed.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt formatting

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): handle non-video packets and single channel close in recv_from_any_slot

  Introduces SlotRecvResult enum with Frame/ChannelClosed/NonVideo/Empty variants. The main loop now removes closed slots and skips non-video packets instead of treating any single channel close as all-inputs-closed. Also adds a comment about dropped in-flight results on shutdown (Fix #6). Optimizes overlay cloning by using Arc<[Arc<DecodedOverlay>]> instead of Vec<Arc<DecodedOverlay>> so cloning into the work item each frame is a single ref-count bump instead of a full Vec clone (Fix #8).

  Fixes: #1, #6, #8

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(webm): restore streaming-mode guard in flush_output

  Pass streaming_mode into flush_output and skip all intermediate flushes in File mode. In File mode the writer supports seeking and may back-patch segment sizes/cues, so draining the buffer after every frame would send stale bytes that get overwritten later, corrupting the output.
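The Live/File flush guard discussed in the WebM commits above can be reduced to a small decision function. This is a hedged sketch with a plain `Vec<u8>` standing in for the real SharedPacketBuffer and a `MuxMode` stand-in for the muxer's streaming-mode flag.

```rust
// Stand-in for the muxer's streaming_mode flag.
#[derive(Clone, Copy, PartialEq, Debug)]
enum MuxMode {
    Live, // non-seek writer: drain as we go, memory stays bounded
    File, // seekable writer: finalize() back-patches header/cues later
}

// Live mode drains whatever has been written so far; File mode emits
// nothing until finalize() completes, so back-patched bytes are never
// sent out stale.
fn flush_output(mode: MuxMode, buffer: &mut Vec<u8>) -> Option<Vec<u8>> {
    match mode {
        MuxMode::Live if !buffer.is_empty() => Some(std::mem::take(buffer)),
        MuxMode::Live => None,
        MuxMode::File => None,
    }
}

fn main() {
    let mut live_buf = vec![1u8, 2, 3];
    assert_eq!(flush_output(MuxMode::Live, &mut live_buf), Some(vec![1, 2, 3]));
    assert!(live_buf.is_empty()); // drained for streaming

    let mut file_buf = vec![4u8, 5];
    assert_eq!(flush_output(MuxMode::File, &mut file_buf), None);
    assert_eq!(file_buf.len(), 2); // intact until finalize()
}
```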
  Fix #2

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(moq): remove hardcoded catalog dimensions and add clean shutdown

  Thread video_width and video_height from MoqPeerConfig through to create_and_publish_catalog instead of hardcoding 640x480. Add fields to BidirectionalTaskConfig so the bidirectional path also gets the correct dimensions. Add clean shutdown when both audio and video pipeline inputs close: each input branch now explicitly handles None (channel closed), sets its rx to None, and breaks when both are done.

  Fixes #3, #4

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(vp9): improve encoder/decoder allocations and add shutdown comments

  - Change next_pts duration default from 0 to 1 so libvpx rate-control always sees a non-zero duration (Fix #5).
  - Add comment about data loss on explicit encoder shutdown (Fix #7).
  - Use Bytes::copy_from_slice in drain_packets instead of .to_vec() + Bytes::from(), avoiding an intermediate Vec allocation per encoded packet (Fix #9).
  - Use Vec::with_capacity(1) in decode_packet since most VP9 packets produce exactly one frame, avoiding a heap alloc in the common case (Fix #10).

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* refactor(video): extract shared parse_pixel_format utility

  Move the duplicated parse_pixel_format function from colorbars.rs and compositor/config.rs into video/mod.rs as a shared utility. Both modules now re-export it from the parent module. Also includes cargo fmt formatting fixes from the previous commits.

  Fix #11

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: sweep bar clipping, WebM auto-detect dims, output filename

  - colorbars: clip sweep bar at frame edge instead of wrapping via modulo, preventing the bar from appearing split across PiP boundaries
  - webm: auto-detect video dimensions from first VP9 keyframe when video_width/video_height are not configured (both 0). Parses the VP9 uncompressed header to extract width/height, buffers the first packet, and replays it after segment creation. This eliminates the need to manually keep muxer dimensions in sync with the upstream encoder.
  - ui: change download filename from 'converted_audio_converted.webm' to 'output.[ext]' when no source file is available; keep the '{name}_converted' pattern only when a real input file exists

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt to webm muxer

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf: collapse SharedPacketBuffer mutexes, bump pool max, zero-alloc compositor poll

  - Collapse triple-mutex SharedPacketBuffer into single Mutex<BufferState> to eliminate lock-ordering risk between cursor, last_sent_pos, and base_offset.
  - Bump DEFAULT_VIDEO_MAX_BUFFERS_PER_BUCKET from 8 to 16 to reduce pool misses in deep pipelines (colorbars → compositor → encoder → muxer → transport can easily have 8+ frames in flight).
  - Replace select_all + Vec<Box<Pin<Future>>> in compositor recv_from_any_slot with zero-allocation poll_fn that calls poll_recv directly on each slot receiver.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(sample): add pacer node to video compositor demo for real-time playback

  Without the pacer, colorbars in batch mode (frame_count > 0) generates all frames as fast as possible with no real-time pacing. The WebM muxer flushes each frame immediately in live mode, flooding the http_output with the entire stream faster than real-time, causing browsers to buffer heavily. Insert core::pacer between webm_muxer and http_output to release muxed chunks at the rate indicated by their duration_us metadata (~33ms per frame at 30fps), matching real-time playback expectations.
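The pacing rule from the pacer commit above is essentially "sleep for each chunk's declared duration." A hedged sketch of just the delay computation, with `pacing_delay` and its fallback parameter as illustrative names rather than the node's actual API:

```rust
use std::time::Duration;

// How long to hold a muxed chunk before releasing it downstream:
// use its duration_us metadata when present, otherwise fall back to a
// nominal frame rate (e.g. ~33,333 us per frame at 30fps).
fn pacing_delay(duration_us: Option<u64>, fallback_fps: u32) -> Duration {
    match duration_us {
        Some(us) if us > 0 => Duration::from_micros(us),
        _ => Duration::from_micros(1_000_000 / u64::from(fallback_fps.max(1))),
    }
}

fn main() {
    assert_eq!(pacing_delay(Some(33_333), 30), Duration::from_micros(33_333));
    // Missing or zero duration metadata falls back to the nominal rate.
    assert_eq!(pacing_delay(None, 30), Duration::from_micros(33_333));
}
```

In the real node the release loop would await this delay (e.g. via an async timer) between forwarding chunks.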
  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(engine): walk connection graph backwards for content-type resolution

  When passthrough-style nodes (core::pacer, core::passthrough, core::telemetry_tap, etc.) are inserted between the content-producing node and http_output, the oneshot runner previously only checked the immediate predecessor of http_output for content_type(). Since those utility nodes return None, the response fell back to application/octet-stream, causing browsers to misdetect the stream. Now the runner walks backwards through the connection graph until it finds a node that declares a content_type, so inserting any number of passthrough nodes before http_output preserves the correct MIME.

  Also suppresses clippy::significant_drop_tightening on the SharedPacketBuffer methods where the mutex guard intentionally spans the entire take-trim-update / seek-compute sequence.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): sort input slots by pin name for deterministic layer ordering

  HashMap::drain() has non-deterministic iteration order, so the compositor slots could randomly swap which input becomes the background (idx 0) vs. the PiP overlay (idx > 0). This caused two user-visible issues:

  1. Background/PiP resolution swap: the 1280×720 colorbars sometimes ended up in the PiP slot and the 320×240 in the background slot.
  2. Sweep bar appearing to extend beyond PiP boundaries: a consequence of the resolution swap — the large-resolution sweep bar interacts visually with the small-resolution background at the PiP boundary.

  Fix: sort the drained inputs numerically by their 'in_N' pin suffix before populating the slots Vec, so in_0 always comes before in_1.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(compositor): add z_index to LayerConfig for explicit layer stacking order

  Adds a z_index field (i32, default 0) to LayerConfig and LayerSnapshot. Layers are sorted by z_index before compositing — lower values are drawn first (bottom of the stack). Ties are broken by the original slot order. Auto-PiP layers without explicit config get z_index = slot index (so background = 0, first PiP = 1, etc.). Explicit LayerConfig entries can override this to reorder layers at will, including via UpdateParams at runtime. This decouples visual stacking order from pin connection order, which is the correct separation of concerns for a compositor.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt to compositor z_index changes

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf: review fixes — temp file for WebM File mode, Arc unwrap, rayon threshold, saturating sub, config struct

  - Fix #6: Use saturating_sub for MoQ Peer subscriber count to prevent underflow
  - Fix #11: Skip memcpy in I420 passthrough when Arc has sole ownership (try_unwrap)
  - Fix #12: Add minimum-row threshold for rayon parallel pixel ops (skip dispatch for small canvases)
  - Fix #19: WebM File mode uses on-disk temp file (FileBackedBuffer) instead of unbounded in-memory Vec
  - Fix #24: Group subscriber params into SubscriberMediaConfig struct, reducing argument counts
  - Add MuxBuffer enum to unify Live (SharedPacketBuffer) and File (FileBackedBuffer) buffer types
  - Add tempfile to webm feature gate in Cargo.toml

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(compositor): sweep_bar toggle, fontdue text rendering, rotation, signed coords

  - Add sweep_bar bool to ColorBarsConfig (default true) to gate the animated vertical bar; set false on background to prevent visual bleed through PiP overlays.
  - Replace placeholder rectangle glyphs with real font rendering via fontdue 0.9. Supports font_path, font_data_base64, and falls back to system DejaVu Sans. Coverage-based alpha-over compositing.
  - Change Rect.x/y from u32 to i32 for signed (off-screen) positioning. scale_blit_rgba now clips negative source offsets correctly.
  - Add rotation_degrees (f32, clockwise) to LayerConfig/LayerSnapshot. New scale_blit_rgba_rotated() uses inverse-affine mapping with nearest-neighbor sampling over the axis-aligned bounding box.
  - Update oneshot demo YAML: sweep_bar false on background, explicit layer config with PiP rect at (380,220) 240x180, rotated 15 degrees.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt formatting

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(demo): add text overlay layer with bundled DejaVu Sans font

  Add a third layer to the compositor demo: a 'StreamKit Demo' text overlay rendered with fontdue using the bundled DejaVu Sans font.

  - Bundle DejaVu Sans TTF in assets/fonts/ with its Bitstream Vera license file.
  - Update demo YAML to include text_overlays with font_path pointing to the bundled font, white text at (20,20), 32px.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: work around serde_saphyr untagged enum limitation for nested YAML

  serde_saphyr fails to deserialize deeply nested structures (sequences of objects with nested objects, maps with nested objects) when they appear inside #[serde(untagged)] enums. Add parse_yaml() helper to streamkit_api::yaml that uses a two-step approach: YAML -> serde_json::Value -> UserPipeline. This bypasses the serde_saphyr limitation by using serde_json's deserializer for the untagged enum dispatch.

  Update all three call sites that directly deserialized YAML into UserPipeline:

  - samples.rs: parse_pipeline_metadata()
  - server.rs: create_session_handler()
  - server.rs: parse_config_field()

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt to server.rs

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(demo): move PiP overlay positioning to the left

  Move the PiP overlay x-coordinate from 380 to 100 so the main canvas blue bar (rightmost SMPTE bar) remains clearly visible and is not obscured by the overlapping PiP layer.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* refactor(colorbars): remove sweep_bar parameter entirely

  Remove the sweep_bar config field, its default function, and both draw_sweep_bar_i420/draw_sweep_bar_rgba8 rendering functions. Also remove the sweep_bar: false reference from the compositor demo YAML. The sweep bar feature is being simplified out for now.

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(compositor): add draw_time option with millisecond precision

  When draw_time is true the compositor renders the current wall-clock time (HH:MM:SS.mmm) in the bottom-left corner of every composited frame using a pre-loaded monospace font (DejaVu Sans Mono).

  - Add draw_time and draw_time_font_path fields to CompositorConfig
  - Add load_font_from_path() and rasterize_text_with_font() to overlay
  - Pre-load font once during init; rasterize per frame in the main loop
  - Pull DejaVu Sans Mono (royalty-free) into assets/fonts/
  - Enable draw_time in the demo pipeline YAML

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt to draw_time changes

  Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): add edge anti-aliasing for rotated layers

  Replace the hard binary contains() inside/outside test in scale_blit_rgba_rotated() with a signed-distance-to-edge approach. For each destination pixel the signed distance to all four edges of the un-rotated rectangle is computed. Pixels well inside (dist >= 1) get full alpha; edge pixels (0 < dist < 1) get fractional coverage proportional to the distance; pixels outside (dist <= 0) are skipped. This smooths the staircase zig-zag artifacts on rotated overlay borders. The bounding box is also expanded by 1px on each side to include the anti-aliased fringe.
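The coverage rule from the anti-aliasing fix above boils down to clamping the minimum signed edge distance. A hedged sketch, with `edge_coverage` as an illustrative helper name (the real code computes the four distances inside the rotated-blit loop):

```rust
// Alpha scale for one destination pixel given its signed distances to
// the four edges of the un-rotated rect: <= 0 is outside (skipped),
// 0..1 is the anti-aliased fringe, >= 1 is fully inside.
fn edge_coverage(left: f32, right: f32, top: f32, bottom: f32) -> f32 {
    left.min(right).min(top).min(bottom).clamp(0.0, 1.0)
}

fn main() {
    assert_eq!(edge_coverage(5.0, 5.0, 5.0, 5.0), 1.0); // interior pixel: full alpha
    assert_eq!(edge_coverage(0.5, 5.0, 5.0, 5.0), 0.5); // fringe pixel: fractional coverage
    assert_eq!(edge_coverage(-1.0, 5.0, 5.0, 5.0), 0.0); // outside: skipped
}
```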
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* refactor(colorbars): move draw_time from compositor to colorbars generator

The draw_time feature belongs in the source frame generator (ColorBarsNode), not the composition layer, consistent with how sweep_bar was previously implemented.

- Add draw_time + draw_time_font_path fields to ColorBarsConfig
- Implement per-frame wall-clock stamping (HH:MM:SS.mmm) in ColorBarsNode using fontdue, supporting both RGBA8 and I420 pixel formats
- Remove draw_time logic from CompositorConfig/CompositorNode entirely
- Remove unused load_font_from_path and rasterize_text_with_font from overlay
- Add fontdue dependency to the colorbars feature
- Update demo YAML to configure draw_time on colorbars_bg node

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* refactor: deduplicate and improve video subsystem code quality

- Extract shared mux_frame() helper in webm.rs (~120 lines reduced)
- Extract generic codec_forward_loop() for VP9 encoder/decoder (~300 lines)
- Extract shared blit_text_rgba() utility in video/mod.rs
- Parallelize rotated blit with rayon (row-level, RAYON_ROW_THRESHOLD)
- Document packed layout assumption in pixel format conversions
- Share DEFAULT_VIDEO_FRAME_DURATION_US constant (webm + moq peer)
- Share accepted_video_types() in compositor (definition_pins + make_input_pin)

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(ui): add compositor node UI with draggable layer canvas

Add visual compositor node UI that allows users to manipulate compositor layers on a scaled canvas.

Features include:
- Draggable, resizable layer boxes with position/size handles
- Opacity, rotation, and z-index sliders per selected layer
- Zero-render drag via refs + requestAnimationFrame for smooth UX
- Full config updates via new tuneNodeConfig callback
- Staging mode support (batch changes or live updates)
- LIVE indicator matching AudioGainNode pattern

New files:
- useCompositorLayers.ts: Hook for layer state management
- CompositorCanvas.tsx: Visual canvas component
- CompositorNode.tsx: ReactFlow node component

Modified files:
- useSession.ts: Add tuneNodeConfig for full-config updates
- reactFlowDefaults.ts: Register compositor node type
- FlowCanvas.tsx: Add compositor to nodeTypes type
- MonitorView.tsx: Map video::compositor kind, thread onConfigChange
- DesignView.tsx: Map video::compositor kind with defaults

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): collapse unscaled height in compositor canvas via negative margin

CSS transform: scale() does not affect the layout box, causing the outer container to reserve the full unscaled height (e.g. 720px). Add marginBottom: canvasHeight * (scale - 1) to collapse the extra space so the compositor node fits tightly in the ReactFlow canvas.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): map video::compositor type in YAML pipeline parser

The YAML parser hardcoded all non-gain nodes to 'configurable' type, so compositor nodes imported via YAML would not get the custom CompositorNode UI. Add the same kind-to-type mapping used in DesignView and MonitorView.
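The HH:MM:SS.mmm wall-clock stamp introduced in the draw_time commits above can be derived from the standard library alone. A minimal sketch (UTC here; the real node presumably uses local time and rasterizes the string with fontdue):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Format milliseconds-since-midnight as HH:MM:SS.mmm.
fn format_hms_millis(ms: u64) -> String {
    format!(
        "{:02}:{:02}:{:02}.{:03}",
        (ms / 3_600_000) % 24, // hours
        (ms / 60_000) % 60,    // minutes
        (ms / 1_000) % 60,     // seconds
        ms % 1_000             // milliseconds
    )
}

fn main() {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).unwrap();
    // 86_400_000 ms per day; remainder is the time-of-day component.
    let stamp = format_hms_millis(now.as_millis() as u64 % 86_400_000);
    println!("{stamp}");
}
```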
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): enable compositor layer interactions in Design View

- Wire up onParamChange in useCompositorLayers so layers are interactive when editing pipelines in Design View (not just live sessions)
- Trigger YAML regeneration on param changes with feedback loop guard
- Defer YAML regeneration via queueMicrotask to avoid React setState during render warning

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: format useCompositorLayers.ts

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat: add Video Compositor (MoQ Stream) pipeline template

Adds a sample dynamic pipeline that composites two colorbars sources through the compositor node and streams the result via MoQ (WebTransport).

Pipeline chain: colorbars_bg + colorbars_pip → compositor (2 inputs) → VP9 encoder → MoQ peer (output broadcast).

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(ui): Complete compositor UX improvements

- Fix YAML pipeline loading: infer compositor output_pixel_format (I420/Rgba8)
- Fix wildcard null matching in canConnectPair for dimension compatibility
- Fix map-style needs parsing in YAML pipeline loader ({pin: node} format)
- Replace Z-index slider with numeric input + bring forward/backward buttons
- Add text overlay management UI (add/remove with default params)
- Add image overlay management UI integrated with asset upload system
- Add collapsible Output Preview panel in Monitor View

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): prevent compositor node overlap in auto-layout

Add estimated height (500px) for video::compositor node kind to prevent overlapping with downstream nodes during auto-layout positioning.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(ui): compositor UX improvements - layer rendering, floating preview, YAML highlighting

- Render text overlays with actual text content and scaled font in compositor canvas
- Render image overlays as distinct colored rectangles with icon badge
- Apply golden-angle hue spacing for visual layer distinction
- Add layer name overlay and dimension labels on each layer
- Add per-layer controls: opacity slider, rotation slider, z-index with stack buttons
- Replace title tooltips with SKTooltip in overlay remove buttons
- Add useCompositorSelection hook for cross-component layer selection sync
- Highlight selected compositor layer's YAML range in YamlPane
- Redesign output preview from bottom-docked panel to floating draggable window
- Style numeric inputs with design system tokens (borders, focus ring, hidden spinners)
- Fix ESLint import ordering and unused variable warnings

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor,vp9): eliminate format bounce and add SSE2 SIMD (#62)

* perf(compositor,vp9): eliminate format bounce and add SSE2 SIMD

- Compositor now always outputs RGBA8, removing the per-frame rgba8_to_i420_buf call from the compositing thread (~24% CPU).
- VP9 encoder accepts both RGBA8 and I420 inputs; when receiving RGBA8 it converts to I420 on its own blocking thread, pipelining the conversion with the compositor's next frame.
- Added SSE2 SIMD paths for i420_to_rgba8_buf and rgba8_to_i420_buf (Y-plane and chroma subsampling), processing 8 pixels per iteration with scalar fallback for tail pixels and non-x86 targets.
- Removed try_i420_passthrough optimisation (no longer needed since the compositor always works in RGBA8).
- Simplified CompositeResult to a single rgba_data field.
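The SIMD conversion kernels above vectorize standard limited-range BT.601 math. A scalar reference sketch follows; the 298/409/516 coefficients appear in the commits, while 100 and 208 are the conventional companion constants (assumed here, not taken from the source):

```rust
/// Scalar limited-range BT.601 YUV -> RGB step (illustrative reference,
/// not the project's actual kernel).
fn yuv_to_rgb(y: u8, u: u8, v: u8) -> (u8, u8, u8) {
    let c = 298 * (i32::from(y) - 16);
    let d = i32::from(u) - 128;
    let e = i32::from(v) - 128;
    let clamp = |x: i32| x.clamp(0, 255) as u8;
    let r = clamp((c + 409 * e + 128) >> 8);
    let g = clamp((c - 100 * d - 208 * e + 128) >> 8);
    let b = clamp((c + 516 * d + 128) >> 8);
    (r, g, b)
}

fn main() {
    // Intermediate products alone exceed i16 range, which is why the
    // kernels need 32-bit lanes: 516 * 255 = 131_580 > 32_767.
    assert!(516 * 255 > i32::from(i16::MAX));
    // Nominal black (Y=16) and white (Y=235) endpoints.
    assert_eq!(yuv_to_rgb(16, 128, 128), (0, 0, 0));
    assert_eq!(yuv_to_rgb(235, 128, 128), (255, 255, 255));
}
```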
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): fix i16 overflow in SIMD color conversions, use i32 arithmetic

Both i420_to_rgba8_row_sse2 and rgba8_to_y_row_sse2 now use 32-bit arithmetic throughout to avoid silent truncation when BT.601 coefficients (298, 409, 516, 129) are multiplied by pixel values (0-255). The products can reach ~131,580, well beyond i16::MAX (32,767).

Changes:
- i420_to_rgba8_row_sse2: process 4 pixels/iter in i32 (was 8 in i16)
- rgba8_to_y_row_sse2: process 4 pixels/iter in i32 (was 8 in i16)
- New mul32_sse2 helper: SSE2-compatible i32 multiply via _mm_mul_epu32 with even/odd lane shuffling
- Add 3 equivalence tests: SIMD-vs-scalar for both directions + roundtrip

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): fix chroma averaging bug and remove stale output_pixel_format

- rgba8_to_chroma_row_sse2: simplified horizontal pair extraction to _mm_packs_epi32(r_sum, zero) instead of complex mask-shift-pack that dropped every other 2x2 chroma block (causing visible vertical banding)
- Removed stale output_pixel_format: i420 from video_compositor_demo.yml and compositor benchmark (now silently ignored, always outputs RGBA8)
- Removed unused imports (_mm_srli_si128, _mm_set_epi32) from chroma fn

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply cargo fmt to chroma averaging fix

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

---------

Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* feat: NV12 as default video format (#63)

* feat: add NV12 as default video format

- Add PixelFormat::Nv12 variant to core type system with VideoLayout plane math for 2-plane NV12 (Y + interleaved UV)
- Update parse_pixel_format to accept 'nv12' format string
- Change default pixel_format across nodes from 'i420' to 'nv12'
- VP9 decoder: output NV12 by interleaving libvpx's I420 U/V planes
- VP9 encoder: accept NV12 via VPX_IMG_FMT_NV12 (zero-conversion path)
- Compositor: add nv12_to_rgba8_buf conversion with SSE2 SIMD reuse
- Colorbars: add NV12 generation and time-stamp support
- Update test utilities for NV12 chroma initialization

NV12's interleaved UV plane is more cache-friendly for RGBA conversion kernels, and the encoder can consume NV12 directly without format conversion, making the single-layer passthrough path faster.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix: validate chroma stride before cast, update decoder description to NV12

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf: use thread-local scratch buffers in nv12_to_rgba8_buf SIMD path

Replace per-row Vec allocations with thread_local! RefCell<Vec<u8>> scratch buffers that are allocated once per thread and reused across rows. Eliminates ~2×height heap allocations per frame (e.g. 2160 allocs/frame at 1080p) while preserving correctness under both sequential and rayon parallel execution.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(nodes): eliminate NV12↔RGBA8 conversion overhead in compositor pipeline (#65)

* perf(nodes): eliminate NV12↔RGBA8 conversion overhead in compositor pipeline

Two targeted fixes for the hot paths identified in CPU profiling:

1. nv12_to_rgba8_buf: Replace thread-local scratch buffer deinterleaving with a dedicated nv12_to_rgba8_row_sse2 kernel that reads NV12's interleaved UV plane directly. Eliminates per-row RefCell borrow_mut and LocalKey::try_with overhead (~50% of profiled CPU time).
2. VP9 encoder: Convert RGBA8→NV12 instead of RGBA8→I420 so the encoder can feed VPX_IMG_FMT_NV12 to libvpx directly, matching the pipeline's native NV12 format and avoiding the I420 detour (~28% of profiled CPU). Adds rgba8_to_nv12_buf() for the new output path.
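The 2-plane NV12 layout named above (full-resolution Y plane plus one interleaved UV plane subsampled 2x2) has simple plane math; a sketch with illustrative names, not the project's actual VideoLayout type:

```rust
/// Byte sizes of the two NV12 planes for an even-dimensioned frame.
fn nv12_plane_sizes(width: usize, height: usize) -> (usize, usize) {
    assert!(width % 2 == 0 && height % 2 == 0, "NV12 requires even dimensions");
    let y_size = width * height;
    // One U byte + one V byte per 2x2 block of luma samples,
    // stored interleaved (UVUVUV...) directly after the Y plane.
    let uv_size = (width / 2) * (height / 2) * 2;
    (y_size, uv_size)
}

fn main() {
    let (y, uv) = nv12_plane_sizes(1920, 1080);
    // 1.5 bytes per pixel total, same footprint as I420.
    assert_eq!(y + uv, 1920 * 1080 * 3 / 2);
    println!("Y: {y} bytes at offset 0, UV: {uv} bytes at offset {y}");
}
```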
Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(nodes): add SSE4.1 fast-path kernels for color-space conversion

Replace 7-instruction mul32_sse2 emulation with single-instruction _mm_mullo_epi32 in three hot kernels identified by pprof (mul32_sse2 was 26.49% CPU):

- i420_to_rgba8_row_sse41: 6 native multiplies per pixel
- nv12_to_rgba8_row_sse41: 6 native multiplies per pixel
- rgba8_to_y_row_sse41: 3 native multiplies per pixel

All _buf callers now runtime-detect SSE4.1 and prefer it, falling back to SSE2 on older hardware. Identical color-space math; no functional change.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

---------

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* docs: update VP9 encoder registration to mention NV12 input format

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

---------

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>
Co-authored-by: staging-devin-ai-integration[bot] <166158716+staging-devin-ai-integration[bot]@users.noreply.github.com>

* perf: enable thin LTO, codegen-units=1, and target-cpu=native for profiling (#66)

- Add lto = "thin" and codegen-units = 1 to [profile.release] in Cargo.toml for cross-crate inlining and maximum LLVM optimisation.
- Add -C target-cpu=native to build-skit-profiling and skit-profiling so CPU profiles reflect host-tuned codegen.
- Add new build-skit-native target for max-perf local builds tuned to the build host's microarchitecture.
- Docker/CI release builds remain portable (no target-cpu=native in Cargo.toml or .cargo/config.toml).
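The release-profile settings named in the LTO commit above look roughly like this in Cargo.toml (a sketch of the stated options; exact placement and any other profile keys in this repo are assumed):

```toml
[profile.release]
lto = "thin"        # thin LTO: cross-crate inlining at moderate link-time cost
codegen-units = 1   # single codegen unit: maximum LLVM optimisation scope
```

The `-C target-cpu=native` flag is deliberately kept out of this file and passed only by the profiling/native build targets (e.g. via RUSTFLAGS), so release artifacts stay portable across CPUs.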
Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): implement findings 1+4, 2, 5, and 3 for video compositor optimizations (#67)

* perf(compositor): implement findings 1+4, 2, 5, and 3 for video compositor optimizations

- Finding 1+4: Incremental stepper + interior AA skip in scale_blit_rgba_rotated. Replace per-pixel multiplies with adds by stepping local_x/local_y incrementally. When min_dist >= 2.0, batch interior pixels skipping coverage math entirely.
- Finding 2: NV12 interleaved-output SIMD chroma kernel (SSE2). New rgba8_to_chroma_row_nv12_sse2 with interleaved U/V store via _mm_unpacklo_epi8. Wired into rgba8_to_nv12_buf conversion path.
- Finding 5: Rayon row chunking (8-row blocks). Replace per-row rayon tasks with 8-row chunks across all dispatch sites (rotated blit, i420/nv12 conversions) to reduce scheduling overhead.
- Finding 3: AVX2 Y-plane kernel (8 pixels/iter). New rgba8_to_y_row_avx2 using 256-bit registers, wired with AVX2 > SSE4.1 > SSE2 priority in both I420 and NV12 Y-plane conversion paths.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): use copy_nonoverlapping instead of _mm_storeu_si128 in NV12 chroma kernel

_mm_storeu_si128 writes 16 bytes but only 8 are valid (4 UV pairs), causing out-of-bounds writes on the last chroma row. Use copy_nonoverlapping with explicit 8-byte length, matching the I420 chroma kernel's store pattern.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): bound dst_region slice and add rationale comments for cast suppressions

- Bound dst_region to bb_rows * row_stride to avoid dispatching rayon tasks beyond the bounding box rows.
- Add explanatory comments for #[allow(clippy::cast_possible_wrap)] per AGENTS.md linting discipline requirements.
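The 8-row chunking in Finding 5 amounts to splitting the image's rows into fixed-size blocks so each rayon task amortizes scheduling overhead over several rows. A standalone sketch (illustrative helper; the real dispatch uses rayon's parallel iterators over the chunk ranges):

```rust
/// Split `total_rows` into half-open (start, end) row ranges of at most
/// `chunk` rows each.
fn chunk_rows(total_rows: usize, chunk: usize) -> Vec<(usize, usize)> {
    (0..total_rows)
        .step_by(chunk)
        .map(|start| (start, (start + chunk).min(total_rows)))
        .collect()
}

fn main() {
    // 1080 rows in 8-row blocks -> 135 tasks instead of 1080.
    assert_eq!(chunk_rows(1080, 8).len(), 135);
    // The final chunk is clamped to the image height.
    assert_eq!(chunk_rows(20, 8), vec![(0, 8), (8, 16), (16, 20)]);
}
```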
Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): early-out when bounding box is empty (off-screen rect)

When a rotated layer is entirely off-screen, bb_y1 < bb_y0 or bb_x1 < bb_x0. The subtraction (bb_y1 - bb_y0) as usize would wrap to a huge value, causing a panic on the bounded dst_region slice. Add an early return guard before the subtraction.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>

---------

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* style(compositor): fix clippy and rustfmt lint issues in SIMD kernels

- Remove empty line between doc comment blocks for rayon_chunk_rows
- Replace manual div_ceil with .div_ceil() method
- Apply rustfmt formatting to AVX2 import blocks and comments

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): cache available_parallelism in LazyLock for rayon_chunk_rows

available_parallelism() issues a sysconf(_SC_NPROCESSORS_ONLN) syscall on every call (~40µs on Linux). Cache the result in a static LazyLock so subsequent calls are a simple atomic load (~0.7ns).

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style(compositor): apply rustfmt to LazyLock closure

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): correct AVX2 lane-crossing in chroma kernels

_mm256_packs_epi32 operates per 128-bit lane, so packing two different source registers (r_v_a, r_v_b) scrambles the element order — qwords 1 and 2 are swapped. This caused chroma samples to be spatially displaced, producing visible horizontal tearing artifacts on composited overlays.

Fix: apply _mm256_permute4x64_epi64(result, 0xD8) (vpermq) immediately after each cross-source pack to restore sequential element ordering. Both rgba8_to_chroma_row_nv12_avx2 and rgba8_to_chroma_row_avx2 are fixed (3 permutes each — one per R, G, B channel).

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* feat(colorbars): default output pixel format to RGBA8

RGBA8 is more convenient and efficient for compositing workflows since the compositor operates in RGBA8 internally — no format conversion needed. Pipelines that feed colorbars directly into VP9 (without a compositor) now specify pixel_format: nv12 explicitly to avoid an unnecessary RGBA8→NV12 conversion inside the encoder.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): add AVX2 NV12→RGBA8 kernel and hoist CPU feature detection

- Implement nv12_to_rgba8_row_avx2: processes 8 pixels per iteration (double SSE4.1 throughput) using 256-bit i32 arithmetic with drop-to-SSE pack/interleave to avoid lane-crossing issues
- Wire AVX2 kernel into nv12_to_rgba8_buf with SSE4.1 tail handling
- Hoist is_x86_feature_detected!() calls outside per-row closures in all 4 conversion functions (i420_to_rgba8_buf, nv12_to_rgba8_buf, rgba8_to_i420_buf, rgba8_to_nv12_buf) to detect once at function start and capture in variables

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): algorithmic optimizations, SSE2 blend + microbenchmark (#68)

* perf(compositor): add compositor-only microbenchmark

Adds a standalone benchmark that measures composite_frame() in isolation (no VP9 encode, no mux, no async runtime overhead).

Scenarios:
- 1/2/4 layers RGBA
- Mixed I420+RGBA and NV12+RGBA (measures conversion overhead)
- Rotation (measures rotated blit path)
- Static layers (same Arc each frame, for future cache-hit measurement)

Runs at 640x480, 1280x720, 1920x1080 by default.

Baseline results on this VM (8 logical CPUs):
- 1920x1080 1-layer-rgba: ~728 fps (1.37 ms/frame)
- 1920x1080 2-layer-rgba-pip: ~601 fps (1.66 ms/frame)
- 1920x1080 2-layer-i420+rgba: ~427 fps (2.34 ms/frame)
- 1920x1080 2-layer-nv12+rgba: ~478 fps (2.09 ms/frame)
- 1920x1080 2-layer-rgba-rotated: ~470 fps (2.13 ms/frame)

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply rustfmt to compositor_only benchmark

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): cache YUV→RGBA conversions + skip canvas clear

Optimization 1: Add ConversionCache that tracks Arc pointer identity per layer slot. When the source Arc<PooledVideoData> hasn't changed between frames, the cached RGBA data is reused (zero conversion cost). Replaces the old i420_scratch buffer approach.

Optimization 2: Skip buf.fill(0) canvas clear when the first visible layer is opaque, unrotated, and fully covers the canvas dimensions. Saves one full-canvas memset per frame in the common case.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): precompute x-map to eliminate per-pixel division

Optimization 3: Replace per-pixel `(dx + src_col_skip) * sw / rw` integer division in blit_row_opaque/blit_row_alpha with a single precomputed lookup table (x_map) built once per scale_blit_rgba call. Each destination column now does a table lookup instead of a division, removing O(width * height) divisions per layer per frame.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): add identity-scale fast path for 1:1 opaque blits

Optimization 4: When source dimensions match the destination rect, opacity is 1.0, and there's no clipping offset, bypass the x-map lookup entirely. For fully-opaque source rows, use bulk memcpy (copy_from_slice). For rows with semi-transparent pixels, use a simplified per-pixel blend without the scaling indirection.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): pre-scale image overlays at decode time

Optimization 5: When a decoded image overlay's native dimensions differ from its target rect, pre-scale it once using nearest-neighbor at config/update time. This ensures the per-frame blit_overlay call hits the identity-scale fast path (memcpy) instead of re-scaling every frame.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): cache layer configs and skip per-frame sort

Optimization 6: Extract per-slot layer config resolution and z-order sorting into a rebuild_layer_cache() function that runs only when config or pin set changes (UpdateParams, pin add/remove, channel close). Per-frame layer building now uses the cached resolved configs and pre-sorted draw order instead of doing HashMap lookups and sort_by on every frame.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(frame_pool): preallocate video pool buckets at startup

Optimization 7: Change video_default() from with_buckets (lazy, no preallocation) to preallocated_with_max with 2 buffers per bucket. This avoids cold-start allocation misses for the first few frames, matching the existing audio_default() pattern.
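The x-map from Optimization 3 above can be sketched as a standalone helper: the source column for every destination column is computed once per blit, so the inner row loops do a table lookup instead of an integer division per pixel (names are illustrative):

```rust
/// Precompute the source column index for each destination column of a
/// horizontal scale, per Optimization 3's `(dx + src_col_skip) * sw / rw`.
fn build_x_map(rect_w: usize, src_w: usize, src_col_skip: usize) -> Vec<usize> {
    (0..rect_w)
        .map(|dx| (dx + src_col_skip) * src_w / rect_w)
        .collect()
}

fn main() {
    // Downscaling an 8px-wide source row into a 4px-wide rect:
    // each destination column reads every second source pixel.
    assert_eq!(build_x_map(4, 8, 0), vec![0, 2, 4, 6]);
    // Identity scale maps columns 1:1, which is what the later
    // identity-scale fast path detects and bypasses entirely.
    assert_eq!(build_x_map(3, 3, 0), vec![0, 1, 2]);
}
```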
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style(compositor): fix clippy warnings from optimization changes

- Use map_or instead of match/if-let-else in ConversionCache and first_layer_covers_canvas
- Allow expect_used with safety comment in get_or_convert
- Allow dead_code on LayerSnapshot::z_index (sorting moved upstream)
- Allow needless_range_loop in blit_row_opaque/blit_row_alpha (dx used for both x_map index and dst offset)
- Allow cast_possible_truncation on idx as i32 in rebuild_layer_cache

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor): address correctness + bench issues from review

- Fix #1 (High): skip-clear now validates source pixel alpha (all pixels must have alpha==255) before skipping canvas clear. Prevents blending against stale pooled buffer data when RGBA source has transparency.
- Fix #2 (Medium): conversion cache slot indices now use position in the full layers slice (with None holes) via two-pass resolution, so cache keys stay stable when slots gain/lose frames.
- Fix #3 (Medium): benchmark now calls real composite_frame() kernel instead of reimplementing compositing inline. Exercises all kernel optimizations (cache, clear-skip, identity fast-path, x-map).
- Fix Devin Review: revert video pool preallocation (was allocating ~121MB across all bucket sizes at startup). Restored lazy allocation.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* style: apply rustfmt to fix formatting

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* perf(compositor): SSE2 blend, alpha-scan cache, bench pool, lazy prealloc

Fix 4 remaining performance findings:

1. High: Add SSE2 SIMD fast path for RGBA blend loops (blit_row_opaque, blit_row_alpha). Processes 4 pixels at a time with fast-paths for fully-opaque (direct copy) and fully-transparent (skip) source pixels.
2. Medium: Optimize alpha scan in clear-skip check — skip scan entirely for I420/NV12 layers (always alpha=255 after conversion), cache scan result by Arc pointer identity for RGBA layers.
3. Medium: Pass VideoFramePool to bench_composite instead of None, so benchmark exercises pool reuse like production.
4. Low-Medium: Lazy preallocate on first bucket use — when a bucket is first hit, allocate one extra buffer so the second get() is a hit.

Also: inline clear-skip logic to fix borrow checker conflict, remove unused first_layer_covers_canvas function, add clippy suppression rationale comments for needless_range_loop.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

---------

Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* feat(compositor-ui): UX improvements for video compositor (#69)

* feat(compositor-ui): UX improvements for video compositor

- Fix preview panel drag bug (inverted Y-axis)
- Fix text/image overlay dragging (extend drag to all layer types)
- Add visibility toggle (eye icon) to all layer types
- Unified layer list showing all layers sorted by z-index
- Visibility-aware canvas rendering (hidden layers show faintly)
- Conditional preview panel (only shows when there's something to preview)
- Fullscreen toggle for preview panel
- Preview activation button in Monitor view top bar (watch-only MoQ)

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): address 5 UX issues from testing feedback

1. Fix rotation stretching: add transform-origin: center center to LayerBox
2. In-place text editing: double-click text overlay to edit inline on canvas
   - Disable resize handles for text layers (size controlled by font-size)
3. Fix overlay removal caching: add timestamp guard to prevent stale params from overwriting local overlay changes during sync
4. Consolidate overlays into unified layers: merge overlay add/remove/edit controls into UnifiedLayerList, remove separate OverlayList from render
5. Resizable preview panel: add left/top edge drag handles to resize panel

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): remove text layer padding and use indexed labels

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): address review bot findings (escape cancel, visibility sync, memo deps)

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): guard double-commit on Enter and preserve overlay visibility on re-sync

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): preserve video layer opacity on visibility re-sync

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): clear selection on overlay removal to prevent stale selectedLayerId

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

---------

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): use committedRef to prevent double-fire on Enter+blur in text edit (#71)

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* fix(video): preserve aspect ratio in compositor rotation and stream rendering (#70)

* feat(nodes): preserve aspect ratio in rotated compositor layers

Replace the stretch-to-fill mapping in scale_blit_rgba_rotated with a uniform-scale fit (object-fit: contain). When a rotated layer's source aspect ratio differs from the destination rect the image is now centred with transparent padding instead of being distorted.

- Compute fit_scale = min(rw/sw, rh/sh) for uniform scaling
- Use content-local half-widths (half_cw, half_ch) for the bounding box and edge anti-aliasing distances
- Map content coords → source pixels via inv_fit_scale instead of normalising through the full rect dimensions
- Add test_rotated_blit_preserves_aspect_ratio unit test
- Update sample pipeline comment to document the behaviour

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(nodes): account for rotation angle in compositor fit scale

The previous fit scale only considered the source-to-rect aspect ratio mismatch, which had no effect when both shared the same ratio (e.g. 4:3 source in a 4:3 rect). The real issue is that a rotated rectangle's axis-aligned bounding box is larger than the original, so the content must be scaled down to fit within the rect after rotation.

New formula:
rotated_bb_w = src_w·|cos θ| + src_h·|sin θ|
rotated_bb_h = src_w·|sin θ| + src_h·|cos θ|
fit_scale = min(rect_w / rotated_bb_w, rect_h / rotated_bb_h)

This ensures the rotated content fits entirely within the destination rect with transparent padding, producing a natural-looking rotation regardless of aspect ratio match.

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): derive canvas aspect ratio from stream dimensions

Replace hardcoded aspectRatio CSS values ('4 / 3' in StreamView, '16 / 9' in OutputPreviewPanel) with a dynamic value observed from the canvas element's width/height attributes. The new useCanvasAspectRatio hook uses a MutationObserver to track attribute changes made by the Hang video renderer, ensuring the displayed aspect ratio always matches the actual video stream.
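The rotation-aware fit formula from the commit above translates directly into code: scale the source so its rotated axis-aligned bounding box fits inside the destination rect (a sketch mirroring the stated formula, not the actual kernel):

```rust
/// Uniform scale factor so a src_w x src_h layer rotated by `theta`
/// radians fits entirely inside a rect_w x rect_h destination rect.
fn fit_scale(src_w: f32, src_h: f32, rect_w: f32, rect_h: f32, theta: f32) -> f32 {
    let (s, c) = (theta.sin().abs(), theta.cos().abs());
    let rotated_bb_w = src_w * c + src_h * s;
    let rotated_bb_h = src_w * s + src_h * c;
    (rect_w / rotated_bb_w).min(rect_h / rotated_bb_h)
}

fn main() {
    // Unrotated content in a same-sized rect needs no scaling.
    assert!((fit_scale(640.0, 480.0, 640.0, 480.0, 0.0) - 1.0).abs() < 1e-6);
    // At 90 degrees a 4:3 source must shrink to 0.75 so its swapped
    // 480x640 bounding box fits the 640x480 rect.
    let s = fit_scale(640.0, 480.0, 640.0, 480.0, std::f32::consts::FRAC_PI_2);
    assert!((s - 0.75).abs() < 1e-3);
}
```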
Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): use auto width on stream canvas to prevent stretching

When the container is wider than what the aspect ratio allows at maxHeight 480px, width: 100% caused the canvas to stretch horizontally. Changed to width: auto + max-width: 100% so the browser computes the width from the aspect ratio and height constraint, then centers the canvas with margin: 0 auto.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(ui): skip default canvas dimensions in aspect ratio hook

Check canvas.getAttribute('width'/'height') before reading the .width/.height properties. A newly-created canvas has default intrinsic dimensions of 300x150 which would be reported as a valid 2:1 ratio, causing a layout shift before the first video frame arrives. Now the hook returns undefined until the Hang renderer explicitly sets the canvas attributes.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(nodes): unify 0° fast path to use aspect-ratio-preserving fit

The near-zero rotation fast path now computes a fitted sub-rect (uniform scale + centering) before delegating to scale_blit_rgba, matching the rotated path's aspect-ratio-preserving behaviour. This eliminates the behavioural discontinuity where 0° rotation would stretch-to-fill while any non-zero rotation would letterbox. Animating rotation through 0° no longer causes a visual pop.

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

---------

Signed-off-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): address 7 UX issues in compositor node (#72)

* fix(compositor-ui): address 7 UX issues in compositor node

Issue #1: Click outside text layer commits inline edit
- Add document.activeElement.blur() in handlePaneClick before deselecting
- Add useEffect on TextOverlayLayer watching isSelected to commit on deselect

Issue #2: Preview panel resizable from all four edges
- Add ResizeEdgeRight and ResizeEdgeBottom styled components
- Extend handleResizeStart edge type to support right/bottom
- Update resizeRef type to match

Issue #3: Monitor view preview extracts MoQ peer settings from pipeline
- Find transport::moq::peer node in pipeline and extract gateway_path/output_broadcast
- Set correct serverUrl and outputBroadcast before connecting
- Import updateUrlPath utility

Issue #4: Deep-compare layer state to prevent position jumps on selection change
- Skip setLayers/setTextOverlays/setImageOverlays when merged state is structurally equal
- Prevents stale server-echoed values from causing visual glitches

Issue #5: Rotate mouse delta for rotated layer resize handles
- Transform (dx, dy) by -rotationDegrees in computeUpdatedLayer
- Makes resize handles behave naturally regardless of layer rotation

Issue #6: Visual separator between layer list and per-layer controls
- Add borderTop and paddingTop to LayerInfoRow for both video and text controls

Issue #7: Text layers support opacity and rotation sliders
- Add rotationDegrees field to TextOverlayState, parse/serialize rotation_degrees
- Add rotation transform to TextOverlayLayer canvas rendering
- Replace numeric opacity input with slider matching video layer controls
- Add rotation slider for text layers

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

* fix(compositor-ui): fix preview drag, text state flicker, overlay throttling, multiline text

- OutputPreviewPanel: make panel body draggable (not just header) with cursor: grab styling so preview behaves like other canvas nodes
- useCompositorLayers: add throttledOverlayCommit for text/image overlay updates (sliders, etc.) to prevent flooding the server on every tick; increase overlay commit guard from 1.5s to 3s to prevent stale params from overwriting local state; arm guard immediately in updateTextOverlay and updateImageOverlay
- CompositorCanvas: change InlineTextInput from <input> to <textarea> for multiline text editing; Enter inserts newline, Ctrl/Cmd+Enter commits; add white-space: pre-wrap and word-break to text content rendering; add ResizeHandles to TextOverlayLayer when selected
- CompositorNode: change OverlayTextInput to <textarea> with vertical resize support for multiline text in node controls panel

Co-Authored-By: Claudio Costa <cstcld91@gmail.com>

---------

Co-authored-by: StreamKit Devin <devin@streamkit.dev>
Co-authored-by: Claudio Costa <cstcld91@gmail.com>

* feat(compositor): consolidate overlay transforms + unified z-sorted blit loop

Backend consolidation:
- Add OverlayTransform struct with #[serde(flatten)] for wire-compatible common spatial/visual properties (rect, opacity, rotation_degrees, z_index)
- Add rotation_degrees and z_index fields to DecodedOverlay
- Replace three separate blit loops (video, image, text) with a single z-sorted BlitItem loop, enabling interleaved layer ordering
- Remove dead blit_overlay() function (replaced by unified path)
- Add SSE2 batched blending for rotated blit interi…
1 parent e7e289b commit 7864251

258 files changed

Lines changed: 39047 additions & 6396 deletions

Lines changed: 51 additions & 0 deletions

@@ -0,0 +1,51 @@
<!--
SPDX-FileCopyrightText: © 2025 StreamKit Contributors

SPDX-License-Identifier: MPL-2.0
-->

# Testing the Video Compositor UI

## Overview
The video compositor node (`video::compositor`) has a visual canvas in the Design view where layers (input video, text overlays, image overlays) can be positioned, resized, and configured.

## Setup
1. Start backend: `SK_SERVER__MOQ_GATEWAY_URL=http://127.0.0.1:4545/moq SK_SERVER__ADDRESS=127.0.0.1:4545 just skit`
2. Start UI: `just ui`
3. Navigate to `http://localhost:3045/design`

## Loading a Compositor Pipeline
- The easiest way to get a compositor node on the canvas is to load a pre-built sample
- Click the **Samples** tab in the left sidebar
- Click **"Video Compositor (MoQ Stream)"** to load a pipeline with a compositor node and two colorbars inputs
- Another option: **"Webcam PiP (MoQ Stream)"** includes a compositor with a text overlay already configured
- Drag-and-drop from the Nodes library is difficult with browser automation; prefer loading samples

## Adding Text Overlays
- The compositor node has a **Layers** panel with an **"Add"** button
- The Add button may be offscreen in the node; use JavaScript `scrollIntoView()` + `click()` to interact with it
- Click **Add > Text** to add a text overlay
- The new text layer appears in the Layers list and on the canvas with a dashed bounding box

## Configuring Text Overlays
- Click a text layer in the Layers list to select it
- The inspector panel shows:
  - **Content**: textarea for the text string
  - **Size**: number input for font size (in pixels)
  - **Font**: dropdown with DejaVu font variants
  - **Color**: color picker
  - **Opacity**: slider
  - **Rotation**: preset buttons (0/90/180/270) and slider
  - **Mirror**: horizontal/vertical toggle buttons
- Use JavaScript `nativeInputValueSetter` + dispatching `input`/`change` events for React-controlled inputs
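The `nativeInputValueSetter` bullet above can be sketched as a small helper intended to run in the page context (e.g. via Playwright's `page.evaluate`). The name `setReactInputValue` is illustrative, not code from this repo:

```typescript
// Hypothetical helper for driving React-controlled inputs from browser
// automation. React tracks input values through the native value setter, so
// assigning el.value directly is swallowed by React's internal value tracker;
// instead, write through the prototype's setter and dispatch bubbling
// input/change events that React's root listener will observe.
// (globalThis casts keep this compilable outside a browser; in the page
// context these are the ordinary DOM globals.)
function setReactInputValue(el: any, value: string): void {
  const proto = (globalThis as any).HTMLInputElement.prototype;
  const setter = Object.getOwnPropertyDescriptor(proto, "value")?.set;
  setter?.call(el, value); // bypass React's own value tracking
  el.dispatchEvent(new (globalThis as any).Event("input", { bubbles: true }));
  el.dispatchEvent(new (globalThis as any).Event("change", { bubbles: true }));
}
```

For textarea controls (the **Content** field), the same pattern applies but with `HTMLTextAreaElement.prototype` as the prototype.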
## Key Behavior to Verify
- Text should be aligned to the **top-left** of its bounding box (matching backend rendering from origin 0,0)
- Font size should be proportional to the bounding box (no double-scaling from CSS transform)
- The `CanvasInner` component applies `transform: scale(scale)` via CSS, so overlay content should use raw pixel values
- Bounding box should auto-expand height to fit text content

## Relevant Files
- `ui/src/components/CompositorCanvas.tsx` — Main canvas component with overlay rendering
- `ui/src/hooks/useCompositorLayers.ts` — Layer state management hook
- `crates/nodes/src/video/compositor/overlay.rs` — Backend text rendering (uses fontdue)

.github/workflows/e2e.yml

Lines changed: 3 additions & 0 deletions

@@ -46,6 +46,9 @@ jobs:
       - uses: Swatinem/rust-cache@v2
+      - name: Install system dependencies (libvpx for VP9 video support)
+        run: sudo apt-get update && sudo apt-get install -y libvpx-dev
       - name: Build skit (debug)
         run: cargo build -p streamkit-server --bin skit

.github/workflows/skit.yml

Lines changed: 9 additions & 0 deletions

@@ -36,6 +36,9 @@ jobs:
          bun install --frozen-lockfile
          bun run build
+      - name: Install system dependencies (libvpx for VP9 video support)
+        run: sudo apt-get update && sudo apt-get install -y libvpx-dev
       - name: Install Rust toolchain
         uses: dtolnay/rust-toolchain@master
         with:
@@ -89,6 +92,9 @@ jobs:
          bun install --frozen-lockfile
          bun run build
+      - name: Install system dependencies (libvpx for VP9 video support)
+        run: sudo apt-get update && sudo apt-get install -y libvpx-dev
       - name: Install Rust toolchain
         uses: dtolnay/rust-toolchain@master
         with:
@@ -129,6 +135,9 @@ jobs:
          bun install --frozen-lockfile
          bun run build
+      - name: Install system dependencies (libvpx for VP9 video support)
+        run: sudo apt-get update && sudo apt-get install -y libvpx-dev
       - name: Install Rust toolchain
         uses: dtolnay/rust-toolchain@master
         with:

AGENTS.md

Lines changed: 105 additions & 0 deletions

@@ -27,6 +27,111 @@ Agent-assisted contributions are welcome, but should be **supervised** and **rev
- Follow `CONTRIBUTING.md` (DCO sign-off, Conventional Commits, SPDX headers where applicable).
- **Linting discipline**: Do not blindly suppress lint warnings or errors with ignore/exception rules. Instead, consider refactoring or improving the code to address the underlying issue. If an exception is truly necessary, it **must** include a comment explaining the rationale.

## Running E2E tests

End-to-end tests live in `e2e/` and use Playwright (Chromium, headless).

1. **Build the UI** and **start the server** in one terminal:

   ```bash
   just build-ui && SK_SERVER__MOQ_GATEWAY_URL=http://127.0.0.1:4545/moq SK_SERVER__ADDRESS=127.0.0.1:4545 just skit
   ```

2. **Run the tests** in a second terminal:

   ```bash
   just e2e-external http://localhost:4545
   ```

### Headless-browser pitfalls

- Playwright runs headless Chromium with a default 1280×720 viewport. Elements rendered below the fold are **not visible** to `IntersectionObserver`. If a test relies on an element being observed (e.g. the `<canvas>` used by the MoQ video renderer), scroll it into view first:

  ```ts
  const canvas = page.locator('canvas');
  await canvas.scrollIntoViewIfNeeded();
  ```

- The `@moq/watch` `Video.Renderer` enables the `Video.Decoder` (and therefore the `video/data` MoQ subscription) **only** when the canvas is intersecting. Forgetting to scroll will result in a permanently black canvas.

## Render performance profiling

StreamKit ships a two-layer profiling infrastructure for detecting render regressions — particularly **cascade re-renders** where a slider interaction (opacity, rotation) triggers expensive re-renders in unrelated memoized components (`UnifiedLayerList`, `OpacityControl`, `RotationControl`, etc.).

### When to use this

- **After touching compositor hooks or components** (`useCompositorLayers`, `CompositorNode`, or any `React.memo`'d sub-component): run the perf tests to verify you haven't broken memoization barriers.
- **When optimising render performance**: use the baseline comparison to measure before/after render counts and durations.
- **In CI**: Layer 1 tests run automatically via `just perf-ui` and will fail if render counts regress beyond the 2σ threshold stored in the baseline.

### Layer 1 — Component-level regression tests (Vitest)

Fast, deterministic tests that measure hook/component render counts in happy-dom. No browser required.

```bash
just perf-ui  # runs all *.perf.test.* files
```

Key files:

| File | Purpose |
|------|---------|
| `ui/src/test/perf/measure.ts` | `measureRenders()` (components) and `measureHookRenders()` (hooks) |
| `ui/src/test/perf/compare.ts` | Baseline read/write, 2σ comparison, report formatting |
| `ui/src/hooks/useCompositorLayers.render-perf.test.ts` | Cascade re-render regression tests |
| `perf-baselines.json` (repo root) | Baseline snapshot — committed to track regressions over time |

**Cascade detection pattern**: the render-perf tests simulate rapid slider drags (20 ticks of opacity/rotation) and assert that total render count stays within a budget (currently ≤ 30). If callback references become unstable (e.g. `layers` array in deps instead of `selectedLayerKind`), React.memo barriers break and the render count will blow past the budget, failing the test.
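The cascade-detection budget described above reduces to a simple aggregate check. A minimal sketch of that assertion (the helper name and array-of-counts shape are assumptions for illustration; the real tests use `measureHookRenders` from `measure.ts`):

```typescript
// Sketch of the cascade-detection budget check. N simulated slider ticks
// must not push the total render count past a fixed budget; going over the
// budget is taken as evidence that a React.memo barrier has broken and
// memoized children are re-rendering on every tick.
const RENDER_BUDGET = 30; // current budget per the docs above

function cascadeDetected(
  renderCountsPerTick: number[],
  budget: number = RENDER_BUDGET,
): boolean {
  const total = renderCountsPerTick.reduce((sum, n) => sum + n, 0);
  return total > budget; // over budget => cascade re-render suspected
}

// 20 ticks at 1 render each stays within budget:
//   cascadeDetected(Array(20).fill(1))  -> false
// A cascade (e.g. 3 extra memoized children re-rendering per tick) blows it:
//   cascadeDetected(Array(20).fill(4))  -> true
```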
### Layer 2 — Interaction-level profiling (Playwright + React.Profiler)

Real-browser profiling for dev builds. Components wrapped with `React.Profiler` push metrics to `window.__PERF_DATA__` which Playwright tests can read via `page.evaluate()`.

```bash
just perf-e2e  # requires: just skit + just ui (dev server at :3045)
```

Key files:

| File | Purpose |
|------|---------|
| `ui/src/perf/profiler.ts` | Dev-only `PerfProfiler` wrapper + `window.__PERF_DATA__` store |
| `e2e/tests/perf-helpers.ts` | `capturePerfData()` / `resetPerfData()` Playwright utilities |
| `e2e/tests/compositor-perf.spec.ts` | E2E test: creates PiP session, drags all sliders, asserts render budget |

Use Layer 2 when you need real paint/layout timing or want to profile interactions end-to-end with actual browser rendering.
### Updating the baseline

Run `just perf-ui` — the last test in the render-perf suite writes a fresh `perf-baselines.json` (gated behind `UPDATE_PERF_BASELINE=1`, which the `test:perf` script sets automatically). Regular `just test-ui` runs compare against the baseline but never overwrite it. Commit the updated baseline alongside your changes so future runs compare against the new numbers.
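The 2σ regression gate that the baseline comparison applies can be approximated as follows. This is a sketch of the statistical idea only; the function names and baseline shape are assumptions, not the actual `compare.ts` API:

```typescript
// Sketch of a 2-sigma regression gate: a new measurement counts as a
// regression when it exceeds the baseline mean by more than two standard
// deviations of the baseline samples.
function mean(xs: number[]): number {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

function stddev(xs: number[]): number {
  const m = mean(xs);
  return Math.sqrt(xs.reduce((acc, x) => acc + (x - m) ** 2, 0) / xs.length);
}

function regressed(baselineSamples: number[], current: number): boolean {
  const threshold = mean(baselineSamples) + 2 * stddev(baselineSamples);
  return current > threshold;
}

// e.g. baseline render counts [20, 22, 21, 19, 23]: mean 21, stddev ~1.41,
// so anything above ~23.8 renders would count as a regression.
```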
## Docker notes

- Official images are built from `Dockerfile` (CPU) and `Dockerfile.gpu` (GPU-tagged) via `.github/workflows/docker.yml`.
