feat(llc): hifi stereo playout (Android and iOS) #1173
Conversation
📝 Walkthrough

Adds HiFi audio support (new SfuAudioBitrate/SfuAudioBitrateProfile, audio bitrate profiles in publish options, and RtcTrackPublishOptions), an AudioConfigurationPolicy framework with runtime reinitialization, SDP stereo mirroring, an SDP type refactor, multiple RTC/manager API changes, and a bump of stream_webrtc_flutter to ^2.2.5.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant SV as StreamVideo
    participant RTCMD as RtcMediaDeviceNotifier
    participant RTCM as RtcManager
    participant WC as WebRTC/PeerConnection

    App->>SV: Initialize(apiKey, options with audioConfigurationPolicy)
    SV->>RTCMD: reinitializeAudioConfiguration(policy)
    RTCMD->>RTCMD: Apply bypassVoiceProcessing<br/>and platform configs
    RTCMD->>RTCMD: Set stereo playout (iOS) / Android config
    RTCMD->>RTCMD: Listen for audio processing state changes
    App->>SV: Call.setAudioBitrateProfile(profile)
    SV->>RTCM: Update CallState.audioBitrateProfile
    RTCM->>RTCM: changeDefaultAudioConstraints(channelCount, NS/EC/AGC)
    RTCM->>WC: Create/update audio track with new constraints
    WC->>WC: Apply encodings / senders update
```
```mermaid
sequenceDiagram
    participant Local as Local Peer
    participant LocalEditor as SdpEditor(Local)
    participant Remote as Remote Peer
    participant RemoteEditor as SdpEditor(Remote)
    participant WC as WebRTC

    Local->>WC: Create offer
    WC->>LocalEditor: Edit offer SDP
    Local->>Remote: Send offer SDP
    Remote->>WC: Create answer
    WC->>RemoteEditor: Edit answer SDP (offer context)
    RemoteEditor->>RemoteEditor: Mirror sprop-stereo from offer → answer
    Remote->>Local: Send answer SDP with mirrored stereo
```
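Concretely, the mirroring step copies the stereo capability signaled in the offer's Opus fmtp line into the matching audio m-section of the answer. A hypothetical excerpt (payload type 111 and the attribute values are illustrative):

```
# Offer (excerpt) — remote signals it will send stereo:
m=audio 9 UDP/TLS/RTP/SAVPF 111
a=mid:0
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10;useinbandfec=1;sprop-stereo=1

# Answer, after the SdpEditor mirrors the flag for mid 0:
a=fmtp:111 minptime=10;useinbandfec=1;stereo=1
```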
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
packages/stream_video/lib/src/sfu/sfu_extensions.dart (1)
317-325: ⚠️ Potential issue | 🟡 Minor — Remove the redundant duplicate `useSingleLayer` key

The map now contains both `'useSingleLayer'` (camelCase, line 318, pre-existing) and `'use_single_layer'` (snake_case, line 321, newly added) for the same field. In Dart maps, these are two distinct keys, so both will appear in the output. The rest of this map uses camelCase, so the new snake_case entry is inconsistent and should be removed.

🐛 Proposed fix
```diff
-      'useSingleLayer': useSingleLayer,
       'bitrate': bitrate,
       'fps': fps,
-      'use_single_layer': useSingleLayer,
+      'useSingleLayer': useSingleLayer,
       'audio_bitrate_profiles': audioBitrateProfiles
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/sfu/sfu_extensions.dart` around lines 317 - 325, The map serialization in toJson contains a duplicate field for the same property: it includes both 'useSingleLayer' and the inconsistent snake_case 'use_single_layer' entries referencing useSingleLayer; remove the redundant snake_case entry (i.e. delete the "'use_single_layer': useSingleLayer" map entry) so the map consistently uses camelCase like the other keys (see useSingleLayer, maxTemporalLayers, audioBitrateProfiles.toJson).
🧹 Nitpick comments (19)
packages/stream_video_flutter/example/lib/stream_video_options.dart (1)
6-15: Consider caching `remote` and `local` as `static final` fields for consistency.

The `remote` and `local` getters now allocate a fresh `StreamVideoOptions` instance on every access. The three platform-specific options (`_web`, `_android`, `_ios`) are already `static final` fields and are therefore allocated only once. Making `remote` and `local` consistent with that pattern avoids the unnecessary per-call allocation.

♻️ Proposed refactor
```diff
 mixin DefaultVideoOptions {
-  static StreamVideoOptions get remote {
-    return StreamVideoOptions();
-  }
-
-  static StreamVideoOptions get local {
-    return StreamVideoOptions(
-      coordinatorRpcUrl: 'http://10.0.0.53:3030/video',
-      coordinatorWsUrl: 'ws://10.0.0.53:8800/video/connect',
-    );
-  }
+  static final StreamVideoOptions remote = StreamVideoOptions();
+
+  static final StreamVideoOptions local = StreamVideoOptions(
+    coordinatorRpcUrl: 'http://10.0.0.53:3030/video',
+    coordinatorWsUrl: 'ws://10.0.0.53:8800/video/connect',
+  );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video_flutter/example/lib/stream_video_options.dart` around lines 6 - 15, The remote and local getters (StreamVideoOptions.get remote and StreamVideoOptions.get local) currently create a new StreamVideoOptions on every access; change them to static final fields (e.g., static final StreamVideoOptions remote = StreamVideoOptions(...); and static final StreamVideoOptions local = StreamVideoOptions(...);) so the instances are allocated once like the existing _web, _android, and _ios fields; update references to use those fields and remove the getter implementations.

packages/stream_video/lib/src/webrtc/model/rtc_tracks_info.dart (1)
27-29: New fields excluded from value equality — consider adding `==`/`hashCode`.

`RtcTrackInfo` has no `==`/`hashCode` override, so `dtx`, `stereo`, and `red` (along with all existing fields) will never participate in equality comparisons. If any part of the codebase compares `RtcTrackInfo` instances by value (e.g., in change-detection or deduplication logic), the new fields will be silently ignored.

♻️ Proposed addition of `==` and `hashCode`
```diff
   final bool dtx;
   final bool stereo;
   final bool red;
+
+  @override
+  bool operator ==(Object other) =>
+      identical(this, other) ||
+      other is RtcTrackInfo &&
+          runtimeType == other.runtimeType &&
+          trackId == other.trackId &&
+          trackType == other.trackType &&
+          mid == other.mid &&
+          muted == other.muted &&
+          publishOptionId == other.publishOptionId &&
+          dtx == other.dtx &&
+          stereo == other.stereo &&
+          red == other.red;
+
+  @override
+  int get hashCode => Object.hash(
+      trackId, trackType, mid, muted, publishOptionId, dtx, stereo, red);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/model/rtc_tracks_info.dart` around lines 27 - 29, RtcTrackInfo’s new boolean fields (dtx, stereo, red) aren’t included in value equality because the class lacks operator== and hashCode; update RtcTrackInfo to override operator== and hashCode to include all existing fields plus dtx, stereo, and red (or use Equatable if preferred) so comparisons/deduplication account for the new properties, referencing the RtcTrackInfo class and its field names when making the change.

packages/stream_video/lib/src/call/state/mixins/state_call_actions_mixin.dart (1)
66-72: Use a named parameter to match the mixin's established convention.

Every other method in this mixin accepts named (`required`) parameters — `setCallLive({required bool isLive})`, `setCallRecording({required bool isRecording})`, `setCallAudioProcessing({required bool isAudioProcessing})`, etc. `setAudioBitrateProfile` is the only one that takes a positional argument, making call sites less self-documenting and inconsistent with the rest of the API surface.

♻️ Proposed fix
```diff
-  void setAudioBitrateProfile(SfuAudioBitrateProfile profile) {
-    _logger.v(() => '[setAudioBitrateProfile] profile:$profile');
-
-    state = state.copyWith(
-      audioBitrateProfile: profile,
-    );
-  }
+  void setAudioBitrateProfile({required SfuAudioBitrateProfile profile}) {
+    _logger.v(() => '[setAudioBitrateProfile] profile:$profile');
+
+    state = state.copyWith(
+      audioBitrateProfile: profile,
+    );
+  }
```

Update the internal call site in `packages/stream_video/lib/src/call/call.dart:3159`:

```diff
-    _stateManager.setAudioBitrateProfile(profile);
+    _stateManager.setAudioBitrateProfile(profile: profile);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/call/state/mixins/state_call_actions_mixin.dart` around lines 66 - 72, The method setAudioBitrateProfile currently takes a positional parameter which breaks the mixin's convention of using named required parameters; change its signature to setAudioBitrateProfile({required SfuAudioBitrateProfile profile}) and update its internal usages (e.g., the call site in Call where setAudioBitrateProfile(...) is invoked) to pass profile by name; keep the logger and state.copyWith(audioBitrateProfile: profile) behavior unchanged and ensure imports/types remain valid for SfuAudioBitrateProfile.

packages/stream_video/test/src/core/client_state_test.dart (2)
279-281: Remove the commented-out `Future.delayed` block.

This debug artifact adds noise to the test. If the delay is genuinely not needed, the lines should be deleted outright.
🧹 Proposed cleanup
```diff
-      // await Future.delayed(
-      //     const Duration(milliseconds: 10)); // Allow stream to emit
-
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/test/src/core/client_state_test.dart` around lines 279 - 281, Remove the commented-out debug artifact: delete the two commented lines containing "await Future.delayed(const Duration(milliseconds: 10)); // Allow stream to emit" from the test (client_state_test.dart) so the test no longer contains the unused delay; ensure no other commented delay remnants remain in the same test function.
62-65: Use `const StreamVideoOptions.constant(...)` for consistency with other test files in this PR.

The PR introduces a `StreamVideoOptions.constant(...)` named constructor and migrates other test files (e.g., `call_allow_multiple_active_calls_test.dart`, `call_apply_settings_test.dart`) to use `const StreamVideoOptions.constant(...)`. These two `setUp` blocks in `client_state_test.dart` use the non-const `StreamVideoOptions(...)` constructor instead, creating an inconsistency.

♻️ Proposed update for both setUp blocks
```diff
-      options: StreamVideoOptions(
+      options: const StreamVideoOptions.constant(
         allowMultipleActiveCalls: false,
         autoConnect: false,
       ),
```

```diff
-      options: StreamVideoOptions(
+      options: const StreamVideoOptions.constant(
         allowMultipleActiveCalls: true,
         autoConnect: false,
       ),
```

Also applies to: 183-186
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/test/src/core/client_state_test.dart` around lines 62 - 65, Two setUp blocks in client_state_test.dart use the non-const constructor StreamVideoOptions(...) which is inconsistent with other tests; replace those calls with the new const named constructor StreamVideoOptions.constant(...) in both occurrences (the setUp blocks that currently pass allowMultipleActiveCalls: false, autoConnect: false) so the options are compile-time constants and match other tests; update the two places referencing StreamVideoOptions to use StreamVideoOptions.constant(...) with the same parameters.

packages/stream_video/test/src/call/fixtures/call_test_helpers.dart (1)
169-169: Consider using `StreamVideoOptions.constant()` for consistency with the broader test suite.

Other test files (like `call_apply_settings_test.dart` and `call_allow_multiple_active_calls_test.dart`) use `const StreamVideoOptions.constant()`. Since this is a mock stub that simply returns a value, the choice between the plain constructor and the constant constructor doesn't affect test correctness, but aligning with the established pattern would improve consistency across the test suite.
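For illustration, the aligned stub would look like this (assuming the suite's existing mocktail-style `when` setup):

```dart
when(() => streamVideo.options)
    .thenReturn(const StreamVideoOptions.constant());
```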
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/test/src/call/fixtures/call_test_helpers.dart` at line 169, The test stub returns a non-const StreamVideoOptions instance; update the mock setup that calls when(() => streamVideo.options).thenReturn(...) to return the constant variant instead for consistency with other tests—use StreamVideoOptions.constant() (as a const) when supplying the mocked value so the mocked streamVideo.options matches the suite convention.

packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart (2)
41-82: Consider extracting the shared audio-section parsing logic.

`_findSpropStereoMids` and `_addStereoToMatchingMids` both independently walk the SDP to detect audio `m=` sections, extract `a=mid:`, and resolve the Opus payload type via rtpmap. Extracting this traversal into a shared helper (e.g., an iterator or visitor that yields per-section context) would reduce the duplication and make it easier to maintain if the parsing needs evolve.

Also applies to: 84-126
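A minimal sketch of one possible shape for that helper — `AudioSectionContext` and `iterateAudioSections` are hypothetical names, and the line-based parsing below stands in for the existing rtpmap/fmtp parsers:

```dart
// Hypothetical shared traversal yielding one context per m=audio section.
class AudioSectionContext {
  AudioSectionContext({this.mid, this.opusPayloadType, required this.lines});

  final String? mid;
  final String? opusPayloadType;
  final List<String> lines;
}

Iterable<AudioSectionContext> iterateAudioSections(
  List<String> sdpLines,
) sync* {
  String? mid;
  String? opusPayload;
  var section = <String>[];
  var inAudio = false;

  for (final line in sdpLines) {
    if (line.startsWith('m=')) {
      // Close the previous audio section before starting a new media block.
      if (inAudio) {
        yield AudioSectionContext(
          mid: mid,
          opusPayloadType: opusPayload,
          lines: section,
        );
      }
      inAudio = line.startsWith('m=audio');
      mid = null;
      opusPayload = null;
      section = <String>[];
    }
    if (!inAudio) continue;

    section.add(line);
    if (line.startsWith('a=mid:')) {
      mid = line.substring('a=mid:'.length).trim();
    }
    final rtpmap = RegExp(r'^a=rtpmap:(\d+) opus/').firstMatch(line);
    if (rtpmap != null) {
      opusPayload = rtpmap.group(1);
    }
  }

  if (inAudio) {
    yield AudioSectionContext(
      mid: mid,
      opusPayloadType: opusPayload,
      lines: section,
    );
  }
}
```

Both callers could then iterate the yielded contexts and apply only their specific check (sprop-stereo detection or adding the stereo fmtp).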
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart` around lines 41 - 82, Both _findSpropStereoMids and _addStereoToMatchingMids duplicate the SDP audio-section traversal (detecting "m=audio", reading a=mid, resolving Opus payload via rtpmap and fmtp); extract that traversal into a shared helper (e.g., iterateAudioSections or parseAudioSections) that yields a small context object or stream per audio section containing currentMid, resolved opusPayloadType, and section lines/attributes so both functions can reuse it; update _findSpropStereoMids and _addStereoToMatchingMids to call this helper and perform their specific checks (sprop-stereo detection or adding stereo fmtp) using the provided context, keeping existing symbols (rtpmapParser, fmtpParser, AudioCodec.opus) intact.
41-56: Ordering assumption: `a=mid:` appears before `a=fmtp:` in the offer.

The method requires `currentMid` to be non-null (line 75) when it encounters the fmtp line with `sprop-stereo=1`. If a conforming SDP ever places `a=fmtp:` before `a=mid:`, the mid would be missed. This is safe in practice with current WebRTC implementations, but worth a brief inline comment documenting the assumption.
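For example (placement and wording illustrative):

```dart
// Assumption: within each m=audio section, 'a=mid:' precedes 'a=fmtp:'
// (true for SDPs produced by current WebRTC implementations). If an
// fmtp line ever appeared first, currentMid would still be null and
// that section's MID would be missed.
String? currentMid;
String? opusPayloadType;
```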
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart` around lines 41 - 56, The function _findSpropStereoMids assumes 'a=mid:' appears before the corresponding 'a=fmtp:' so currentMid is set when encountering fmtp lines; add a brief inline comment inside _findSpropStereoMids (near the variables currentMid and opusPayloadType initialization or before parsing fmtp lines) stating this ordering assumption and its limitation (i.e., if a=fmtp appears before a=mid the MID may be missed), so future readers/maintainers are aware of the constraint.

packages/stream_video/lib/src/sfu/data/models/sfu_audio_bitrate.dart (1)
27-27: Prefer `Object.hash` over XOR for `hashCode`.

The XOR approach gives poor distribution: `(a ^ b) == (b ^ a)` makes `(profile, bitrate)` and `(bitrate-as-profile, profile-as-bitrate)` collide, and `x ^ x == 0` means any two equal `profile.hashCode` values collapse to the same slot. With a nullable `bitrate`, every instance with the same profile and `null` bitrate hashes identically regardless of other state. `Object.hash` is the idiomatic Dart 2.14+ replacement.

♻️ Proposed fix
```diff
   @override
-  int get hashCode => profile.hashCode ^ bitrate.hashCode;
+  int get hashCode => Object.hash(profile, bitrate);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/sfu/data/models/sfu_audio_bitrate.dart` at line 27, Replace the XOR-based hash in the hashCode getter with Dart's Object.hash to improve distribution: update the getter for hashCode in the class that uses profile and bitrate to return Object.hash(profile, bitrate) instead of profile.hashCode ^ bitrate.hashCode; this handles nullable bitrate correctly and removes the collision issues from XOR.

packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart (2)
120-149: `StreamSubscription` from `_listenForAudioProcessingStateChanges` is not stored.

The subscription returned by `rtc.eventStream.listen(...)` is discarded. While the singleton lifecycle makes this acceptable in practice, the lack of an `onError` handler means any stream error will silently terminate this listener — future `onAudioProcessingStateChanged` events would go unobserved with no log or recovery path.

♻️ Suggested fix
```diff
+  StreamSubscription<Map<String, dynamic>>? _audioProcessingSubscription;
+
   void _listenForAudioProcessingStateChanges() {
-    rtc.eventStream.listen((data) {
+    _audioProcessingSubscription = rtc.eventStream.listen((data) {
       if (data.isEmpty) return;
       // ...
+    }, onError: (Object e, StackTrace st) {
+      _tracer.trace('audioProcessingStateChangedError', {'error': e.toString()});
     });
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart` around lines 120 - 149, In _listenForAudioProcessingStateChanges, the StreamSubscription returned by rtc.eventStream.listen is dropped and has no onError handler; capture it (introduce a field like _audioProcessingStateSub), assign the subscription returned by rtc.eventStream.listen(..., onError: (e, st) { _tracer.trace('audioProcessingStateError', {'error': e.toString(), 'stack': st.toString()}); }) and add proper cancellation in the notifier's dispose/close method (call _audioProcessingStateSub?.cancel()) so errors are logged and the subscription can be cleaned up.
255-275: The `setiOSStereoPlayoutPreferred` method exists in `stream_webrtc_flutter ^2.2.5` — no compilation error.

However, the direct coupling of stereo playout to `policy.bypassVoiceProcessing` is architecturally inflexible. A future policy that bypasses voice processing without wanting stereo output would incorrectly enable it. Consider decoupling the stereo preference from the bypass flag by adding a dedicated `stereoPlayoutPreferred` field to `AudioConfigurationPolicy`.
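A minimal sketch of that decoupling — `stereoPlayoutPreferred` is a hypothetical field, not part of the current API:

```dart
// Hypothetical field; today stereo playout piggybacks on the
// bypassVoiceProcessing flag.
sealed class AudioConfigurationPolicy {
  const AudioConfigurationPolicy({
    this.bypassVoiceProcessing = false,
    this.stereoPlayoutPreferred = false,
  });

  final bool bypassVoiceProcessing;
  final bool stereoPlayoutPreferred;
}
```

`reinitializeAudioConfiguration` could then pass `policy.stereoPlayoutPreferred` to `rtc.Helper.setiOSStereoPlayoutPreferred(...)` instead of the bypass flag.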
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart` around lines 255 - 275, The reinitializeAudioConfiguration currently ties iOS stereo preference to policy.bypassVoiceProcessing which is inflexible; add a new boolean field (e.g., stereoPlayoutPreferred) to AudioConfigurationPolicy and use that instead of policy.bypassVoiceProcessing in rtc.Helper.setiOSStereoPlayoutPreferred; update AudioConfigurationPolicy constructors/defaults and any callers to supply the new field, and ensure getAndroidConfiguration()/toMap() remain unchanged so Android behavior is preserved while iOS stereo is controlled explicitly via the new property.

packages/stream_video/lib/src/webrtc/rtc_track/rtc_track_publish_options.dart (1)
3-9: `RtcTrackPublishOptions` is missing `==`/`hashCode`/`toString()`.

All fields are `final` but `==` relies on object identity, so two `RtcTrackPublishOptions` with identical `audioBitrateProfile` will not compare equal. If the transceiver or cache layer ever checks options equality (e.g., to avoid redundant reinit), this will silently fail.

♻️ Suggested additions
```diff
+import 'package:meta/meta.dart';
+
+@immutable
 class RtcTrackPublishOptions {
   const RtcTrackPublishOptions({
     this.audioBitrateProfile = SfuAudioBitrateProfile.voiceStandard,
   });

   final SfuAudioBitrateProfile audioBitrateProfile;
+
+  @override
+  bool operator ==(Object other) =>
+      identical(this, other) ||
+      other is RtcTrackPublishOptions &&
+          runtimeType == other.runtimeType &&
+          audioBitrateProfile == other.audioBitrateProfile;
+
+  @override
+  int get hashCode => audioBitrateProfile.hashCode;
+
+  @override
+  String toString() =>
+      'RtcTrackPublishOptions{audioBitrateProfile: $audioBitrateProfile}';
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/rtc_track/rtc_track_publish_options.dart` around lines 3 - 9, RtcTrackPublishOptions currently uses identity equality, so two instances with the same audioBitrateProfile won't compare equal; implement value equality by overriding operator== and hashCode (and add a helpful toString()) in the RtcTrackPublishOptions class so comparisons and caching work correctly. Specifically, in class RtcTrackPublishOptions override bool operator==(Object other) to check runtimeType and compare audioBitrateProfile, compute hashCode from audioBitrateProfile, and implement toString() to include audioBitrateProfile; alternatively you may implement Equatable or a data class but ensure operator==, hashCode and toString cover the audioBitrateProfile field.

packages/stream_video/lib/src/call_state.dart (1)
327-337: `audioBitrateProfile` missing from `toString()`.

The new field is included in `props` (equality/hashCode) but omitted from `toString()`, reducing debuggability.

🔧 Suggested fix
```diff
   @override
   String toString() {
     return 'CallState(status: $status, currentUserId: $currentUserId,'
         ' callCid: $callCid, createdByUser: $createdByUser,'
         ' sessionId: $sessionId, isRecording: $isRecording,'
+        ' audioBitrateProfile: $audioBitrateProfile,'
         ' settings: $settings, egress: $egress, '
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/call_state.dart` around lines 327 - 337, CallState.toString() is missing the audioBitrateProfile field (even though it’s included in props), so update the CallState.toString() method to include audioBitrateProfile in the returned string alongside the other fields (e.g., append 'audioBitrateProfile: $audioBitrateProfile' in the same concatenation), ensuring the representation matches the style/ordering of the existing fields for consistent debuggability.

packages/stream_video/lib/src/models/audio_configuration_policy.dart (2)
170-184: `defaultToSpeaker` parameter is accepted but ignored in `ViewerAudioPolicy` and `HiFiAudioPolicy`.

In `BroadcasterAudioPolicy`, `defaultToSpeaker` conditionally controls the audio mode and category options. But in `ViewerAudioPolicy` (and `HiFiAudioPolicy`), the parameter is never referenced — the speaker is always defaulted via the `const` option set. Since `getAppleConfiguration` is defined on the sealed base class with this parameter, callers may expect it to have an effect. Consider documenting this directly on the method or in a code comment.
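If honoring the flag is preferred over documenting it away, a sketch along the lines of `BroadcasterAudioPolicy` — the category, options, and mode values here are illustrative, not the policies' actual configuration:

```dart
@override
rtc.AppleAudioConfiguration getAppleConfiguration({
  bool defaultToSpeaker = false,
}) {
  return rtc.AppleAudioConfiguration(
    appleAudioCategory: rtc.AppleAudioCategory.playAndRecord,
    appleAudioCategoryOptions: [
      rtc.AppleAudioCategoryOption.allowBluetooth,
      // Only force the speaker route when the caller asked for it.
      if (defaultToSpeaker) rtc.AppleAudioCategoryOption.defaultToSpeaker,
    ],
    appleAudioMode: rtc.AppleAudioMode.voiceChat,
  );
}
```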
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/models/audio_configuration_policy.dart` around lines 170 - 184, ViewerAudioPolicy and HiFiAudioPolicy declare getAppleConfiguration(bool defaultToSpeaker = false) but ignore the defaultToSpeaker argument (unlike BroadcasterAudioPolicy), leading to surprising behavior; update ViewerAudioPolicy.getAppleConfiguration and HiFiAudioPolicy.getAppleConfiguration to respect the defaultToSpeaker flag (toggle rtc.AppleAudioMode and include or omit rtc.AppleAudioCategoryOption.defaultToSpeaker in appleAudioCategoryOptions consistent with BroadcasterAudioPolicy), or if intentional, add a clear code comment on those methods (and/or update the base sealed class doc) stating that defaultToSpeaker is intentionally ignored for these policies so callers know the behavior.
167-202: Identical platform configurations are intentional by design.

`ViewerAudioPolicy` and `HiFiAudioPolicy` are documented to have identical platform-level audio configurations (line 51: "Same audio settings as Viewer Policy"). The differentiation occurs at the Audio Device Module layer — the HiFi policy enables stereo input/output on Android and stereo playout on iOS, which is already documented in the class-level docstrings (lines 214-217) and requirements section (lines 219-222). No code change needed; the existing documentation sufficiently explains this design.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/models/audio_configuration_policy.dart` around lines 167 - 202, The platform-level audio settings in ViewerAudioPolicy and HiFiAudioPolicy are intentionally identical; confirm and, if missing or unclear, update the class docstrings for ViewerAudioPolicy and HiFiAudioPolicy to explicitly state that platform configurations match and the differences are implemented at the Audio Device Module (ADM) layer (e.g., HiFi enables stereo input/output on Android and stereo playout on iOS); locate the classes ViewerAudioPolicy and HiFiAudioPolicy and add a concise sentence in their docstrings and requirements section clarifying this design so reviewers won’t expect differing platform configs.

packages/stream_video/lib/src/webrtc/codecs_helper.dart (1)
190-206: New `findOptimalAudioLayers` — looks correct; minor robustness note.

The fallback map covers all three `SfuAudioBitrateProfile` enum values, so `maxBitrate` is always non-null today. If the enum is ever extended without updating this map, `maxBitrate` would be `null`, resulting in an `RTCRtpEncoding` with no bitrate cap (unconstrained). Consider adding a default fallback.

Suggested defensive fallback
```diff
     final maxBitrate = profileConfig?.bitrate ??
         {
           SfuAudioBitrateProfile.voiceStandard: AudioBitrate.voiceStandard,
           SfuAudioBitrateProfile.voiceHighQuality: AudioBitrate.voiceHighQuality,
           SfuAudioBitrateProfile.musicHighQuality: AudioBitrate.musicHighQuality,
-        }[trackPublishOptions.audioBitrateProfile];
+        }[trackPublishOptions.audioBitrateProfile] ??
+        AudioBitrate.voiceStandard;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/codecs_helper.dart` around lines 190 - 206, findOptimalAudioLayers currently computes maxBitrate from profileConfig or a map keyed by SfuAudioBitrateProfile, but if the enum is extended the map lookup could yield null leaving RTCRtpEncoding unconstrained; update findOptimalAudioLayers to ensure maxBitrate is non-null by providing a defensive default (e.g., fall back to AudioBitrate.voiceStandard or another agreed safe constant) when both profileConfig?.bitrate and the map lookup are null so the returned rtc.RTCRtpEncoding always has an explicit maxBitrate.

packages/stream_video/lib/src/sfu/data/events/sfu_event_mapper_extensions.dart (1)
342-370: Extension naming: `SfuAudioBitrateExtension2` is unclear.

Both extensions share a similar prefix but operate on different types (`AudioBitrateProfile` vs `AudioBitrate`). Consider renaming for clarity, e.g., `SfuAudioBitrateProfileExtension` and `SfuAudioBitrateExtension`.

Proposed rename
```diff
-extension SfuAudioBitrateExtension on sfu_models.AudioBitrateProfile {
+extension SfuAudioBitrateProfileExtension on sfu_models.AudioBitrateProfile {
   SfuAudioBitrateProfile toDomain() { ... }
 }

-extension SfuAudioBitrateExtension2 on sfu_models.AudioBitrate {
+extension SfuAudioBitrateExtension on sfu_models.AudioBitrate {
   SfuAudioBitrate toDomain() { ... }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/sfu/data/events/sfu_event_mapper_extensions.dart` around lines 342 - 370, Rename the unclear extension SfuAudioBitrateExtension2 to a clear, type-reflective name (e.g., SfuAudioBitrateExtension) so it matches the pattern used by the profile extension and improves readability; update the extension declaration for the sfu_models.AudioBitrate type (extension SfuAudioBitrateExtension2 on sfu_models.AudioBitrate) to the new name and adjust any usages/imports referencing SfuAudioBitrateExtension2 and the constructor SfuAudioBitrate(toDomain) accordingly to the new extension name to avoid broken references.

packages/stream_video/lib/src/call/call.dart (1)
3143-3178: Add error handling to `unawaited()` calls in `setAudioBitrateProfile` to avoid silently discarding errors.

`startAudioProcessing()` and `stopAudioProcessing()` both return `Future<Result<None>>` and can fail, but are called via `unawaited()` without error handling at lines 3153–3155. This silently discards any error information. Add `.catchError()` handlers to log failures, consistent with the pattern used elsewhere in this file (see lines 1325 and 1861–1862).

Similarly, `changeDefaultAudioConstraints()` returns `Future<void>` but is not awaited at line 3168, though the null-coalescing operators mean it will silently no-op if the session is not connected.
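A hedged sketch of the logging wrapper — it assumes the SDK's `Result` exposes `isFailure` and reuses the file's `_logger`; adjust to the existing pattern at lines 1325 and 1861–1862:

```dart
// Log both Result-level failures and thrown errors instead of
// discarding them via a bare unawaited().
unawaited(
  startAudioProcessing().then(
    (result) {
      if (result.isFailure) {
        _logger.e(
          () => '[setAudioBitrateProfile] startAudioProcessing failed: $result',
        );
      }
    },
    onError: (Object error, StackTrace stackTrace) => _logger.e(
      () => '[setAudioBitrateProfile] startAudioProcessing threw: $error',
    ),
  ),
);
```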
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/call/call.dart` around lines 3143 - 3178, The unawaited calls in setAudioBitrateProfile are discarding failures: wrap the futures returned by startAudioProcessing() and stopAudioProcessing() with .catchError(...) to log any Result.error using the same logger pattern used elsewhere (so unawaited(startAudioProcessing().catchError(...)) and unawaited(stopAudioProcessing().catchError(...))) and include contextual messages referencing setAudioBitrateProfile; likewise, handle the unawaited rtc.Helper.setiOSStereoPlayoutPreferred(stereo) and the _session?.rtcManager?.changeDefaultAudioConstraints(...) call by appending .catchError(...) to log any thrown errors (or, if changeDefaultAudioConstraints returns a Result-like Future, inspect and log errors) so no async failure is silently dropped.

packages/stream_video/lib/src/webrtc/rtc_manager.dart (1)
212-225: `changeDefaultAudioConstraints` mutes all audio tracks then unmutes — potential issue with stale references.

The `localAudioTracks` collection is captured from `tracks.values` before the mute loop. After `muteTrack` updates `tracks[trackId]` (line 1050), the second loop still iterates the original (now-stale) track objects to get `trackId`. This works because `trackId` is immutable and `unmuteTrack` re-fetches from `tracks`, but the lack of error handling here means a failure in any mute/unmute will propagate unhandled to the caller. Consider wrapping in try/catch or at least documenting that callers must handle errors.

Also, there's no guard against concurrent calls to this method — if called twice rapidly, the interleaved mute/unmute sequences could produce unexpected state.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart` around lines 212 - 225, changeDefaultAudioConstraints currently captures localAudioTracks once and runs mute then unmute loops without error handling or concurrency protection; update it to (1) serialize operations per-track (iterate tracks.values.whereType<RtcLocalAudioTrack>() and for each track perform muteTrack(...) then unmuteTrack(...) so you always resolve current state via mute/unmute which re-reads tracks), (2) wrap each mute/unmute call in try/catch to handle/log errors (or collect and rethrow an aggregated error) instead of letting a single failure bubble uncaught, and (3) guard against concurrent invocations by adding a short-lived lock/flag (e.g. a private boolean _changingAudioConstraints or a mutex) checked/set at method entry and cleared on exit to prevent interleaved calls; reference changeDefaultAudioConstraints, muteTrack, unmuteTrack, tracks, and RtcLocalAudioTrack when making these changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@packages/stream_video/CHANGELOG.md`:
- Line 8: Update the CHANGELOG line that currently says "(Android only)" to
reflect that the PR adds stereo support for both Android and iOS or clarify the
Android-only parts: edit the sentence referencing `audioConfigurationPolicy` in
`StreamVideoOptions` to mention iOS support too (or explicitly state that only
stereo ADM capture remains Android-only), and ensure `AudioConfigurationPolicy`
and Apple-specific settings like `bypassVoiceProcessing` are noted as
iOS-relevant where appropriate so the changelog matches the implementation.
- Line 15: The CHANGELOG entry containing the markdown line "See the
[documentation](https://getstream.io/video/docs/flutter/guides/camera-and-microphone/high-fidelity-audio/)
for details." points to a 404; remove that link or replace the sentence with a
neutral placeholder such as "Documentation coming soon" (or remove the sentence
entirely) so the CHANGELOG does not reference an unpublished page; update the
line in CHANGELOG.md accordingly.
In `@packages/stream_video/lib/src/stream_video.dart`:
- Around line 192-201: The call to
RtcMediaDeviceNotifier.instance.reinitializeAudioConfiguration can throw so the
.then branch may never run and webrtcInitializationCompleter remains unresolved;
update the block around reinitializeAudioConfiguration (the call in
stream_video.dart that references CurrentPlatform and
RtcMediaDeviceNotifier.instance) to handle errors by adding a .catchError or
wrapping the awaited call in try/catch, ensure that on error you call
webrtcInitializationCompleter.completeError(error) (or at minimum complete the
completer) and log the error, and keep the existing logic for
precacheGenericSdps/RtcManager.cacheGenericSdp inside the success path so SDP
caching only runs when initialization succeeds.
In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart`:
- Around line 737-738: The variable name `cashedTransceiver` is a typo and
should be `cachedTransceiver`; update all occurrences in publishAudioTrack and
publishVideoTrack to use `cachedTransceiver` instead of `cashedTransceiver`
(e.g., the assignment from transceiversManager.get(option)?.transceiver and any
subsequent uses), ensuring references inside those methods (and any related null
checks or logging) are renamed consistently.
Actionable comments posted: 1
🧹 Nitpick comments (2)
packages/stream_video/lib/src/webrtc/rtc_manager.dart (2)
212-225: Materialize the lazy iterable before mutating `tracks` to avoid fragile iteration.

`localAudioTracks` is a lazy view over `tracks.values`. Both `muteTrack` and `unmuteTrack` replace entries in the `tracks` map during iteration. While this currently works because Dart's `LinkedHashMap` doesn't throw on value replacement (no structural change), re-iterating the same lazy iterable in the second loop yields the replaced objects rather than the originals — which happens to be correct here but is subtle and fragile. Calling `.toList()` makes the intent explicit and guards against future changes (e.g., if `muteTrack` ever adds/removes keys).

Proposed fix
```diff
   Future<void> changeDefaultAudioConstraints(
     AudioConstraints constraints,
   ) async {
     _defaultAudioConstraints = constraints;

-    final localAudioTracks = tracks.values.whereType<RtcLocalAudioTrack>();
+    final localAudioTracks =
+        tracks.values.whereType<RtcLocalAudioTrack>().toList();

     for (final track in localAudioTracks) {
       await muteTrack(trackId: track.trackId, stopTrackOnMute: true);
     }

     for (final track in localAudioTracks) {
       await unmuteTrack(trackId: track.trackId);
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart` around lines 212 - 225, changeDefaultAudioConstraints is iterating a lazy view over tracks.values which gets mutated by muteTrack/unmuteTrack; materialize the iterable to a list first to avoid subtle re-iteration of replaced entries. In changeDefaultAudioConstraints, replace the lazy localAudioTracks assignment with a concrete list (e.g., final localAudioTracks = tracks.values.whereType<RtcLocalAudioTrack>().toList()) and then iterate that list for both await muteTrack(trackId: ...) and await unmuteTrack(trackId: ...); keep the update to _defaultAudioConstraints as-is.
998-1031: `_updateAudioPublishOptions` always updates `trackPublishOptions` even when unchanged.

Line 1029 unconditionally sets `transceiverBundle.trackPublishOptions = options` for every matching publish option, even when `audioBitrateProfile` hasn't changed. This is harmless but slightly misleading — it bypasses the change-detection guard at line 1009.

Move the assignment inside the change guard
```diff
     if (current.audioBitrateProfile != options.audioBitrateProfile) {
       // ... encoding update logic ...
+      transceiverBundle.trackPublishOptions = options;
     }
-
-    transceiverBundle.trackPublishOptions = options;
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart` around lines 998 - 1031, _updateAudioPublishOptions currently assigns transceiverBundle.trackPublishOptions = options for every matching publishOption even when nothing changed; change it so the assignment only happens when the audioBitrateProfile actually differs (i.e., move the transceiverBundle.trackPublishOptions = options statement inside the existing if (current.audioBitrateProfile != options.audioBitrateProfile) block), keeping the existing encoder/parameter update logic (transceiver.sender.setParameters) untouched and preserving behavior when encodings.isEmpty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@packages/stream_video/lib/src/stream_video.dart`:
- Around line 192-204: The onError handler for
RtcMediaDeviceNotifier.instance.reinitializeAudioConfiguration currently
swallows errors and simply completes webrtcInitializationCompleter; update the
handler to log the error using the existing logger (e.g., _logger.e or similar)
so initialization failures are observable, while still ensuring
webrtcInitializationCompleter.complete() is called; locate the call site in
stream_video.dart around reinitializeAudioConfiguration and modify the
.onError((error, stack) { ... }) block to log error and stack before completing
the completer.
In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart`:
- Around line 737-770: The code had a misspelled variable name
"cashedTransceiver" which has been corrected to "cachedTransceiver"; ensure all
occurrences are consistently renamed (including where
transceiversManager.get(option)?.transceiver is stored and used) inside
publishAudioTrack and the other similar block (the second occurrence around the
_addTransceiver/_updateTransceiver logic). Verify references to
cachedTransceiver, calls to transceiversManager.get, and logging lines (e.g.,
_logger.v(...) that prints cached transceiver) all use the corrected identifier
so there are no remaining typos or unresolved symbol references.
```dart
Future<void> _updateTransceiver(
  RTCRtpTransceiver transceiver,
  RtcLocalTrack? track,
  SfuTrackType trackType, {
  RtcTrackPublishOptions? trackPublishOptions,
}) async {
  final previousTrack = transceiver.sender.track;

  // don't stop the track if we are re-publishing the same track
  if (previousTrack != null && previousTrack != track?.mediaTrack) {
    await previousTrack.stop();
  }

  await transceiver.sender.replaceTrack(track?.mediaTrack);

  if (track is RtcLocalAudioTrack) {
    await _updateAudioPublishOptions(
      track.trackType,
      trackPublishOptions ?? const RtcTrackPublishOptions(),
    );
  }
}
```
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Find all call sites of _updateTransceiver
rg -n '_updateTransceiver' --type=dart -B 2 -A 5 packages/stream_video/lib/src/webrtc/
```

Repository: GetStream/stream-video-flutter
Length of output: 2814
🏁 Script executed:
```bash
#!/bin/bash
# Find _addTransceiver to understand cloning behavior
rg -n '_addTransceiver' --type=dart -B 3 -A 15 packages/stream_video/lib/src/webrtc/rtc_manager.dart
```

Repository: GetStream/stream-video-flutter
Length of output: 2820
🏁 Script executed:
```bash
#!/bin/bash
# Find publishAudioTrack implementation
rg -n 'publishAudioTrack' --type=dart -B 2 -A 20 packages/stream_video/lib/src/webrtc/rtc_manager.dart | head -100
```

Repository: GetStream/stream-video-flutter
Length of output: 3888
🏁 Script executed:
```bash
#!/bin/bash
# Check line 361 context to understand the null track scenario
sed -n '355,370p' packages/stream_video/lib/src/webrtc/rtc_manager.dart
```

Repository: GetStream/stream-video-flutter
Length of output: 573
🏁 Script executed:
```bash
#!/bin/bash
# Check the full publishVideoTrack implementation
sed -n '780,860p' packages/stream_video/lib/src/webrtc/rtc_manager.dart
```

Repository: GetStream/stream-video-flutter
Length of output: 2621
_updateTransceiver: assigning an uncloned track to the sender after the first update.

The initial transceiver created via `_addTransceiver` uses a cloned track (line 918). However, when `_updateTransceiver` is called from `publishAudioTrack` or `publishVideoTrack` with a cached transceiver, the condition at line 985 compares the previously-assigned clone to the original track, finds them different, stops the clone, and then calls `replaceTrack(track?.mediaTrack)` to assign the original track to the sender (line 989). This means that after the first update, the transceiver references the uncloned original track instead of a clone, violating the design principle stated at lines 916-917 ("create a clone of the track as otherwise the same trackId will appear in the SDP in multiple transceivers"). If multiple transceivers exist for the same track type (e.g., different audio bitrate profiles), they would share the same original track across senders.

The fix should either clone the track in `_updateTransceiver` before replacing, or pass a clone when calling `_updateTransceiver` from the publish methods.
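A hedged sketch of the second option — `cloneLocalTrack` is a hypothetical stand-in for whatever cloning `_addTransceiver` already performs at line 918:

```dart
// Hypothetical helper: reuse the cloning _addTransceiver performs, so the
// sender never ends up holding the original track after an update.
final clonedTrack = cloneLocalTrack(track);

await _updateTransceiver(
  cachedTransceiver,
  clonedTrack,
  track.trackType,
  trackPublishOptions: trackPublishOptions,
);
```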
```dart
}) {
  return appleConfiguration ??
      basePolicy.getAppleConfiguration(defaultToSpeaker: defaultToSpeaker);
}
```
Shouldn't `appleConfiguration` be a callback so we can forward `defaultToSpeaker`, like this:

```dart
return appleConfigurationGetter?.call(defaultToSpeaker: defaultToSpeaker) ??
    basePolicy.getAppleConfiguration(defaultToSpeaker: defaultToSpeaker);
```

I also wonder if it isn't easier to make `CustomAudioPolicy` abstract so devs can extend it. Or are you afraid of breaking changes here?
resolves FLU-168
Summary by CodeRabbit
New Features
Deprecated Features
Chores