feat(llc): hifi stereo playout (Android and iOS) #1173

Open
Brazol wants to merge 10 commits into main from feat/hifi-stereo-playout

Conversation


@Brazol Brazol commented Feb 6, 2026

resolves FLU-168

Summary by CodeRabbit

  • New Features

    • High‑fidelity audio support with selectable profiles (voice standard, voice high quality, music high quality)
    • Per‑track audio bitrate profiles and publish options for finer audio quality control
    • Audio configuration policies (broadcaster, viewer, hiFi, custom) and runtime reinitialization before calls
    • Improved stereo playout/capture handling and SDP mirroring to preserve stereo settings
  • Deprecated Features

    • Legacy Android audio configuration replaced by the new audio configuration policy
  • Chores

    • Updated stream_webrtc_flutter dependency to v2.2.5


coderabbitai bot commented Feb 6, 2026

📝 Walkthrough

Adds HiFi audio support (new SfuAudioBitrate/SfuAudioBitrateProfile, audio bitrate profiles in publish options, and RtcTrackPublishOptions), an AudioConfigurationPolicy framework with runtime reinitialization, SDP stereo mirroring, SDP type refactor, multiple RTC/manager API changes, and bumps stream_webrtc_flutter to ^2.2.5.

Changes

Cohort: Audio bitrate models & publish options
Files: packages/stream_video/lib/src/sfu/data/models/sfu_audio_bitrate.dart, packages/stream_video/lib/src/sfu/data/models/sfu_publish_options.dart, packages/stream_video/lib/src/sfu/sfu_extensions.dart, packages/stream_video/lib/src/sfu/data/events/sfu_event_mapper_extensions.dart, packages/stream_video/lib/src/sfu/data/models/sfu_model_mapper_extensions.dart
Summary: New SfuAudioBitrate and SfuAudioBitrateProfile enum (voiceStandard, voiceHighQuality, musicHighQuality). Added audioBitrateProfiles to SfuPublishOptions, plus JSON/DTO mappers and extensions to map/serialize audio bitrate profiles.

Cohort: Audio bitrate presets & track options
Files: packages/stream_video/lib/src/webrtc/model/rtc_audio_bitrate_preset.dart, packages/stream_video/lib/src/webrtc/rtc_track/rtc_track_publish_options.dart, packages/stream_video/lib/src/webrtc/codecs_helper.dart
Summary: Replaced old bitrate constants with new grouped presets (voiceStandard/voiceHighQuality/musicStandard/musicHighQuality). Added RtcTrackPublishOptions with audioBitrateProfile and a findOptimalAudioLayers helper for encoding selection.

Cohort: Audio configuration policy & notifier
Files: packages/stream_video/lib/src/models/audio_configuration_policy.dart, packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart
Summary: New sealed AudioConfigurationPolicy with broadcaster, viewer, hiFi, and custom variants plus platform config getters. Added RtcMediaDeviceNotifier.reinitializeAudioConfiguration(AudioConfigurationPolicy) and an audio processing state listener.

Cohort: Call state, actions, call API
Files: packages/stream_video/lib/src/call_state.dart, packages/stream_video/lib/src/call/state/mixins/state_call_actions_mixin.dart, packages/stream_video/lib/src/call/call.dart
Summary: Added CallState.audioBitrateProfile with a default; StateCallActionsMixin.setAudioBitrateProfile(); Call.setAudioBitrateProfile() to apply HiFi constraints and update processing and channel count. Removed inline Android audio config calls from setMicrophoneEnabled.

Cohort: RTC manager, factory, transceivers & tracks
Files: packages/stream_video/lib/src/webrtc/rtc_manager.dart, packages/stream_video/lib/src/webrtc/rtc_manager_factory.dart, packages/stream_video/lib/src/webrtc/transceiver_cache.dart, packages/stream_video/lib/src/webrtc/media/audio_constraints.dart, packages/stream_video/lib/src/webrtc/model/rtc_tracks_info.dart
Summary: RtcManager now requires stateManager and streamVideo, stores _defaultAudioConstraints, and supports changeDefaultAudioConstraints and policy-based Apple config. Transceiver caching now carries RtcTrackPublishOptions; TransceiverManager.get returns TransceiverCache?. AudioConstraints gains channelCount. RtcTrackInfo adds dtx, stereo, red.

Cohort: Call session & startup changes
Files: packages/stream_video/lib/src/call/call_session.dart, packages/stream_video/lib/src/coordinator/open_api/open_api_extensions.dart
Summary: Removed private _ensureAndroidAudioConfiguration calls. When creating RTC managers, pass streamVideo, stateManager, callSessionConfig, and publishOptions. OpenAPI conversion now includes hifiAudioEnabled in audio settings.

Cohort: StreamVideo options & exports
Files: packages/stream_video/lib/src/stream_video.dart, packages/stream_video/lib/stream_video.dart, packages/stream_video/CHANGELOG.md
Summary: Added a StreamVideoOptions.constant() const constructor and an AudioConfigurationPolicy audioConfigurationPolicy field; deprecated androidAudioConfiguration. Initialization now calls RtcMediaDeviceNotifier.reinitializeAudioConfiguration(...). Exported audio_configuration_policy.dart and sfu_audio_bitrate.dart.

Cohort: SDP refactor & editing, stereo mirroring
Files: packages/stream_video/lib/src/webrtc/sdp/sdp.dart, packages/stream_video/lib/src/webrtc/sdp/editor/sdp_editor_impl.dart, packages/stream_video/lib/src/webrtc/sdp/editor/action/sdp_edit_action_factory.dart, packages/stream_video/lib/src/webrtc/sdp/editor/rule/sdp_munging_rule.dart, packages/stream_video/lib/src/webrtc/sdp/editor/rule/rule_mirror_sprop_stereo.dart, packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart
Summary: Refactored Sdp into typed subclasses (LocalOffer/LocalAnswer/RemoteOffer/RemoteAnswer). SdpEditor rules now accept optional Sdp context; added MirrorSpropStereoRule and MirrorSpropStereoAction to mirror sprop-stereo from offer to answer.

Cohort: PeerConnection & traced wrapper
Files: packages/stream_video/lib/src/webrtc/peer_connection.dart, packages/stream_video/lib/src/webrtc/traced_peer_connection.dart
Summary: createAnswer signature changed to require an offerSdp parameter; SDP editing forwards offerSdp for local-answer processing.

Cohort: SDP action factory & editor toggles
Files: packages/stream_video/lib/src/webrtc/sdp/editor/sdp_editor_impl.dart, packages/stream_video/lib/src/webrtc/sdp/editor/action/sdp_edit_action_factory.dart
Summary: SdpEditActionFactory.create gained an optional sdp param; the editor enables a new mirrorSpropStereo rule for local answers and adjusts default rule toggles.

Cohort: Tests, examples & pubspecs
Files: packages/stream_video/test/..., packages/stream_video_flutter/example/..., packages/*/pubspec.yaml, melos.yaml
Summary: Updated tests/examples to use StreamVideoOptions.constant() and adjusted const/static usage. Bumped stream_webrtc_flutter to ^2.2.5 across packages. Removed stream_video: ^1.2.4 from melos.yaml.

Sequence Diagram(s)

sequenceDiagram
    participant App as Application
    participant SV as StreamVideo
    participant RTCMD as RtcMediaDeviceNotifier
    participant RTCM as RtcManager
    participant WC as WebRTC/PeerConnection

    App->>SV: Initialize(apiKey, options with audioConfigurationPolicy)
    SV->>RTCMD: reinitializeAudioConfiguration(policy)
    RTCMD->>RTCMD: Apply bypassVoiceProcessing<br/>and platform configs
    RTCMD->>RTCMD: Set stereo playout (iOS) / Android config
    RTCMD->>RTCMD: Listen for audio processing state changes

    App->>SV: Call.setAudioBitrateProfile(profile)
    SV->>RTCM: Update CallState.audioBitrateProfile
    RTCM->>RTCM: changeDefaultAudioConstraints(channelCount, NS/EC/AGC)
    RTCM->>WC: Create/update audio track with new constraints
    WC->>WC: Apply encodings / senders update
sequenceDiagram
    participant Local as Local Peer
    participant LocalEditor as SdpEditor(Local)
    participant Remote as Remote Peer
    participant RemoteEditor as SdpEditor(Remote)
    participant WC as WebRTC

    Local->>WC: Create offer
    WC->>LocalEditor: Edit offer SDP
    Local->>Remote: Send offer SDP
    Remote->>WC: Create answer
    WC->>RemoteEditor: Edit answer SDP (offer context)
    RemoteEditor->>RemoteEditor: Mirror sprop-stereo from offer → answer
    Remote->>Local: Send answer SDP with mirrored stereo

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 I hopped through code to tune the sound,
Profiles set, stereo all around,
Policies chosen, constraints aligned,
HiFi hops and echoes kind,
A tiny rabbit sings—new audio found!

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

• Description check — ⚠️ Warning
  Explanation: The PR description is minimal, containing only a single issue reference without implementation details, goal explanation, or testing information required by the template.
  Resolution: Expand the description to include: Goal (why HIFI stereo support matters), Implementation details (audio configuration policies, bitrate profiles), Testing instructions, and verification that the Contributor Checklist items are addressed.
✅ Passed checks (4 passed)
• Title check — ✅ Passed. The title 'feat(llc): hifi stereo playout (Android and iOS)' clearly and concisely describes the main feature added: HIFI stereo audio support across platforms.
• Linked Issues check — ✅ Passed. All coding objectives from FLU-168 are addressed: audio bitrate profiles (voiceStandard, voiceHighQuality, musicHighQuality), audio configuration policies (broadcaster, viewer, hiFi, custom), stereo playout support for iOS/Android, and runtime configuration via reinitializeAudioConfiguration().
• Out of Scope Changes check — ✅ Passed. All changes are directly aligned with HIFI stereo audio implementation objectives; dependency updates (stream_webrtc_flutter) support the feature requirements without introducing unrelated modifications.
• Docstring Coverage — ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.



@Brazol Brazol changed the title from "feat: hifi stereo playout (Android)" to "feat(llc): hifi stereo playout (Android)" on Feb 6, 2026

codecov bot commented Feb 6, 2026

Codecov Report

❌ Patch coverage is 3.83387% with 301 lines in your changes missing coverage. Please review.
✅ Project coverage is 6.47%. Comparing base (d3ad830) to head (da561df).

Files with missing lines Patch % Lines
...kages/stream_video/lib/src/webrtc/rtc_manager.dart 0.00% 85 Missing ⚠️
packages/stream_video/lib/src/stream_video.dart 8.16% 45 Missing ⚠️
.../editor/action/action_set_opus_stereo_enabled.dart 0.00% 28 Missing ⚠️
...deo/lib/src/models/audio_configuration_policy.dart 8.33% 22 Missing ⚠️
...tc/rtc_media_device/rtc_media_device_notifier.dart 13.04% 20 Missing ⚠️
packages/stream_video/lib/src/call/call.dart 0.00% 14 Missing ⚠️
...deo/lib/src/sfu/data/models/sfu_audio_bitrate.dart 0.00% 12 Missing ⚠️
...c/sfu/data/models/sfu_model_mapper_extensions.dart 0.00% 12 Missing ⚠️
...c/sfu/data/events/sfu_event_mapper_extensions.dart 0.00% 11 Missing ⚠️
...tream_video/lib/src/call/session/call_session.dart 0.00% 9 Missing ⚠️
... and 13 more
Additional details and impacted files
@@           Coverage Diff            @@
##            main   #1173      +/-   ##
========================================
- Coverage   6.47%   6.47%   -0.01%     
========================================
  Files        601     606       +5     
  Lines      42031   42215     +184     
========================================
+ Hits        2723    2732       +9     
- Misses     39308   39483     +175     

☔ View full report in Codecov by Sentry.
@Brazol Brazol changed the title from "feat(llc): hifi stereo playout (Android)" to "feat(llc): hifi stereo playout (Android and iOS)" on Feb 10, 2026
@Brazol Brazol marked this pull request as ready for review February 20, 2026 12:08
@Brazol Brazol requested a review from a team as a code owner February 20, 2026 12:08
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 4

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
packages/stream_video/lib/src/sfu/sfu_extensions.dart (1)

317-325: ⚠️ Potential issue | 🟡 Minor

Remove the duplicate useSingleLayer key

The map now contains both 'useSingleLayer' (camelCase, line 318, pre-existing) and 'use_single_layer' (snake_case, line 321, newly added) for the same field. In a Dart map these are two distinct string keys, so both will appear in the serialized output. The rest of this map uses camelCase, so the newly added snake_case entry should be removed (or, if a snake_case wire format is actually intended, the camelCase key instead).

🐛 Proposed fix
-      'useSingleLayer': useSingleLayer,
       'bitrate': bitrate,
       'fps': fps,
-      'use_single_layer': useSingleLayer,
+      'useSingleLayer': useSingleLayer,
       'audio_bitrate_profiles': audioBitrateProfiles
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/sfu/sfu_extensions.dart` around lines 317 -
325, The map serialization in toJson contains a duplicate field for the same
property: it includes both 'useSingleLayer' and the inconsistent snake_case
'use_single_layer' entries referencing useSingleLayer; remove the redundant
snake_case entry (i.e. delete the "'use_single_layer': useSingleLayer" map
entry) so the map consistently uses camelCase like the other keys (see
useSingleLayer, maxTemporalLayers, audioBitrateProfiles.toJson).
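The duplication is easy to see with a language-agnostic sketch (Python here, since the point applies to Dart maps the same way): two distinct string keys for the same logical field both survive serialization. Field names mirror the review; the values are made up.

```python
import json

# Sketch of the situation the review describes: 'useSingleLayer' and
# 'use_single_layer' are two distinct string keys, so both end up in
# the serialized output instead of one overwriting the other.
payload = {
    'useSingleLayer': True,    # pre-existing camelCase key
    'bitrate': 64000,
    'fps': 30,
    'use_single_layer': True,  # newly added snake_case duplicate
}

encoded = json.dumps(payload)
print(encoded)  # both spellings of the field appear in the JSON
```

Because the keys differ as strings, no language-level deduplication applies; the redundancy has to be removed at the source.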
🧹 Nitpick comments (19)
packages/stream_video_flutter/example/lib/stream_video_options.dart (1)

6-15: Consider caching remote and local as static final fields for consistency.

The remote and local getters now allocate a fresh StreamVideoOptions instance on every access. The three platform-specific options (_web, _android, _ios) are already static final fields and are therefore allocated only once. Making remote and local consistent with that pattern avoids the unnecessary per-call allocation.

♻️ Proposed refactor
 mixin DefaultVideoOptions {
-  static StreamVideoOptions get remote {
-    return StreamVideoOptions();
-  }
-
-  static StreamVideoOptions get local {
-    return StreamVideoOptions(
-      coordinatorRpcUrl: 'http://10.0.0.53:3030/video',
-      coordinatorWsUrl: 'ws://10.0.0.53:8800/video/connect',
-    );
-  }
+  static final StreamVideoOptions remote = StreamVideoOptions();
+
+  static final StreamVideoOptions local = StreamVideoOptions(
+    coordinatorRpcUrl: 'http://10.0.0.53:3030/video',
+    coordinatorWsUrl: 'ws://10.0.0.53:8800/video/connect',
+  );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video_flutter/example/lib/stream_video_options.dart` around
lines 6 - 15, The remote and local getters (StreamVideoOptions.get remote and
StreamVideoOptions.get local) currently create a new StreamVideoOptions on every
access; change them to static final fields (e.g., static final
StreamVideoOptions remote = StreamVideoOptions(...); and static final
StreamVideoOptions local = StreamVideoOptions(...);) so the instances are
allocated once like the existing _web, _android, and _ios fields; update
references to use those fields and remove the getter implementations.
packages/stream_video/lib/src/webrtc/model/rtc_tracks_info.dart (1)

27-29: New fields excluded from value equality — consider adding ==/hashCode.

RtcTrackInfo has no ==/hashCode override, so dtx, stereo, and red (along with all existing fields) will never participate in equality comparisons. If any part of the codebase compares RtcTrackInfo instances by value (e.g., in change-detection or deduplication logic), the new fields will be silently ignored.

♻️ Proposed addition of `==` and `hashCode`
   final bool dtx;
   final bool stereo;
   final bool red;
+
+  @override
+  bool operator ==(Object other) =>
+      identical(this, other) ||
+      other is RtcTrackInfo &&
+          runtimeType == other.runtimeType &&
+          trackId == other.trackId &&
+          trackType == other.trackType &&
+          mid == other.mid &&
+          muted == other.muted &&
+          publishOptionId == other.publishOptionId &&
+          dtx == other.dtx &&
+          stereo == other.stereo &&
+          red == other.red;
+
+  @override
+  int get hashCode => Object.hash(
+        trackId, trackType, mid, muted, publishOptionId, dtx, stereo, red);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/webrtc/model/rtc_tracks_info.dart` around lines
27 - 29, RtcTrackInfo’s new boolean fields (dtx, stereo, red) aren’t included in
value equality because the class lacks operator== and hashCode; update
RtcTrackInfo to override operator== and hashCode to include all existing fields
plus dtx, stereo, and red (or use Equatable if preferred) so
comparisons/deduplication account for the new properties, referencing the
RtcTrackInfo class and its field names when making the change.
packages/stream_video/lib/src/call/state/mixins/state_call_actions_mixin.dart (1)

66-72: Use a named parameter to match the mixin's established convention.

Every other method in this mixin accepts named (required) parameters — setCallLive({required bool isLive}), setCallRecording({required bool isRecording}), setCallAudioProcessing({required bool isAudioProcessing}), etc. setAudioBitrateProfile is the only one that takes a positional argument, making call sites less self-documenting and inconsistent with the rest of the API surface.

♻️ Proposed fix
-  void setAudioBitrateProfile(SfuAudioBitrateProfile profile) {
-    _logger.v(() => '[setAudioBitrateProfile] profile:$profile');
-
-    state = state.copyWith(
-      audioBitrateProfile: profile,
-    );
-  }
+  void setAudioBitrateProfile({required SfuAudioBitrateProfile profile}) {
+    _logger.v(() => '[setAudioBitrateProfile] profile:$profile');
+
+    state = state.copyWith(
+      audioBitrateProfile: profile,
+    );
+  }

Update the internal call site in packages/stream_video/lib/src/call/call.dart:3159:

-    _stateManager.setAudioBitrateProfile(profile);
+    _stateManager.setAudioBitrateProfile(profile: profile);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/stream_video/lib/src/call/state/mixins/state_call_actions_mixin.dart`
around lines 66 - 72, The method setAudioBitrateProfile currently takes a
positional parameter which breaks the mixin's convention of using named required
parameters; change its signature to setAudioBitrateProfile({required
SfuAudioBitrateProfile profile}) and update its internal usages (e.g., the call
site in Call where setAudioBitrateProfile(...) is invoked) to pass profile by
name; keep the logger and state.copyWith(audioBitrateProfile: profile) behavior
unchanged and ensure imports/types remain valid for SfuAudioBitrateProfile.
packages/stream_video/test/src/core/client_state_test.dart (2)

279-281: Remove the commented-out Future.delayed block.

This debug artifact adds noise to the test. If the delay is genuinely not needed, the lines should be deleted outright.

🧹 Proposed cleanup
-          // await Future.delayed(
-          //     const Duration(milliseconds: 10)); // Allow stream to emit
-
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/test/src/core/client_state_test.dart` around lines 279
- 281, Remove the commented-out debug artifact: delete the two commented lines
containing "await Future.delayed(const Duration(milliseconds: 10)); // Allow
stream to emit" from the test (client_state_test.dart) so the test no longer
contains the unused delay; ensure no other commented delay remnants remain in
the same test function.

62-65: Use const StreamVideoOptions.constant(...) for consistency with other test files in this PR.

The PR introduces a StreamVideoOptions.constant(...) named constructor and migrates other test files (e.g., call_allow_multiple_active_calls_test.dart, call_apply_settings_test.dart) to use const StreamVideoOptions.constant(...). These two setUp blocks in client_state_test.dart use the non-const StreamVideoOptions(...) constructor instead, creating an inconsistency.

♻️ Proposed update for both setUp blocks
-          options: StreamVideoOptions(
+          options: const StreamVideoOptions.constant(
             allowMultipleActiveCalls: false,
             autoConnect: false,
           ),
-          options: StreamVideoOptions(
+          options: const StreamVideoOptions.constant(
             allowMultipleActiveCalls: true,
             autoConnect: false,
           ),

Also applies to: 183-186

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/test/src/core/client_state_test.dart` around lines 62 -
65, Two setUp blocks in client_state_test.dart use the non-const constructor
StreamVideoOptions(...) which is inconsistent with other tests; replace those
calls with the new const named constructor StreamVideoOptions.constant(...) in
both occurrences (the setUp blocks that currently pass allowMultipleActiveCalls:
false, autoConnect: false) so the options are compile-time constants and match
other tests; update the two places referencing StreamVideoOptions to use
StreamVideoOptions.constant(...) with the same parameters.
packages/stream_video/test/src/call/fixtures/call_test_helpers.dart (1)

169-169: Consider using StreamVideoOptions.constant() for consistency with the broader test suite.

Other test files (like call_apply_settings_test.dart and call_allow_multiple_active_calls_test.dart) use const StreamVideoOptions.constant(). Since this is a mock stub that simply returns a value, the choice between the plain constructor and the constant constructor doesn't affect test correctness, but aligning with the established pattern would improve consistency across the test suite.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/test/src/call/fixtures/call_test_helpers.dart` at line
169, The test stub returns a non-const StreamVideoOptions instance; update the
mock setup that calls when(() => streamVideo.options).thenReturn(...) to return
the constant variant instead for consistency with other tests—use
StreamVideoOptions.constant() (as a const) when supplying the mocked value so
the mocked streamVideo.options matches the suite convention.
packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart (2)

41-82: Consider extracting the shared audio-section parsing logic.

_findSpropStereoMids and _addStereoToMatchingMids both independently walk the SDP to detect audio m= sections, extract a=mid:, and resolve the Opus payload type via rtpmap. Extracting this traversal into a shared helper (e.g., an iterator or visitor that yields per-section context) would reduce the duplication and make it easier to maintain if the parsing needs evolve.

Also applies to: 84-126

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart`
around lines 41 - 82, Both _findSpropStereoMids and _addStereoToMatchingMids
duplicate the SDP audio-section traversal (detecting "m=audio", reading a=mid,
resolving Opus payload via rtpmap and fmtp); extract that traversal into a
shared helper (e.g., iterateAudioSections or parseAudioSections) that yields a
small context object or stream per audio section containing currentMid, resolved
opusPayloadType, and section lines/attributes so both functions can reuse it;
update _findSpropStereoMids and _addStereoToMatchingMids to call this helper and
perform their specific checks (sprop-stereo detection or adding stereo fmtp)
using the provided context, keeping existing symbols (rtpmapParser, fmtpParser,
AudioCodec.opus) intact.

41-56: Ordering assumption: a=mid: appears before a=fmtp: in the offer.

The method requires currentMid to be non-null (line 75) when it encounters the fmtp line with sprop-stereo=1. If a conforming SDP ever places a=fmtp: before a=mid:, the mid would be missed. This is safe in practice with current WebRTC implementations, but worth a brief inline comment documenting the assumption.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart`
around lines 41 - 56, The function _findSpropStereoMids assumes 'a=mid:' appears
before the corresponding 'a=fmtp:' so currentMid is set when encountering fmtp
lines; add a brief inline comment inside _findSpropStereoMids (near the
variables currentMid and opusPayloadType initialization or before parsing fmtp
lines) stating this ordering assumption and its limitation (i.e., if a=fmtp
appears before a=mid the MID may be missed), so future readers/maintainers are
aware of the constraint.
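The ordering assumption is easier to see in a sketch. The following is a hypothetical reimplementation in Python (the production helper is Dart; all names here are illustrative) of the scan the review describes: a mid is only recorded if a=mid: was seen before the sprop-stereo fmtp line within the same audio section.

```python
# Hypothetical sketch of the sprop-stereo scan discussed above. It assumes
# a=mid: precedes a=fmtp: within each m=audio section -- true for current
# WebRTC stacks, though the SDP grammar does not mandate attribute order.
def find_sprop_stereo_mids(sdp: str) -> set:
    mids = set()
    current_mid = None
    in_audio_section = False
    for line in sdp.splitlines():
        if line.startswith('m='):
            # A new media section resets the per-section state.
            in_audio_section = line.startswith('m=audio')
            current_mid = None
        elif in_audio_section and line.startswith('a=mid:'):
            current_mid = line[len('a=mid:'):].strip()
        elif (in_audio_section
              and line.startswith('a=fmtp:')
              and 'sprop-stereo=1' in line):
            if current_mid is not None:  # missed if fmtp preceded mid
                mids.add(current_mid)
    return mids

offer = '\n'.join([
    'v=0',
    'm=audio 9 UDP/TLS/RTP/SAVPF 111',
    'a=mid:0',
    'a=rtpmap:111 opus/48000/2',
    'a=fmtp:111 minptime=10;sprop-stereo=1;stereo=1',
    'm=video 9 UDP/TLS/RTP/SAVPF 96',
    'a=mid:1',
])
print(find_sprop_stereo_mids(offer))  # {'0'}
```

If a section placed a=fmtp: before a=mid:, current_mid would still be None at the fmtp line and that section's mid would be silently skipped, which is exactly the limitation the inline comment should document.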
packages/stream_video/lib/src/sfu/data/models/sfu_audio_bitrate.dart (1)

27-27: Prefer Object.hash over XOR for hashCode

The XOR approach gives poor distribution: (a ^ b) == (b ^ a) makes (profile, bitrate) and (bitrate-as-profile, profile-as-bitrate) collide, and x ^ x == 0 means any two equal profile.hashCode values collapse to the same slot. With a nullable bitrate, every instance with the same profile and null bitrate hashes identically regardless of other state. Object.hash is the idiomatic Dart 2.14+ replacement.

♻️ Proposed fix
   @override
-  int get hashCode => profile.hashCode ^ bitrate.hashCode;
+  int get hashCode => Object.hash(profile, bitrate);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/sfu/data/models/sfu_audio_bitrate.dart` at line
27, Replace the XOR-based hash in the hashCode getter with Dart's Object.hash to
improve distribution: update the getter for hashCode in the class that uses
profile and bitrate to return Object.hash(profile, bitrate) instead of
profile.hashCode ^ bitrate.hashCode; this handles nullable bitrate correctly and
removes the collision issues from XOR.
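The weakness of an XOR-combined hash can be demonstrated with a short sketch (Python here; the algebra is identical for Dart's hashCode values): XOR is commutative, so field order is lost, and equal operands cancel to zero.

```python
# Why XOR is a poor hash combiner: it is commutative, so swapping two
# fields yields the same combined hash, and x ^ x == 0 collapses any
# pair of equal field hashes to zero.
def xor_combined(a, b):
    return hash(a) ^ hash(b)

# Swapped fields collide (analogous to (profile, bitrate) vs. reversed):
assert xor_combined('musicHighQuality', 510_000) == \
       xor_combined(510_000, 'musicHighQuality')

# Equal fields cancel to zero, regardless of their value:
assert hash(96_000) ^ hash(96_000) == 0
assert hash('voiceStandard') ^ hash('voiceStandard') == 0

# An order-sensitive combiner (tuple hashing here, Object.hash in Dart)
# distinguishes the two orderings:
print(hash(('musicHighQuality', 510_000)) !=
      hash((510_000, 'musicHighQuality')))  # True
```

This is why Object.hash, which mixes its arguments in order, is the idiomatic replacement.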
packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart (2)

120-149: StreamSubscription from _listenForAudioProcessingStateChanges is not stored

The subscription returned by rtc.eventStream.listen(...) is discarded. While the singleton lifecycle makes this acceptable in practice, the lack of an onError handler means any stream error will silently terminate this listener — future onAudioProcessingStateChanged events would go unobserved with no log or recovery path.

♻️ Suggested fix
+  StreamSubscription<Map<String, dynamic>>? _audioProcessingSubscription;

   void _listenForAudioProcessingStateChanges() {
-    rtc.eventStream.listen((data) {
+    _audioProcessingSubscription = rtc.eventStream.listen((data) {
       if (data.isEmpty) return;
       // ...
+    }, onError: (Object e, StackTrace st) {
+      _tracer.trace('audioProcessingStateChangedError', {'error': e.toString()});
     });
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart`
around lines 120 - 149, In _listenForAudioProcessingStateChanges, the
StreamSubscription returned by rtc.eventStream.listen is dropped and has no
onError handler; capture it (introduce a field like _audioProcessingStateSub),
assign the subscription returned by rtc.eventStream.listen(..., onError: (e, st)
{ _tracer.trace('audioProcessingStateError', {'error': e.toString(), 'stack':
st.toString()}); }) and add proper cancellation in the notifier's dispose/close
method (call _audioProcessingStateSub?.cancel()) so errors are logged and the
subscription can be cleaned up.

255-275: The setiOSStereoPlayoutPreferred method exists in stream_webrtc_flutter ^2.2.5 — no compilation error.

However, the direct coupling of stereo playout to policy.bypassVoiceProcessing is architecturally inflexible. A future policy that bypasses voice processing without wanting stereo output would incorrectly enable it. Consider decoupling the stereo preference from the bypass flag by adding a dedicated stereoPlayoutPreferred field to AudioConfigurationPolicy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart`
around lines 255 - 275, The reinitializeAudioConfiguration currently ties iOS
stereo preference to policy.bypassVoiceProcessing which is inflexible; add a new
boolean field (e.g., stereoPlayoutPreferred) to AudioConfigurationPolicy and use
that instead of policy.bypassVoiceProcessing in
rtc.Helper.setiOSStereoPlayoutPreferred; update AudioConfigurationPolicy
constructors/defaults and any callers to supply the new field, and ensure
getAndroidConfiguration()/toMap() remain unchanged so Android behavior is
preserved while iOS stereo is controlled explicitly via the new property.
packages/stream_video/lib/src/webrtc/rtc_track/rtc_track_publish_options.dart (1)

3-9: RtcTrackPublishOptions is missing == / hashCode / toString()

All fields are final but == relies on object identity, so two RtcTrackPublishOptions with identical audioBitrateProfile will not compare equal. If the transceiver or cache layer ever checks options equality (e.g., to avoid redundant reinit), this will silently fail.

♻️ Suggested additions
+import 'package:meta/meta.dart';
+
+@immutable
 class RtcTrackPublishOptions {
   const RtcTrackPublishOptions({
     this.audioBitrateProfile = SfuAudioBitrateProfile.voiceStandard,
   });

   final SfuAudioBitrateProfile audioBitrateProfile;
+
+  @override
+  bool operator ==(Object other) =>
+      identical(this, other) ||
+      other is RtcTrackPublishOptions &&
+          runtimeType == other.runtimeType &&
+          audioBitrateProfile == other.audioBitrateProfile;
+
+  @override
+  int get hashCode => audioBitrateProfile.hashCode;
+
+  @override
+  String toString() =>
+      'RtcTrackPublishOptions{audioBitrateProfile: $audioBitrateProfile}';
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/stream_video/lib/src/webrtc/rtc_track/rtc_track_publish_options.dart`
around lines 3 - 9, RtcTrackPublishOptions currently uses identity equality, so
two instances with the same audioBitrateProfile won't compare equal; implement
value equality by overriding operator== and hashCode (and add a helpful
toString()) in the RtcTrackPublishOptions class so comparisons and caching work
correctly. Specifically, in class RtcTrackPublishOptions override bool
operator==(Object other) to check runtimeType and compare audioBitrateProfile,
compute hashCode from audioBitrateProfile, and implement toString() to include
audioBitrateProfile; alternatively you may implement Equatable or a data class
but ensure operator==, hashCode and toString cover the audioBitrateProfile
field.
packages/stream_video/lib/src/call_state.dart (1)

327-337: audioBitrateProfile missing from toString()

The new field is included in props (equality/hashCode) but omitted from toString(), reducing debuggability.

🔧 Suggested fix
  @override
  String toString() {
    return 'CallState(status: $status, currentUserId: $currentUserId,'
        ' callCid: $callCid, createdByUser: $createdByUser,'
        ' sessionId: $sessionId, isRecording: $isRecording,'
+       ' audioBitrateProfile: $audioBitrateProfile,'
        ' settings: $settings, egress: $egress, '
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/call_state.dart` around lines 327 - 337,
CallState.toString() is missing the audioBitrateProfile field (even though it’s
included in props), so update the CallState.toString() method to include
audioBitrateProfile in the returned string alongside the other fields (e.g.,
append 'audioBitrateProfile: $audioBitrateProfile' in the same concatenation),
ensuring the representation matches the style/ordering of the existing fields
for consistent debuggability.
packages/stream_video/lib/src/models/audio_configuration_policy.dart (2)

170-184: defaultToSpeaker parameter is accepted but ignored in ViewerAudioPolicy and HiFiAudioPolicy.

In BroadcasterAudioPolicy, defaultToSpeaker conditionally controls the audio mode and category options. But in ViewerAudioPolicy (and HiFiAudioPolicy), the parameter is never referenced — the speaker is always defaulted via the const option set. Since getAppleConfiguration is defined on the sealed base class with this parameter, callers may expect it to have an effect. Consider documenting this directly on the method or in a code comment.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/models/audio_configuration_policy.dart` around
lines 170 - 184, ViewerAudioPolicy and HiFiAudioPolicy declare
getAppleConfiguration(bool defaultToSpeaker = false) but ignore the
defaultToSpeaker argument (unlike BroadcasterAudioPolicy), leading to surprising
behavior; update ViewerAudioPolicy.getAppleConfiguration and
HiFiAudioPolicy.getAppleConfiguration to respect the defaultToSpeaker flag
(toggle rtc.AppleAudioMode and include or omit
rtc.AppleAudioCategoryOption.defaultToSpeaker in appleAudioCategoryOptions
consistent with BroadcasterAudioPolicy), or if intentional, add a clear code
comment on those methods (and/or update the base sealed class doc) stating that
defaultToSpeaker is intentionally ignored for these policies so callers know the
behavior.

167-202: Identical platform configurations are intentional by design.

ViewerAudioPolicy and HiFiAudioPolicy are documented to have identical platform-level audio configurations (line 51: "Same audio settings as Viewer Policy"). The differentiation occurs at the Audio Device Module layer—the HiFi policy enables stereo input/output on Android and stereo playout on iOS, which is already documented in the class-level docstrings (lines 214-217) and requirements section (lines 219-222). No code change needed; the existing documentation sufficiently explains this design.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/models/audio_configuration_policy.dart` around
lines 167 - 202, The platform-level audio settings in ViewerAudioPolicy and
HiFiAudioPolicy are intentionally identical; confirm and, if missing or unclear,
update the class docstrings for ViewerAudioPolicy and HiFiAudioPolicy to
explicitly state that platform configurations match and the differences are
implemented at the Audio Device Module (ADM) layer (e.g., HiFi enables stereo
input/output on Android and stereo playout on iOS); locate the classes
ViewerAudioPolicy and HiFiAudioPolicy and add a concise sentence in their
docstrings and requirements section clarifying this design so reviewers won’t
expect differing platform configs.
packages/stream_video/lib/src/webrtc/codecs_helper.dart (1)

190-206: New findOptimalAudioLayers — looks correct, minor robustness note.

The fallback map covers all three SfuAudioBitrateProfile enum values, so maxBitrate is always non-null today. If the enum is ever extended without updating this map, maxBitrate would be null, resulting in an RTCRtpEncoding with no bitrate cap (unconstrained). Consider adding a default fallback.

Suggested defensive fallback
   final maxBitrate =
       profileConfig?.bitrate ??
       {
         SfuAudioBitrateProfile.voiceStandard: AudioBitrate.voiceStandard,
         SfuAudioBitrateProfile.voiceHighQuality: AudioBitrate.voiceHighQuality,
         SfuAudioBitrateProfile.musicHighQuality: AudioBitrate.musicHighQuality,
-      }[trackPublishOptions.audioBitrateProfile];
+      }[trackPublishOptions.audioBitrateProfile] ??
+      AudioBitrate.voiceStandard;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/webrtc/codecs_helper.dart` around lines 190 -
206, findOptimalAudioLayers currently computes maxBitrate from profileConfig or
a map keyed by SfuAudioBitrateProfile, but if the enum is extended the map
lookup could yield null leaving RTCRtpEncoding unconstrained; update
findOptimalAudioLayers to ensure maxBitrate is non-null by providing a defensive
default (e.g., fall back to AudioBitrate.voiceStandard or another agreed safe
constant) when both profileConfig?.bitrate and the map lookup are null so the
returned rtc.RTCRtpEncoding always has an explicit maxBitrate.
packages/stream_video/lib/src/sfu/data/events/sfu_event_mapper_extensions.dart (1)

342-370: Extension naming: SfuAudioBitrateExtension2 is unclear.

Both extensions share a similar prefix but operate on different types (AudioBitrateProfile vs AudioBitrate). Consider renaming for clarity, e.g., SfuAudioBitrateProfileExtension and SfuAudioBitrateExtension.

Proposed rename
-extension SfuAudioBitrateExtension on sfu_models.AudioBitrateProfile {
+extension SfuAudioBitrateProfileExtension on sfu_models.AudioBitrateProfile {
   SfuAudioBitrateProfile toDomain() {
     ...
   }
 }

-extension SfuAudioBitrateExtension2 on sfu_models.AudioBitrate {
+extension SfuAudioBitrateExtension on sfu_models.AudioBitrate {
   SfuAudioBitrate toDomain() {
     ...
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@packages/stream_video/lib/src/sfu/data/events/sfu_event_mapper_extensions.dart`
around lines 342 - 370, Rename the unclear extension SfuAudioBitrateExtension2
to a clear, type-reflective name (e.g., SfuAudioBitrateExtension) so it matches
the pattern used by the profile extension and improves readability; update the
extension declaration for the sfu_models.AudioBitrate type (extension
SfuAudioBitrateExtension2 on sfu_models.AudioBitrate) to the new name and adjust
any usages/imports referencing SfuAudioBitrateExtension2 and the constructor
SfuAudioBitrate(toDomain) accordingly to the new extension name to avoid broken
references.
packages/stream_video/lib/src/call/call.dart (1)

3143-3178: Add error handling to unawaited() calls in setAudioBitrateProfile to avoid silently discarding errors.

startAudioProcessing() and stopAudioProcessing() both return Future<Result<None>> and can fail, but are called via unawaited() without error handling at lines 3153–3155. This silently discards any error information. Add .catchError() handlers to log failures, consistent with the pattern used elsewhere in this file (see lines 1325 and 1861–1862).

Similarly, changeDefaultAudioConstraints() returns Future<void> but is not awaited at line 3168, though the null-aware access operators mean it will silently no-op if the session is not connected.
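The failure mode is language-agnostic: a fire-and-forget async call with no error callback drops its failure on the floor. A small Python asyncio sketch (hypothetical names, not the SDK's API) shows how attaching an explicit failure handler keeps the error observable while still not awaiting the call:

```python
import asyncio

errors = []

async def start_audio_processing():
    # Stand-in for the SDK call that can fail after being fired and forgotten.
    raise RuntimeError("audio processing failed")

def log_failure(task):
    # The .catchError(...) analogue: retrieve and record the error, if any.
    exc = task.exception()
    if exc is not None:
        errors.append(f"setAudioBitrateProfile: {exc}")

async def main():
    task = asyncio.ensure_future(start_audio_processing())  # fire and forget
    task.add_done_callback(log_failure)                     # ...but observe failures
    await asyncio.sleep(0)
    await asyncio.sleep(0)  # let the task fail and the done-callback run

asyncio.run(main())
print(errors)  # ['setAudioBitrateProfile: audio processing failed']
```

Without the done-callback, the error would surface (if at all) as an unhandled exception warning, which is the asyncio equivalent of Dart's silently discarded unawaited Future error.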

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/call/call.dart` around lines 3143 - 3178, The
unawaited calls in setAudioBitrateProfile are discarding failures: wrap the
futures returned by startAudioProcessing() and stopAudioProcessing() with
.catchError(...) to log any Result.error using the same logger pattern used
elsewhere (so unawaited(startAudioProcessing().catchError(...)) and
unawaited(stopAudioProcessing().catchError(...))) and include contextual
messages referencing setAudioBitrateProfile; likewise, handle the unawaited
rtc.Helper.setiOSStereoPlayoutPreferred(stereo) and the
_session?.rtcManager?.changeDefaultAudioConstraints(...) call by appending
.catchError(...) to log any thrown errors (or, if changeDefaultAudioConstraints
returns a Result-like Future, inspect and log errors) so no async failure is
silently dropped.
packages/stream_video/lib/src/webrtc/rtc_manager.dart (1)

212-225: changeDefaultAudioConstraints mutes all audio tracks then unmutes — potential issue with stale references.

The localAudioTracks collection is captured from tracks.values before the mute loop. After muteTrack updates tracks[trackId] (line 1050), the second loop still iterates the original (now-stale) track objects to get trackId. This works because trackId is immutable and unmuteTrack re-fetches from tracks, but the lack of error handling here means a failure in any mute/unmute will propagate unhandled to the caller. Consider wrapping in try/catch or at least documenting that callers must handle errors.

Also, there's no guard against concurrent calls to this method — if called twice rapidly, the interleaved mute/unmute sequences could produce unexpected state.
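The suggested re-entrancy guard is simple to model. This Python asyncio sketch (hypothetical names, not the Dart implementation) shows a boolean flag, set on entry and cleared in a finally block, dropping an overlapping call instead of letting two mute/unmute sequences interleave:

```python
import asyncio

class RtcManagerSketch:
    def __init__(self):
        self._changing_audio_constraints = False  # the short-lived guard flag
        self.log = []

    async def change_default_audio_constraints(self, label):
        if self._changing_audio_constraints:
            self.log.append(f"{label}: skipped (already changing)")
            return
        self._changing_audio_constraints = True
        try:
            self.log.append(f"{label}: mute all")
            await asyncio.sleep(0)  # simulated await on muteTrack
            self.log.append(f"{label}: unmute all")
        finally:
            self._changing_audio_constraints = False  # always release the guard

async def main():
    m = RtcManagerSketch()
    await asyncio.gather(
        m.change_default_audio_constraints("first"),
        m.change_default_audio_constraints("second"),
    )
    return m.log

log = asyncio.run(main())
print(log)
# ['first: mute all', 'second: skipped (already changing)', 'first: unmute all']
```

A mutex that queues the second call (rather than dropping it) is the other reasonable choice; which one fits depends on whether a superseded constraints change still needs to be applied.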

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart` around lines 212 -
225, changeDefaultAudioConstraints currently captures localAudioTracks once and
runs mute then unmute loops without error handling or concurrency protection;
update it to (1) serialize operations per-track (iterate
tracks.values.whereType<RtcLocalAudioTrack>() and for each track perform
muteTrack(...) then unmuteTrack(...) so you always resolve current state via
mute/unmute which re-reads tracks), (2) wrap each mute/unmute call in try/catch
to handle/log errors (or collect and rethrow an aggregated error) instead of
letting a single failure bubble uncaught, and (3) guard against concurrent
invocations by adding a short-lived lock/flag (e.g. a private boolean
_changingAudioConstraints or a mutex) checked/set at method entry and cleared on
exit to prevent interleaved calls; reference changeDefaultAudioConstraints,
muteTrack, unmuteTrack, tracks, and RtcLocalAudioTrack when making these
changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@packages/stream_video/CHANGELOG.md`:
- Line 8: Update the CHANGELOG line that currently says "(Android only)" to
reflect that the PR adds stereo support for both Android and iOS or clarify the
Android-only parts: edit the sentence referencing `audioConfigurationPolicy` in
`StreamVideoOptions` to mention iOS support too (or explicitly state that only
stereo ADM capture remains Android-only), and ensure `AudioConfigurationPolicy`
and Apple-specific settings like `bypassVoiceProcessing` are noted as
iOS-relevant where appropriate so the changelog matches the implementation.
- Line 15: The CHANGELOG entry containing the markdown line "See the
[documentation](https://getstream.io/video/docs/flutter/guides/camera-and-microphone/high-fidelity-audio/)
for details." points to a 404; remove that link or replace the sentence with a
neutral placeholder such as "Documentation coming soon" (or remove the sentence
entirely) so the CHANGELOG does not reference an unpublished page; update the
line in CHANGELOG.md accordingly.

In `@packages/stream_video/lib/src/stream_video.dart`:
- Around line 192-201: The call to
RtcMediaDeviceNotifier.instance.reinitializeAudioConfiguration can throw so the
.then branch may never run and webrtcInitializationCompleter remains unresolved;
update the block around reinitializeAudioConfiguration (the call in
stream_video.dart that references CurrentPlatform and
RtcMediaDeviceNotifier.instance) to handle errors by adding a .catchError or
wrapping the awaited call in try/catch, ensure that on error you call
webrtcInitializationCompleter.completeError(error) (or at minimum complete the
completer) and log the error, and keep the existing logic for
precacheGenericSdps/RtcManager.cacheGenericSdp inside the success path so SDP
caching only runs when initialization succeeds.

In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart`:
- Around line 737-738: The variable name `cashedTransceiver` is a typo and
should be `cachedTransceiver`; update all occurrences in publishAudioTrack and
publishVideoTrack to use `cachedTransceiver` instead of `cashedTransceiver`
(e.g., the assignment from transceiversManager.get(option)?.transceiver and any
subsequent uses), ensuring references inside those methods (and any related null
checks or logging) are renamed consistently.

---

Outside diff comments:
In `@packages/stream_video/lib/src/sfu/sfu_extensions.dart`:
- Around line 317-325: The map serialization in toJson contains a duplicate
field for the same property: it includes both 'useSingleLayer' and the
inconsistent snake_case 'use_single_layer' entries referencing useSingleLayer;
remove the redundant snake_case entry (i.e. delete the "'use_single_layer':
useSingleLayer" map entry) so the map consistently uses camelCase like the other
keys (see useSingleLayer, maxTemporalLayers, audioBitrateProfiles.toJson).

---

Nitpick comments:
In `@packages/stream_video_flutter/example/lib/stream_video_options.dart`:
- Around line 6-15: The remote and local getters (StreamVideoOptions.get remote
and StreamVideoOptions.get local) currently create a new StreamVideoOptions on
every access; change them to static final fields (e.g., static final
StreamVideoOptions remote = StreamVideoOptions(...); and static final
StreamVideoOptions local = StreamVideoOptions(...);) so the instances are
allocated once like the existing _web, _android, and _ios fields; update
references to use those fields and remove the getter implementations.

In `@packages/stream_video/lib/src/call_state.dart`:
- Around line 327-337: CallState.toString() is missing the audioBitrateProfile
field (even though it’s included in props), so update the CallState.toString()
method to include audioBitrateProfile in the returned string alongside the other
fields (e.g., append 'audioBitrateProfile: $audioBitrateProfile' in the same
concatenation), ensuring the representation matches the style/ordering of the
existing fields for consistent debuggability.

In `@packages/stream_video/lib/src/call/call.dart`:
- Around line 3143-3178: The unawaited calls in setAudioBitrateProfile are
discarding failures: wrap the futures returned by startAudioProcessing() and
stopAudioProcessing() with .catchError(...) to log any Result.error using the
same logger pattern used elsewhere (so
unawaited(startAudioProcessing().catchError(...)) and
unawaited(stopAudioProcessing().catchError(...))) and include contextual
messages referencing setAudioBitrateProfile; likewise, handle the unawaited
rtc.Helper.setiOSStereoPlayoutPreferred(stereo) and the
_session?.rtcManager?.changeDefaultAudioConstraints(...) call by appending
.catchError(...) to log any thrown errors (or, if changeDefaultAudioConstraints
returns a Result-like Future, inspect and log errors) so no async failure is
silently dropped.

In
`@packages/stream_video/lib/src/call/state/mixins/state_call_actions_mixin.dart`:
- Around line 66-72: The method setAudioBitrateProfile currently takes a
positional parameter which breaks the mixin's convention of using named required
parameters; change its signature to setAudioBitrateProfile({required
SfuAudioBitrateProfile profile}) and update its internal usages (e.g., the call
site in Call where setAudioBitrateProfile(...) is invoked) to pass profile by
name; keep the logger and state.copyWith(audioBitrateProfile: profile) behavior
unchanged and ensure imports/types remain valid for SfuAudioBitrateProfile.

In `@packages/stream_video/lib/src/models/audio_configuration_policy.dart`:
- Around line 170-184: ViewerAudioPolicy and HiFiAudioPolicy declare
getAppleConfiguration(bool defaultToSpeaker = false) but ignore the
defaultToSpeaker argument (unlike BroadcasterAudioPolicy), leading to surprising
behavior; update ViewerAudioPolicy.getAppleConfiguration and
HiFiAudioPolicy.getAppleConfiguration to respect the defaultToSpeaker flag
(toggle rtc.AppleAudioMode and include or omit
rtc.AppleAudioCategoryOption.defaultToSpeaker in appleAudioCategoryOptions
consistent with BroadcasterAudioPolicy), or if intentional, add a clear code
comment on those methods (and/or update the base sealed class doc) stating that
defaultToSpeaker is intentionally ignored for these policies so callers know the
behavior.
- Around line 167-202: The platform-level audio settings in ViewerAudioPolicy
and HiFiAudioPolicy are intentionally identical; confirm and, if missing or
unclear, update the class docstrings for ViewerAudioPolicy and HiFiAudioPolicy
to explicitly state that platform configurations match and the differences are
implemented at the Audio Device Module (ADM) layer (e.g., HiFi enables stereo
input/output on Android and stereo playout on iOS); locate the classes
ViewerAudioPolicy and HiFiAudioPolicy and add a concise sentence in their
docstrings and requirements section clarifying this design so reviewers won’t
expect differing platform configs.

In
`@packages/stream_video/lib/src/sfu/data/events/sfu_event_mapper_extensions.dart`:
- Around line 342-370: Rename the unclear extension SfuAudioBitrateExtension2 to
a clear, type-reflective name (e.g., SfuAudioBitrateExtension) so it matches the
pattern used by the profile extension and improves readability; update the
extension declaration for the sfu_models.AudioBitrate type (extension
SfuAudioBitrateExtension2 on sfu_models.AudioBitrate) to the new name and adjust
any usages/imports referencing SfuAudioBitrateExtension2 and the constructor
SfuAudioBitrate(toDomain) accordingly to the new extension name to avoid broken
references.

In `@packages/stream_video/lib/src/sfu/data/models/sfu_audio_bitrate.dart`:
- Line 27: Replace the XOR-based hash in the hashCode getter with Dart's
Object.hash to improve distribution: update the getter for hashCode in the class
that uses profile and bitrate to return Object.hash(profile, bitrate) instead of
profile.hashCode ^ bitrate.hashCode; this handles nullable bitrate correctly and
removes the collision issues from XOR.

In `@packages/stream_video/lib/src/webrtc/codecs_helper.dart`:
- Around line 190-206: findOptimalAudioLayers currently computes maxBitrate from
profileConfig or a map keyed by SfuAudioBitrateProfile, but if the enum is
extended the map lookup could yield null leaving RTCRtpEncoding unconstrained;
update findOptimalAudioLayers to ensure maxBitrate is non-null by providing a
defensive default (e.g., fall back to AudioBitrate.voiceStandard or another
agreed safe constant) when both profileConfig?.bitrate and the map lookup are
null so the returned rtc.RTCRtpEncoding always has an explicit maxBitrate.

In `@packages/stream_video/lib/src/webrtc/model/rtc_tracks_info.dart`:
- Around line 27-29: RtcTrackInfo’s new boolean fields (dtx, stereo, red) aren’t
included in value equality because the class lacks operator== and hashCode;
update RtcTrackInfo to override operator== and hashCode to include all existing
fields plus dtx, stereo, and red (or use Equatable if preferred) so
comparisons/deduplication account for the new properties, referencing the
RtcTrackInfo class and its field names when making the change.

In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart`:
- Around line 212-225: changeDefaultAudioConstraints currently captures
localAudioTracks once and runs mute then unmute loops without error handling or
concurrency protection; update it to (1) serialize operations per-track (iterate
tracks.values.whereType<RtcLocalAudioTrack>() and for each track perform
muteTrack(...) then unmuteTrack(...) so you always resolve current state via
mute/unmute which re-reads tracks), (2) wrap each mute/unmute call in try/catch
to handle/log errors (or collect and rethrow an aggregated error) instead of
letting a single failure bubble uncaught, and (3) guard against concurrent
invocations by adding a short-lived lock/flag (e.g. a private boolean
_changingAudioConstraints or a mutex) checked/set at method entry and cleared on
exit to prevent interleaved calls; reference changeDefaultAudioConstraints,
muteTrack, unmuteTrack, tracks, and RtcLocalAudioTrack when making these
changes.

In
`@packages/stream_video/lib/src/webrtc/rtc_media_device/rtc_media_device_notifier.dart`:
- Around line 120-149: In _listenForAudioProcessingStateChanges, the
StreamSubscription returned by rtc.eventStream.listen is dropped and has no
onError handler; capture it (introduce a field like _audioProcessingStateSub),
assign the subscription returned by rtc.eventStream.listen(..., onError: (e, st)
{ _tracer.trace('audioProcessingStateError', {'error': e.toString(), 'stack':
st.toString()}); }) and add proper cancellation in the notifier's dispose/close
method (call _audioProcessingStateSub?.cancel()) so errors are logged and the
subscription can be cleaned up.
- Around line 255-275: The reinitializeAudioConfiguration currently ties iOS
stereo preference to policy.bypassVoiceProcessing which is inflexible; add a new
boolean field (e.g., stereoPlayoutPreferred) to AudioConfigurationPolicy and use
that instead of policy.bypassVoiceProcessing in
rtc.Helper.setiOSStereoPlayoutPreferred; update AudioConfigurationPolicy
constructors/defaults and any callers to supply the new field, and ensure
getAndroidConfiguration()/toMap() remain unchanged so Android behavior is
preserved while iOS stereo is controlled explicitly via the new property.

In
`@packages/stream_video/lib/src/webrtc/rtc_track/rtc_track_publish_options.dart`:
- Around line 3-9: RtcTrackPublishOptions currently uses identity equality, so
two instances with the same audioBitrateProfile won't compare equal; implement
value equality by overriding operator== and hashCode (and add a helpful
toString()) in the RtcTrackPublishOptions class so comparisons and caching work
correctly. Specifically, in class RtcTrackPublishOptions override bool
operator==(Object other) to check runtimeType and compare audioBitrateProfile,
compute hashCode from audioBitrateProfile, and implement toString() to include
audioBitrateProfile; alternatively you may implement Equatable or a data class
but ensure operator==, hashCode and toString cover the audioBitrateProfile
field.

In
`@packages/stream_video/lib/src/webrtc/sdp/editor/action/action_mirror_sprop_stereo.dart`:
- Around line 41-82: Both _findSpropStereoMids and _addStereoToMatchingMids
duplicate the SDP audio-section traversal (detecting "m=audio", reading a=mid,
resolving Opus payload via rtpmap and fmtp); extract that traversal into a
shared helper (e.g., iterateAudioSections or parseAudioSections) that yields a
small context object or stream per audio section containing currentMid, resolved
opusPayloadType, and section lines/attributes so both functions can reuse it;
update _findSpropStereoMids and _addStereoToMatchingMids to call this helper and
perform their specific checks (sprop-stereo detection or adding stereo fmtp)
using the provided context, keeping existing symbols (rtpmapParser, fmtpParser,
AudioCodec.opus) intact.
- Around line 41-56: The function _findSpropStereoMids assumes 'a=mid:' appears
before the corresponding 'a=fmtp:' so currentMid is set when encountering fmtp
lines; add a brief inline comment inside _findSpropStereoMids (near the
variables currentMid and opusPayloadType initialization or before parsing fmtp
lines) stating this ordering assumption and its limitation (i.e., if a=fmtp
appears before a=mid the MID may be missed), so future readers/maintainers are
aware of the constraint.

In `@packages/stream_video/test/src/call/fixtures/call_test_helpers.dart`:
- Line 169: The test stub returns a non-const StreamVideoOptions instance;
update the mock setup that calls when(() => streamVideo.options).thenReturn(...)
to return the constant variant instead for consistency with other tests—use
StreamVideoOptions.constant() (as a const) when supplying the mocked value so
the mocked streamVideo.options matches the suite convention.

In `@packages/stream_video/test/src/core/client_state_test.dart`:
- Around line 279-281: Remove the commented-out debug artifact: delete the two
commented lines containing "await Future.delayed(const Duration(milliseconds:
10)); // Allow stream to emit" from the test (client_state_test.dart) so the
test no longer contains the unused delay; ensure no other commented delay
remnants remain in the same test function.
- Around line 62-65: Two setUp blocks in client_state_test.dart use the
non-const constructor StreamVideoOptions(...) which is inconsistent with other
tests; replace those calls with the new const named constructor
StreamVideoOptions.constant(...) in both occurrences (the setUp blocks that
currently pass allowMultipleActiveCalls: false, autoConnect: false) so the
options are compile-time constants and match other tests; update the two places
referencing StreamVideoOptions to use StreamVideoOptions.constant(...) with the
same parameters.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
packages/stream_video/lib/src/webrtc/rtc_manager.dart (2)

212-225: Materialize the lazy iterable before mutating tracks to avoid fragile iteration.

localAudioTracks is a lazy view over tracks.values. Both muteTrack and unmuteTrack replace entries in the tracks map during iteration. While this currently works because Dart's LinkedHashMap doesn't throw on value replacement (no structural change), re-iterating the same lazy iterable in the second loop yields the replaced objects rather than the originals—which happens to be correct here but is subtle and fragile. Calling .toList() makes the intent explicit and guards against future changes (e.g., if muteTrack ever adds/removes keys).

Proposed fix
   Future<void> changeDefaultAudioConstraints(
     AudioConstraints constraints,
   ) async {
     _defaultAudioConstraints = constraints;
 
-    final localAudioTracks = tracks.values.whereType<RtcLocalAudioTrack>();
+    final localAudioTracks =
+        tracks.values.whereType<RtcLocalAudioTrack>().toList();
     for (final track in localAudioTracks) {
       await muteTrack(trackId: track.trackId, stopTrackOnMute: true);
     }
 
     for (final track in localAudioTracks) {
       await unmuteTrack(trackId: track.trackId);
     }
   }
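The live-view subtlety can be demonstrated outside Dart: Python's dict values() view behaves like the lazy view over tracks.values here (an illustrative analogue, not the SDK code). Replacing a value does not invalidate the view, but a second pass yields the replaced objects, while a materialized snapshot (list(...) here, .toList() in Dart) stays stable:

```python
tracks = {"audio-1": "original-track"}

live_view = tracks.values()   # live view over the map's values
snapshot = list(live_view)    # explicit copy, analogous to .toList()

tracks["audio-1"] = "replaced-track"  # value replacement, no structural change

print(list(live_view))  # ['replaced-track']  — the view sees the new object
print(snapshot)         # ['original-track'] — the snapshot is unaffected
```

This is why the current code "happens to work" (trackId is stable across the replacement) yet becomes wrong the moment the mutation adds or removes keys.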
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart` around lines 212 -
225, changeDefaultAudioConstraints is iterating a lazy view over tracks.values
which gets mutated by muteTrack/unmuteTrack; materialize the iterable to a list
first to avoid subtle re-iteration of replaced entries. In
changeDefaultAudioConstraints, replace the lazy localAudioTracks assignment with
a concrete list (e.g., final localAudioTracks =
tracks.values.whereType<RtcLocalAudioTrack>().toList()) and then iterate that
list for both await muteTrack(trackId: ...) and await unmuteTrack(trackId: ...);
keep the update to _defaultAudioConstraints as-is.

998-1031: _updateAudioPublishOptions always updates trackPublishOptions even when unchanged.

Line 1029 unconditionally sets transceiverBundle.trackPublishOptions = options for every matching publish option, even when audioBitrateProfile hasn't changed. This is harmless but slightly misleading — it bypasses the change-detection guard at line 1009.

Move the assignment inside the change guard
       if (current.audioBitrateProfile != options.audioBitrateProfile) {
         // ... encoding update logic ...
+        transceiverBundle.trackPublishOptions = options;
       }
-
-      transceiverBundle.trackPublishOptions = options;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart` around lines 998 -
1031, _updateAudioPublishOptions currently assigns
transceiverBundle.trackPublishOptions = options for every matching publishOption
even when nothing changed; change it so the assignment only happens when the
audioBitrateProfile actually differs (i.e., move the
transceiverBundle.trackPublishOptions = options statement inside the existing if
(current.audioBitrateProfile != options.audioBitrateProfile) block), keeping the
existing encoder/parameter update logic (transceiver.sender.setParameters)
untouched and preserving behavior when encodings.isEmpty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@packages/stream_video/lib/src/stream_video.dart`:
- Around line 192-204: The onError handler for
RtcMediaDeviceNotifier.instance.reinitializeAudioConfiguration currently
swallows errors and simply completes webrtcInitializationCompleter; update the
handler to log the error using the existing logger (e.g., _logger.e or similar)
so initialization failures are observable, while still ensuring
webrtcInitializationCompleter.complete() is called; locate the call site in
stream_video.dart around reinitializeAudioConfiguration and modify the
.onError((error, stack) { ... }) block to log error and stack before completing
the completer.

In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart`:
- Around line 737-770: The code had a misspelled variable name
"cashedTransceiver" which has been corrected to "cachedTransceiver"; ensure all
occurrences are consistently renamed (including where
transceiversManager.get(option)?.transceiver is stored and used) inside
publishAudioTrack and the other similar block (the second occurrence around the
_addTransceiver/_updateTransceiver logic). Verify references to
cachedTransceiver, calls to transceiversManager.get, and logging lines (e.g.,
_logger.v(...) that prints cached transceiver) all use the corrected identifier
so there are no remaining typos or unresolved symbol references.

---

Nitpick comments:
In `@packages/stream_video/lib/src/webrtc/rtc_manager.dart`:
- Around line 212-225: changeDefaultAudioConstraints is iterating a lazy view
over tracks.values which gets mutated by muteTrack/unmuteTrack; materialize the
iterable to a list first to avoid subtle re-iteration of replaced entries. In
changeDefaultAudioConstraints, replace the lazy localAudioTracks assignment with
a concrete list (e.g., final localAudioTracks =
tracks.values.whereType<RtcLocalAudioTrack>().toList()) and then iterate that
list for both await muteTrack(trackId: ...) and await unmuteTrack(trackId: ...);
keep the update to _defaultAudioConstraints as-is.
- Around line 998-1031: _updateAudioPublishOptions currently assigns
transceiverBundle.trackPublishOptions = options for every matching publishOption
even when nothing changed; change it so the assignment only happens when the
audioBitrateProfile actually differs (i.e., move the
transceiverBundle.trackPublishOptions = options statement inside the existing if
(current.audioBitrateProfile != options.audioBitrateProfile) block), keeping the
existing encoder/parameter update logic (transceiver.sender.setParameters)
untouched and preserving behavior when encodings.isEmpty.

Comment on lines +975 to +996
Future<void> _updateTransceiver(
  RTCRtpTransceiver transceiver,
  RtcLocalTrack? track,
  SfuTrackType trackType, {
  RtcTrackPublishOptions? trackPublishOptions,
}) async {
  final previousTrack = transceiver.sender.track;

  // don't stop the track if we are re-publishing the same track
  if (previousTrack != null && previousTrack != track?.mediaTrack) {
    await previousTrack.stop();
  }

  await transceiver.sender.replaceTrack(track?.mediaTrack);

  if (track is RtcLocalAudioTrack) {
    await _updateAudioPublishOptions(
      track.trackType,
      trackPublishOptions ?? const RtcTrackPublishOptions(),
    );
  }
}

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Find all call sites of _updateTransceiver
rg -n '_updateTransceiver' --type=dart -B 2 -A 5 packages/stream_video/lib/src/webrtc/
```

Repository: GetStream/stream-video-flutter

Length of output: 2814


🏁 Script executed:

```shell
#!/bin/bash
# Find _addTransceiver to understand cloning behavior
rg -n '_addTransceiver' --type=dart -B 3 -A 15 packages/stream_video/lib/src/webrtc/rtc_manager.dart
```

Repository: GetStream/stream-video-flutter

Length of output: 2820


🏁 Script executed:

```shell
#!/bin/bash
# Find publishAudioTrack implementation
rg -n 'publishAudioTrack' --type=dart -B 2 -A 20 packages/stream_video/lib/src/webrtc/rtc_manager.dart | head -100
```

Repository: GetStream/stream-video-flutter

Length of output: 3888


🏁 Script executed:

```shell
#!/bin/bash
# Check line 361 context to understand the null track scenario
sed -n '355,370p' packages/stream_video/lib/src/webrtc/rtc_manager.dart
```

Repository: GetStream/stream-video-flutter

Length of output: 573


🏁 Script executed:

```shell
#!/bin/bash
# Check the full publishVideoTrack implementation
sed -n '780,860p' packages/stream_video/lib/src/webrtc/rtc_manager.dart
```

Repository: GetStream/stream-video-flutter

Length of output: 2621


_updateTransceiver: assigning uncloned track to sender after first update.

The initial transceiver created via _addTransceiver uses a cloned track (line 918). However, when _updateTransceiver is called from publishAudioTrack or publishVideoTrack with a cached transceiver, the condition at line 985 compares the previously-assigned clone to the original track, finds them different, stops the clone, and then calls replaceTrack(track?.mediaTrack) to assign the original track to the sender (line 989). This means after the first update, the transceiver references the uncloned original track instead of a clone, violating the design principle stated at line 916-917 ("create a clone of the track as otherwise the same trackId will appear in the SDP in multiple transceivers"). If multiple transceivers exist for the same track type (e.g., different audio bitrate profiles), they would share the same original track across senders.

The fix should either clone the track in _updateTransceiver before replacing, or pass a clone when calling _updateTransceiver from the publish methods.
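A minimal sketch of the first option, cloning inside `_updateTransceiver` itself. Whether the media track exposes a `clone()` matching the one used by `_addTransceiver` at line 918 is an assumption here; the surrounding method shape follows the snippet quoted above:

```dart
Future<void> _updateTransceiver(
  RTCRtpTransceiver transceiver,
  RtcLocalTrack? track,
  SfuTrackType trackType, {
  RtcTrackPublishOptions? trackPublishOptions,
}) async {
  final previousTrack = transceiver.sender.track;
  final mediaTrack = track?.mediaTrack;

  // don't stop the track if we are re-publishing the same track
  if (previousTrack != null && previousTrack != mediaTrack) {
    await previousTrack.stop();
  }

  // Clone before replacing so each transceiver carries its own trackId in
  // the SDP, mirroring the clone created in _addTransceiver.
  final clonedTrack = mediaTrack != null ? await mediaTrack.clone() : null;
  await transceiver.sender.replaceTrack(clonedTrack);

  if (track is RtcLocalAudioTrack) {
    await _updateAudioPublishOptions(
      track.trackType,
      trackPublishOptions ?? const RtcTrackPublishOptions(),
    );
  }
}
```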

```dart
  }) {
    return appleConfiguration ??
        basePolicy.getAppleConfiguration(defaultToSpeaker: defaultToSpeaker);
  }
```
Shouldn't appleConfiguration be a callback so we can forward defaultToSpeaker like this:

    return appleConfigurationGetter?.call(defaultToSpeaker: defaultToSpeaker) ??
        basePolicy.getAppleConfiguration(defaultToSpeaker: defaultToSpeaker);

I also wonder if it wouldn't be easier to make CustomAudioPolicy abstract so devs can extend it. Or are you afraid of breaking changes here?
