Deep learning-based TTS engines can take a few seconds (or even up to 10 seconds for the slowest) to synthesize speech for a single sentence. Some providers, such as ElevenLabs or the Android system TTS engine, can stream the synthesized audio to reduce latency, playing it while it is still being generated. This would be useful whenever playback starts from scratch: at the very beginning, when the user selects a different voice, when the user jumps to a location too far away to still be in the cache, etc.
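As an illustration of what streaming looks like with the Android system TTS engine, here is a minimal Kotlin sketch that feeds the audio chunks delivered by `UtteranceProgressListener.onAudioAvailable` (API 24+) into an `AudioTrack`, so playback can begin before the whole sentence has been synthesized. The `StreamingTtsPlayer` class, its `play` method, and the idea of reusing the output file as a cache entry are assumptions for the sake of the example, not part of the navigators' API.

```kotlin
import android.content.Context
import android.media.AudioAttributes
import android.media.AudioFormat
import android.media.AudioTrack
import android.os.Bundle
import android.speech.tts.TextToSpeech
import android.speech.tts.UtteranceProgressListener
import java.io.File

// Hypothetical helper: plays an utterance as it is being synthesized,
// instead of waiting for the full audio to be ready.
class StreamingTtsPlayer(context: Context) {

    private val tts = TextToSpeech(context) { /* handle init status */ }
    private var track: AudioTrack? = null

    init {
        tts.setOnUtteranceProgressListener(object : UtteranceProgressListener() {
            // Called once per utterance with the format of the upcoming audio chunks.
            override fun onBeginSynthesis(
                utteranceId: String, sampleRateInHz: Int, audioFormat: Int, channelCount: Int
            ) {
                val channelMask =
                    if (channelCount == 1) AudioFormat.CHANNEL_OUT_MONO
                    else AudioFormat.CHANNEL_OUT_STEREO
                track = AudioTrack.Builder()
                    .setAudioAttributes(
                        AudioAttributes.Builder()
                            .setUsage(AudioAttributes.USAGE_MEDIA)
                            .setContentType(AudioAttributes.CONTENT_TYPE_SPEECH)
                            .build()
                    )
                    .setAudioFormat(
                        AudioFormat.Builder()
                            .setSampleRate(sampleRateInHz)
                            .setEncoding(audioFormat)
                            .setChannelMask(channelMask)
                            .build()
                    )
                    .setTransferMode(AudioTrack.MODE_STREAM)
                    .build()
                    .also { it.play() }
            }

            // Called repeatedly as the engine produces audio; playback starts
            // as soon as the first chunk arrives, well before synthesis ends.
            override fun onAudioAvailable(utteranceId: String, audio: ByteArray) {
                track?.write(audio, 0, audio.size)
            }

            override fun onStart(utteranceId: String) {}
            override fun onDone(utteranceId: String) { track?.release(); track = null }
            override fun onError(utteranceId: String) { track?.release(); track = null }
        })
    }

    // synthesizeToFile() is what triggers the onAudioAvailable callbacks; the
    // resulting file could double as a cache entry for later replays (assumption).
    fun play(sentence: String, cacheFile: File) {
        tts.synthesizeToFile(sentence, Bundle(), cacheFile, "utterance-${sentence.hashCode()}")
    }
}
```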
At first glance, supporting streaming in the read-aloud navigators significantly increases implementation complexity. Is it something we must do?