# Liquid Speech

Real-time speech-to-text transcription for Flutter on iOS and macOS using Apple's native SpeechAnalyzer API.

A Flutter package that provides native iOS 26+ and macOS 26+ real-time speech-to-text transcription, with graceful fallback support for older OS versions.

## Features
- ✅ Real-time speech-to-text transcription (iOS 26+, macOS 26+)
- ✅ Compiles on iOS 14+ and macOS 11+ for broad compatibility
- ✅ Raw transcript updates as the user speaks
- ✅ Simple, intuitive API with runtime availability checks
- ✅ Event-based architecture with streams
- ✅ Proper microphone permission handling
- ✅ Clean resource management and lifecycle handling
## Requirements

- iOS: Package compiles on iOS 14.0+ but only functions on iOS 26.0+
- macOS: Package compiles on macOS 11.0+ but only functions on macOS 26.0+
- Flutter: 3.9.0+
- Dart: 3.5.0+

On earlier OS versions the package still compiles and runs, but `isAvailable()` returns `false`. Use runtime checks to enable the feature conditionally.
## Installation

Add to your `pubspec.yaml`:

```yaml
dependencies:
  liquid_speech: ^0.1.0
```

Then run:

```sh
flutter pub get
```

## Conditional Usage

This package can be safely added to apps targeting older iOS and macOS versions. Use runtime checks to conditionally use the SpeechAnalyzer API based on the device's OS version:
```dart
import 'package:liquid_speech/liquid_speech.dart';

class ConditionalSpeechService {
  late SpeechAnalyzerService _speechAnalyzer;

  Future<bool> initializeSpeechAnalyzer() async {
    _speechAnalyzer = SpeechAnalyzerService();

    // Check if the speech analyzer is available on this device
    final isAvailable = await _speechAnalyzer.isAvailable();

    if (isAvailable) {
      print('✓ SpeechAnalyzer available (iOS 26+ / macOS 26+)');
      return true;
    } else {
      print('✗ SpeechAnalyzer not available (needs iOS 26+ or macOS 26+)');
      // Fall back to an alternative STT solution
      return false;
    }
  }

  Future<void> transcribeAudio() async {
    final isAvailable = await _speechAnalyzer.isAvailable();

    if (!isAvailable) {
      // Use an alternative speech-to-text service
      await _useAlternativeSTT();
      return;
    }

    // Use the native SpeechAnalyzer
    final success = await _speechAnalyzer.startTranscription();
    if (success) {
      // Listen for transcription events
      _speechAnalyzer.transcriptionEvents.listen((event) {
        if (event.type == 'update') {
          print('Transcript: ${event.transcript}');
        }
      });
    }
  }

  Future<void> _useAlternativeSTT() async {
    // Implement fallback speech-to-text (e.g., Google Speech-to-Text)
    print('Using fallback speech-to-text service');
  }
}
```

Best practices:

- Always check `isAvailable()` before using the package
- Call it once at app startup and cache the result
- Provide a graceful fallback for older OS versions
- No conditional imports needed: the package compiles everywhere
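The "check once and cache the result" advice can be sketched as a small wrapper. This helper class and its names are illustrative, not part of the package; only `SpeechAnalyzerService.isAvailable()` comes from the API above.

```dart
import 'package:liquid_speech/liquid_speech.dart';

/// Illustrative helper: probes availability once and caches the result.
class CachedAvailability {
  CachedAvailability(this._service);

  final SpeechAnalyzerService _service;
  Future<bool>? _probe;

  /// Returns the cached availability, crossing the platform channel only
  /// once. Caching the Future (rather than the bool) also deduplicates
  /// concurrent calls made before the first probe completes.
  Future<bool> get isAvailable {
    _probe ??= _service.isAvailable();
    return _probe!;
  }
}
```

Create one instance at app startup and read `isAvailable` wherever you would otherwise call `_speechAnalyzer.isAvailable()` again.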
## Permissions

Add to `ios/Runner/Info.plist`:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access to transcribe your speech</string>
```

Add to `macos/Runner/Info.plist`:

```xml
<key>NSMicrophoneUsageDescription</key>
<string>This app needs microphone access to transcribe your speech</string>
```

## Basic Usage

```dart
import 'package:flutter/material.dart';
import 'package:liquid_speech/liquid_speech.dart';

class MyApp extends StatefulWidget {
  @override
  State<MyApp> createState() => _MyAppState();
}

class _MyAppState extends State<MyApp> {
  late SpeechAnalyzerService _speechAnalyzer;
  String _transcript = '';

  @override
  void initState() {
    super.initState();
    _speechAnalyzer = SpeechAnalyzerService();

    // Listen to transcription events
    _speechAnalyzer.transcriptionEvents.listen((event) {
      setState(() {
        if (event.type == 'update' && event.transcript != null) {
          _transcript = event.transcript!;
        }
      });
    });
  }

  @override
  void dispose() {
    _speechAnalyzer.dispose();
    super.dispose();
  }

  Future<void> _startRecording() async {
    final success = await _speechAnalyzer.startTranscription();
    if (success) {
      print('Transcription started');
    }
  }

  Future<void> _stopRecording() async {
    final transcript = await _speechAnalyzer.stopTranscription();
    print('Final transcript: $transcript');
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(title: const Text('Speech Analyzer')),
      body: Center(
        child: Column(
          mainAxisAlignment: MainAxisAlignment.center,
          children: [
            Text('Transcript: $_transcript'),
            const SizedBox(height: 24),
            ElevatedButton(
              onPressed: _startRecording,
              child: const Text('Start'),
            ),
            ElevatedButton(
              onPressed: _stopRecording,
              child: const Text('Stop'),
            ),
          ],
        ),
      ),
    );
  }
}
```

### Handling Events

```dart
_speechAnalyzer.transcriptionEvents.listen((event) {
  switch (event.type) {
    case 'started':
      print('Transcription started');
      break;
    case 'update':
      print('Text: ${event.transcript}');
      print('Is final: ${event.isFinal}');
      break;
    case 'stopped':
      print('Transcription stopped. Final: ${event.transcript}');
      break;
    case 'error':
      print('Error: ${event.error}');
      break;
  }
});
```

### Checking Availability

```dart
final available = await _speechAnalyzer.isAvailable();
if (!available) {
  print('Speech Analyzer not available on this device');
}
```

## API Reference

### SpeechAnalyzerService

Properties:

- `Stream<TranscriptionEvent> transcriptionEvents` - Stream of transcription events
- `String currentTranscript` - The current raw transcript as it's being spoken

Methods:

- `Future<bool> isAvailable()` - Check if the speech analyzer is available
- `Future<bool> startTranscription()` - Start real-time transcription
- `Future<String?> stopTranscription()` - Stop transcription and return the final transcript
- `void dispose()` - Clean up resources

### TranscriptionEvent

```dart
class TranscriptionEvent {
  String type;        // 'started', 'update', 'stopped', 'error'
  String? transcript; // Transcribed text
  bool isFinal;       // Whether the transcript is final
  DateTime timestamp; // When the event occurred
  String? error;      // Error message if type == 'error'
}
```

## How It Works

1. The user starts recording by calling `startTranscription()`
2. Microphone permission is requested if needed
3. As the user speaks, `update` events are emitted with partial transcripts
4. When the user pauses or stops, `isFinal: true` is set
5. `stopTranscription()` is called to finalize and return the complete transcript
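The steps of this flow can be sketched end-to-end in one function. This is an illustrative sketch using only the API listed above; the fixed ten-second recording window is an arbitrary placeholder for however your app decides when to stop.

```dart
import 'package:liquid_speech/liquid_speech.dart';

Future<void> dictate(SpeechAnalyzerService speech) async {
  // Step 1: start recording; the native side requests mic permission if needed.
  final started = await speech.startTranscription();
  if (!started) return;

  // Steps 3-4: partial transcripts arrive as 'update' events while the user
  // speaks; isFinal flips to true once the user pauses.
  final sub = speech.transcriptionEvents.listen((event) {
    if (event.type == 'update') {
      print('${event.isFinal ? "final" : "partial"}: ${event.transcript}');
    }
  });

  // Placeholder for your app's own stop condition (button press, silence, ...).
  await Future<void>.delayed(const Duration(seconds: 10));

  // Step 5: finalize and collect the complete transcript.
  final transcript = await speech.stopTranscription();
  await sub.cancel();
  print('Complete transcript: $transcript');
}
```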
## Example

Run the example app to see the package in action:

```sh
cd packages/liquid_speech/example
flutter run
```

The example demonstrates:

- Starting/stopping transcription
- Real-time transcript updates
- Event logging
- Error handling
- UI state management
## Architecture

Dart:

```
SpeechAnalyzerService
├── Method Channel (com.liquid.speech/native)
├── Event Stream (TranscriptionEvent)
└── State Management
```

iOS/macOS:

```
SpeechAnalyzerPlugin (auto-registered via the Flutter plugin system)
├── Static Handler Storage (keeps the handler alive for the lifetime of the app)
└── SpeechAnalyzerHandler
    ├── AVAudioEngine (audio capture)
    ├── SpeechAnalyzer API
    ├── SpeechTranscriber (speech-to-text)
    └── AsyncStream<AnalyzerInput> (audio streaming)
```

The plugin registers automatically on app startup via the Flutter plugin system and stores both the handler and the channel as static variables. This keeps them alive for the entire app lifecycle and available to handle method calls from the Dart side.
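To illustrate the Dart half of this architecture, a service like `SpeechAnalyzerService` might wrap the method channel roughly as follows. This is a hedged sketch, not the package's actual implementation: only the channel name comes from the diagram above, and the method names (`'isAvailable'`, `'onTranscriptionEvent'`) are hypothetical.

```dart
import 'dart:async';

import 'package:flutter/services.dart';

/// Sketch of a Dart-side wrapper over the native plugin.
class SpeechChannelSketch {
  SpeechChannelSketch() {
    // Assumption: the native side pushes transcription events back to Dart
    // through method calls on the same channel (an EventChannel would be an
    // equally plausible design).
    _channel.setMethodCallHandler((call) async {
      if (call.method == 'onTranscriptionEvent') {
        _events.add(Map<String, dynamic>.from(call.arguments as Map));
      }
    });
  }

  // Channel name taken from the architecture diagram above.
  static const _channel = MethodChannel('com.liquid.speech/native');

  final _events = StreamController<Map<String, dynamic>>.broadcast();

  /// Raw event maps as received from the platform side.
  Stream<Map<String, dynamic>> get events => _events.stream;

  /// Hypothetical method name; returns false if the platform replies null.
  Future<bool> isAvailable() async =>
      await _channel.invokeMethod<bool>('isAvailable') ?? false;
}
```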
## Limitations

- Single language: currently hardcoded to `en_US`. Future versions will support language selection.
- iOS/macOS only: Android support is not currently implemented.
## Troubleshooting

If you get a microphone permission error, add the microphone permission to your `Info.plist` file (see the Permissions section above).

If no transcription events arrive, make sure you:

- Have added the package to `pubspec.yaml`
- Called `flutter pub get`
- Have microphone permissions granted
- Are listening to the `transcriptionEvents` stream before starting transcription

If the native plugin can't be found, the plugin files usually weren't properly copied. Try:

```sh
flutter clean
flutter pub get
flutter run
```

## Contributing

Contributions are welcome! Please file issues and submit pull requests on GitHub.
## Credits

Built by Cleft AI LTD for WithAmber.com and the Amber Writing app. Open sourced for the community to enable real-time speech-to-text transcription on iOS and macOS.

## License

MIT