The free, open-source competitor to SuperWhisper
A powerful voice-to-text application for macOS that aims to match and exceed SuperWhisper's functionality - without the $8/month subscription. Powered by WhisperKit and optimized for Apple Silicon.
SuperWhisper is a fantastic app, but it costs $8/month (or $96/year). That's a lot for something that can run entirely on your own hardware.
BetterFasterWhisper gives you the same experience:
- Same push-to-talk workflow
- Same local Whisper transcription
- Same auto-paste functionality
- But 100% free, forever.
No subscriptions. No API costs. No data collection. Just fast, private, local speech-to-text.
Everything SuperWhisper does, but free:
- 100% Free & Open Source - No $8/month subscription, no API costs, no hidden fees
- Fully Offline - All processing happens locally on your Mac using CoreML
- Apple Silicon Optimized - Uses WhisperKit with CoreML for blazing fast transcription on M1/M2/M3/M4
- Push-to-Talk - Hold Right Option key to record, release to transcribe (just like SuperWhisper)
- Auto-Paste - Transcribed text is automatically pasted where your cursor is
- Visual Feedback - Elegant overlay with animated waveform during recording and pulsing dots during transcription
- Media Control - Optionally pause playing media while recording
- Privacy First - Your audio never leaves your device
- Menu Bar App - Runs quietly in the background, always accessible
| Recording | Transcribing |
|---|---|
| Animated waveform bars | Pulsing dots indicator |
- macOS 13.0 (Ventura) or later
- Apple Silicon (M1/M2/M3/M4) - Required for CoreML acceleration
- ~2GB disk space - For the app and Whisper model (large-v3-turbo)
Download the latest .dmg from the Releases page.
- Xcode 15+ with Command Line Tools
- macOS 13.0 or later
```bash
# Clone the repository
git clone https://github.com/zeffut/BetterFasterWhisper.git
cd BetterFasterWhisper

# Open in Xcode
open BetterFasterWhisper.xcodeproj

# Or build from the command line
xcodebuild -project BetterFasterWhisper.xcodeproj \
  -scheme BetterFasterWhisper \
  -configuration Release \
  build

# The app bundle will be in DerivedData
```

```bash
# Build, deploy to /Applications, and launch
xcodebuild -project BetterFasterWhisper.xcodeproj -scheme BetterFasterWhisper -configuration Debug build
pkill -9 -f BetterFasterWhisper
cp -R ~/Library/Developer/Xcode/DerivedData/BetterFasterWhisper-*/Build/Products/Debug/BetterFasterWhisper.app /Applications/
open /Applications/BetterFasterWhisper.app
```

- Launch the app - It will appear in your menu bar (microphone icon)
- Grant permissions when prompted:
- Microphone - Required for recording audio
- Accessibility - Required for auto-paste functionality
- Wait for model download - The Whisper model (~1.5GB) will download automatically on first launch
- Start dictating!
- Hold the Right Option (⌥) key to start recording
- Speak - You'll see an animated waveform overlay
- Release the key to stop recording
- Wait - Pulsing dots indicate transcription in progress
- Done - Text is automatically pasted at your cursor position
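The hold-to-record, release-to-transcribe flow above can be sketched with a global flags-changed monitor. This is an illustrative sketch, not the app's actual implementation (which uses a CGEvent tap per the architecture section); `startRecording` and `stopAndTranscribe` are hypothetical callbacks, and observing events in other apps requires the Accessibility permission:

```swift
import AppKit

// Minimal push-to-talk sketch: watch the Right Option modifier globally.
final class PushToTalkSketch {
    private var monitor: Any?
    private var isHolding = false

    // startRecording/stopAndTranscribe are hypothetical stand-ins
    // for the app's real handlers.
    init(startRecording: @escaping () -> Void,
         stopAndTranscribe: @escaping () -> Void) {
        monitor = NSEvent.addGlobalMonitorForEvents(matching: .flagsChanged) { [weak self] event in
            guard let self else { return }
            // keyCode 61 (kVK_RightOption) is the right Option key.
            guard event.keyCode == 61 else { return }
            let optionDown = event.modifierFlags.contains(.option)
            if optionDown && !self.isHolding {
                self.isHolding = true
                startRecording()       // key went down: begin capture
            } else if !optionDown && self.isHolding {
                self.isHolding = false
                stopAndTranscribe()    // key released: hand audio to Whisper
            }
        }
    }

    deinit {
        if let monitor { NSEvent.removeMonitor(monitor) }
    }
}
```

An NSEvent monitor is the simplest way to prototype this; a CGEvent tap (as the app uses) additionally allows consuming the event so the Option press doesn't leak into the focused app.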
Access settings from the menu bar icon:
- Language - The transcription language (French by default)
- Pause media when recording - Automatically pause Spotify/Music/videos while recording
```
BetterFasterWhisper/
├── App/
│   └── BetterFasterWhisper/
│       ├── Sources/
│       │   ├── App/
│       │   │   ├── BetterFasterWhisperApp.swift   # App entry point
│       │   │   └── AppDelegate.swift              # Push-to-talk, overlay, audio levels
│       │   ├── Core/
│       │   │   ├── Audio/
│       │   │   │   └── AudioRecorder.swift        # AVAudioEngine recording
│       │   │   ├── Models/
│       │   │   │   ├── AppState.swift             # Main app state (Observable)
│       │   │   │   ├── TranscriptionResult.swift
│       │   │   │   └── TranscriptionMode.swift
│       │   │   └── Services/
│       │   │       ├── WhisperService.swift       # WhisperKit integration
│       │   │       ├── ModelManager.swift         # Model download management
│       │   │       ├── HotkeyManager.swift        # Global hotkey handling
│       │   │       ├── ClipboardManager.swift     # Copy/paste operations
│       │   │       └── MediaControlManager.swift  # Pause/resume media
│       │   └── UI/Screens/
│       │       ├── MenuBarView.swift              # Menu bar dropdown
│       │       ├── SettingsView.swift             # Settings window
│       │       └── RecordingView.swift
│       ├── Assets.xcassets/
│       ├── Info.plist
│       └── BetterFasterWhisper.entitlements
├── whisper-core/                                  # Rust library (unused in current version)
│   ├── src/
│   └── Cargo.toml
├── BetterFasterWhisper.xcodeproj
├── Package.swift
├── Scripts/
│   └── build.sh
├── LICENSE
└── README.md
```
```
┌─────────────────────────────────────────────────────────────┐
│                        Menu Bar App                         │
│  ┌─────────────────────────────────────────────────────┐    │
│  │          AppDelegate (NSApplicationDelegate)        │    │
│  │  - Push-to-talk (Right Option key via CGEvent)      │    │
│  │  - Audio level monitoring (AudioLevelManager)       │    │
│  │  - Overlay window (AudioWaveformOverlay)            │    │
│  └─────────────────────────────────────────────────────┘    │
│                            │                                │
│  ┌─────────────────────────▼───────────────────────────┐    │
│  │                      AppState                       │    │
│  │               (Observable, @MainActor)              │    │
│  │  - Recording state management                       │    │
│  │  - Transcription orchestration                      │    │
│  └───────────┬─────────────┬─────────────┬─────────────┘    │
│              │             │             │                  │
│  ┌───────────▼───┐ ┌───────▼─────────┐ ┌─▼──────────────┐   │
│  │ AudioRecorder │ │  WhisperService │ │ClipboardManager│   │
│  │(AVAudioEngine)│ │   (WhisperKit)  │ │ (NSPasteboard) │   │
│  └───────────────┘ └───────┬─────────┘ └────────────────┘   │
└────────────────────────────┼────────────────────────────────┘
                             │
                ┌────────────▼────────────┐
                │        WhisperKit       │
                │ (CoreML, Apple Silicon) │
                │  Model: large-v3-turbo  │
                └─────────────────────────┘
```
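The orchestration role the diagram gives AppState (one state machine driving the recorder, the transcriber, and the clipboard) can be sketched as below. The protocol names and method signatures here are illustrative assumptions, not the project's actual API:

```swift
import Foundation

// Illustrative protocols standing in for the real services.
protocol Recorder {
    func start()
    func stopAndReturnAudioURL() -> URL
}
protocol Transcriber {
    func transcribe(_ audio: URL) async throws -> String
}
protocol Paster {
    func paste(_ text: String)
}

// Sketch of AppState's orchestration: hotkey events drive a small
// state machine; the transcription result is handed to the paster.
@MainActor
final class AppStateSketch {
    enum Phase { case idle, recording, transcribing }
    private(set) var phase: Phase = .idle

    private let recorder: Recorder
    private let transcriber: Transcriber
    private let paster: Paster

    init(recorder: Recorder, transcriber: Transcriber, paster: Paster) {
        self.recorder = recorder
        self.transcriber = transcriber
        self.paster = paster
    }

    func hotkeyPressed() {
        guard phase == .idle else { return }   // ignore repeats
        phase = .recording
        recorder.start()
    }

    func hotkeyReleased() async {
        guard phase == .recording else { return }
        phase = .transcribing
        let audio = recorder.stopAndReturnAudioURL()
        if let text = try? await transcriber.transcribe(audio) {
            paster.paste(text)                 // auto-paste at the cursor
        }
        phase = .idle
    }
}
```

Keeping the state machine `@MainActor` matches the diagram's annotation and keeps UI updates (overlay, menu bar status) race-free while the transcription itself runs off the main thread inside the service.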
| Component | Technology |
|---|---|
| UI Framework | SwiftUI |
| App Type | Menu Bar (LSUIElement) |
| Speech Recognition | WhisperKit by Argmax |
| ML Acceleration | CoreML (Apple Silicon) |
| Audio Recording | AVAudioEngine |
| Hotkey Detection | CGEvent (Carbon) |
| Media Control | MediaRemote.framework (private API) |
| Target | macOS 13.0+ |
BetterFasterWhisper uses the large-v3-turbo Whisper model via WhisperKit:
- Size: ~1.5GB
- Performance: Optimized for Apple Silicon via CoreML
- Languages: 100+ languages supported
- Download: Automatic on first launch
The model is downloaded to:
```
~/Library/Application Support/com.betterfasterwhisper/
```
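Loading and using the model through WhisperKit looks roughly like the snippet below. This is based on WhisperKit's documented API, but the exact initializer options and the shape of the result type vary between WhisperKit versions, so treat it as a sketch:

```swift
import WhisperKit

Task {
    // Downloads the model on first use, then loads it via CoreML.
    let pipe = try await WhisperKit(model: "large-v3-turbo")

    // Transcribe a recorded audio file; results carry the decoded text.
    let results = try await pipe.transcribe(audioPath: "/tmp/recording.wav")
    print(results.map(\.text).joined())
}
```

The first call is the slow one: model download plus CoreML compilation. Keeping the `WhisperKit` instance alive for the app's lifetime is what makes subsequent transcriptions fast.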
| Permission | Reason |
|---|---|
| Microphone | Record audio for transcription |
| Accessibility | Simulate keyboard paste (Cmd+V) |
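The Accessibility permission is needed because auto-paste is implemented by putting the text on the pasteboard and synthesizing a Cmd+V keystroke. A minimal sketch of that technique (not the project's actual `ClipboardManager` code):

```swift
import AppKit
import Carbon.HIToolbox

// Put text on the general pasteboard, then post a synthetic Cmd+V
// into the frontmost app. Requires the Accessibility permission.
func pasteText(_ text: String) {
    let pasteboard = NSPasteboard.general
    pasteboard.clearContents()
    pasteboard.setString(text, forType: .string)

    let source = CGEventSource(stateID: .combinedSessionState)
    let vKey = CGKeyCode(kVK_ANSI_V)  // virtual keycode 9
    let keyDown = CGEvent(keyboardEventSource: source, virtualKey: vKey, keyDown: true)
    let keyUp = CGEvent(keyboardEventSource: source, virtualKey: vKey, keyDown: false)
    keyDown?.flags = .maskCommand     // hold Cmd for both events
    keyUp?.flags = .maskCommand
    keyDown?.post(tap: .cghidEventTap)
    keyUp?.post(tap: .cghidEventTap)
}
```

Without the Accessibility grant, the synthetic events are silently dropped by the system, which is why a missing permission shows up as "transcription works but nothing pastes."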
- Make sure Accessibility permission is granted in System Settings > Privacy & Security > Accessibility
- Try restarting the app after granting permission
- Check that the model has finished downloading (menu bar shows status)
- Ensure microphone permission is granted
- First transcription may be slower as the model loads into memory
- Subsequent transcriptions should be much faster
```bash
/usr/bin/log show --predicate 'subsystem == "com.betterfasterwhisper"' --last 1m --info --style compact
```

- Customizable hotkey
- Multiple language quick-switch
- Transcription history
- Custom vocabulary/corrections
- Smaller model options for faster transcription
- Voice activity detection (auto-stop)
- Audio input device selection
- Onboarding experience
Contributions are welcome! Please feel free to submit a Pull Request.
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- OpenAI Whisper - The speech recognition model
- WhisperKit - Swift native Whisper implementation for Apple Silicon
- SuperWhisper - The app we're competing with (great UX, but we think speech-to-text should be free)
This project is not affiliated with OpenAI, Argmax, or SuperWhisper. BetterFasterWhisper is an independent open-source competitor built to provide a free alternative to paid voice-to-text solutions.
Stop paying $8/month for voice-to-text. Use BetterFasterWhisper.