# Encode Forge v0.4.0 - Audio Normalization, Performance Updates, and Bug Fixes
This release consolidates features developed across branches v0.3.2 and v0.3.3 into a stable production release. Version 0.4.0 focuses on audio normalization capabilities, GPU-accelerated AI subtitle generation, UI/UX polish, and significant performance optimizations.
## Highlights

### Audio Normalization
Ensure consistent audio levels across all your media files with the new audio normalization feature.
- Integrated directly into the encoding workflow
- Configurable through settings
- Works with all supported hardware encoders
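Under the hood, loudness normalization of this kind is typically done with FFmpeg's `loudnorm` filter. The sketch below shows how a normalized encode command could be assembled; the function name, target levels, and encoder choice are illustrative assumptions, not EncodeForge's exact internals:

```python
def build_encode_command(src, dst, normalize=False, encoder="libx264"):
    """Assemble an FFmpeg command line, optionally inserting an EBU R128
    loudness-normalization audio filter (FFmpeg's loudnorm)."""
    cmd = ["ffmpeg", "-y", "-i", src, "-c:v", encoder]
    if normalize:
        # Target levels are illustrative defaults: -16 LUFS integrated
        # loudness, -1.5 dBTP true peak, 11 LU loudness range.
        cmd += ["-af", "loudnorm=I=-16:TP=-1.5:LRA=11"]
    cmd += ["-c:a", "aac", dst]
    return cmd

print(build_encode_command("in.mkv", "out.mp4", normalize=True))
```

Because the filter operates on the decoded audio stream, it composes cleanly with any hardware video encoder, which matches the "works with all supported hardware encoders" claim above.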
### GPU-Accelerated AI Subtitles
10x-20x faster subtitle generation with intelligent GPU support!
- **Automatic GPU Detection** - Detects NVIDIA CUDA, AMD ROCm, or Apple Silicon
- **Smart PyTorch Installation** - Downloads the correct PyTorch version for your hardware
- **Fallback Support** - Automatically uses CPU if no GPU is available
- **Universal Compatibility** - Works on Windows (CUDA only), macOS (Intel & Apple Silicon), and Linux
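The detection order described above can be sketched in a few lines of best-effort probing. This is an assumption about the approach, not EncodeForge's actual code; it checks for vendor tools and the platform rather than importing PyTorch, so it works before PyTorch is installed:

```python
import platform
import shutil

def detect_gpu_backend():
    """Best-effort GPU detection in the priority order described above:
    NVIDIA CUDA, then AMD ROCm, then Apple Silicon, else CPU fallback."""
    if shutil.which("nvidia-smi"):
        return "cuda"      # NVIDIA driver tools present
    if shutil.which("rocm-smi") or shutil.which("rocminfo"):
        return "rocm"      # AMD ROCm stack present (Linux only)
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mps"       # Apple Silicon -> Metal Performance Shaders
    return "cpu"           # no GPU found, fall back to CPU
```

The detected backend then determines which PyTorch build to fetch (a CUDA wheel, a ROCm wheel, or the default build, which includes MPS support on macOS).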
### Enhanced User Interface
A more polished and professional interface with improved visual depth and better component scaling.
- Enhanced dark theme with improved contrast
- Minimum component sizes for better readability
- Better layout scaling across different screen sizes
- More consistent styling throughout the application
### Performance Improvements
Faster startup times and reduced memory usage through intelligent lazy loading.
- **Lazy Module Initialization** - Core modules load only when needed
- **Reduced Memory Footprint** - Lower RAM usage, especially on startup
- **Faster Application Launch** - Improved initialization sequence
- **Better Process Handling** - Enhanced Python API performance
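Lazy module initialization of this sort is commonly implemented with a small proxy that defers the real import until first use. A minimal sketch (the `LazyModule` class is illustrative, not EncodeForge's implementation; `json` stands in for a heavy module like the Whisper manager):

```python
import importlib

class LazyModule:
    """Proxy that imports a module only on first attribute access,
    moving its import cost (and memory) past application startup."""

    def __init__(self, name):
        self._name = name
        self._module = None

    def __getattr__(self, attr):
        if self._module is None:
            # First access: perform the real import now.
            self._module = importlib.import_module(self._name)
        return getattr(self._module, attr)

# Hypothetical heavy dependency; nothing is imported at this point.
json = LazyModule("json")
print(json.dumps({"ready": True}))  # first use triggers the import
```

Startup only pays for constructing the cheap proxy objects; each real import happens on the code path that actually needs it.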
## What's New

### Added Features

#### Audio Processing
- **Audio Normalization Flag** (998f7ae)
  - Normalize audio levels during encoding
  - Configurable normalization settings
  - Works alongside all hardware acceleration options
#### AI Subtitle Generation

- **Intelligent GPU Support** (fa8d13a)
  - Automatic detection of NVIDIA, AMD, and Apple Silicon GPUs
  - Platform-specific PyTorch installation (CUDA, ROCm, MPS)
  - Dramatic speed improvements on compatible hardware
- **Enhanced Whisper Processing** (e34463c)
  - Optimized device selection
  - Better resource management
  - Support for macOS M1/M2/M3 GPUs
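The device-selection step above can be sketched as a simple priority rule. The function below takes the availability flags as parameters so the logic is testable without PyTorch installed; in the application those flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()` (the function name itself is a hypothetical stand-in):

```python
def pick_whisper_device(cuda_available, mps_available):
    """Choose a torch device string for Whisper inference, preferring
    the fastest available backend and falling back to CPU."""
    if cuda_available:
        return "cuda"  # NVIDIA (or ROCm builds, which also report CUDA)
    if mps_available:
        return "mps"   # Apple M1/M2/M3 GPUs via Metal
    return "cpu"       # universal fallback
```

The chosen string is then passed straight to Whisper's model-loading call as its `device` argument.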
#### UI/UX Improvements

- **Visual Polish** (1687422)
  - Added depth to UI components
  - Minimum component sizes for consistency
  - Improved component spacing and alignment
  - Better visual hierarchy
#### Performance Enhancements

- **Lazy Loading** (9de93c6, ed3800e)
  - Whisper manager loads on-demand
  - Core Python modules initialize when needed
  - Faster application startup
  - Lower initial memory usage
### Bug Fixes

- **Whisper AI Setup** (eed524c)
  - Fixed initialization dialog issues
  - Improved error handling
  - Better feedback during installation
## Installation

### Download

Choose the installer for your platform:

> **Note:** macOS builds hang on opening. I don't use macOS and have limited access to one.
- **Windows:** `EncodeForge-0.4.0-windows-x64.exe` or `.msi`
- **Linux (Debian/Ubuntu):** `encodeforge_0.4.0-1_amd64.deb`
- **Linux (Fedora/RHEL):** `encodeforge-0.4.0-1.x86_64.rpm`
- **macOS (Intel):** `EncodeForge-0.4.0-macos-x64.dmg`
- **macOS (Apple Silicon):** `EncodeForge-0.4.0-macos-arm64.dmg`
### First-Time Setup
On first launch, EncodeForge will automatically:
- Download and install FFmpeg (~100-150 MB)
- Install required Python libraries (~50 MB)
- Configure everything with a progress window
This one-time setup takes 2-5 minutes depending on your internet connection.
### Optional: AI Subtitle Generation

Install GPU-accelerated Whisper AI later through:

**Tools → Setup AI Subtitles**
The installer will automatically detect your GPU and download the appropriate PyTorch version!
## Known Issues

- macOS builds hang on opening (see the note under Download).

Please report any other issues at: https://github.com/SirStig/EncodeForge/issues