**31 findings resolved across CRITICAL, HIGH, MEDIUM, LOW, and INFO severities.**
### Native MLX Swift — Python No Longer Required for Inference
The biggest change since v1.0: inference now runs entirely in Swift using `mlx-swift-lm`. The Python daemon subprocess has been removed.

**Critical Fixes:**

- **API Keys to Keychain**: All AI backend API keys (OpenAI, Anthropic, Google, AWS, Azure, IBM) migrated from UserDefaults to the macOS Keychain, with automatic migration on first launch
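For context, a Keychain migration of this shape is usually a one-time copy-then-delete. Below is a minimal sketch using the Security framework's generic-password items — the function name, service string, and defaults key are hypothetical, not the app's actual identifiers:

```swift
import Foundation
import Security  // Apple platforms only

// Hypothetical one-time migration for a single API key.
// The defaults key and service name here are illustrative.
func migrateAPIKeyToKeychain(defaultsKey: String, service: String) -> Bool {
    let defaults = UserDefaults.standard
    guard let secret = defaults.string(forKey: defaultsKey) else {
        return false  // nothing stored in UserDefaults; nothing to migrate
    }
    let attributes: [String: Any] = [
        kSecClass as String: kSecClassGenericPassword,
        kSecAttrService as String: service,
        kSecAttrAccount as String: defaultsKey,
        kSecValueData as String: Data(secret.utf8),
    ]
    let status = SecItemAdd(attributes as CFDictionary, nil)
    guard status == errSecSuccess || status == errSecDuplicateItem else {
        return false  // keep the old copy so a later launch can retry
    }
    defaults.removeObject(forKey: defaultsKey)  // wipe the plaintext copy
    return true
}
```

Deleting the UserDefaults value only after `SecItemAdd` succeeds keeps the migration safe to re-run on every launch.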
**High Fixes:**
- **Command Validator Hardened**: Replaced naive `String.contains()` checks with `NSRegularExpression` word-boundary matching to prevent bypass via substrings
- **Python Import Validator**: Regex-based import validation with comment filtering prevents bypass via inline comments
- **Model Hash Verification**: SHA256 verification of downloaded models using CryptoKit
- **Buffered I/O**: 4096-byte chunk reading replaces byte-by-byte daemon communication for a significant performance improvement
- **Task Cancellation**: All infinite `while true` loops replaced with `while !Task.isCancelled` for clean shutdown

**Additional Changes:**

- **Python still used for downloads only** — `huggingface_downloader.py` runs once when you first pull a model
- **2,726 lines of dead code removed** — `EthicalAIGuardian`, `AIBackendStatusMenu`, and all 4 `AIBackendManager` files deleted (none were called by the live app)
- **Code quality cleanup** — removed debug file writes from production, fixed force unwraps, replaced polling sleeps with proper event handling
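To illustrate the **Command Validator Hardened** change above: substring matching fires on unrelated tokens, while a word-boundary regex only matches standalone words. The block list and function names here are made-up examples, not the app's real validator:

```swift
import Foundation

// Naive validator: substring matching also hits "rm" inside "format".
func naiveMatches(_ command: String, blocked: [String]) -> Bool {
    blocked.contains { command.contains($0) }
}

// Hardened validator: \b anchors require the blocked word to stand alone.
func wordBoundaryMatches(_ command: String, blocked: [String]) -> Bool {
    blocked.contains { word in
        let pattern = "\\b\(NSRegularExpression.escapedPattern(for: word))\\b"
        guard let regex = try? NSRegularExpression(pattern: pattern) else { return false }
        let range = NSRange(command.startIndex..., in: command)
        return regex.firstMatch(in: command, options: [], range: range) != nil
    }
}

// naiveMatches("format disk", blocked: ["rm"])        → true ("rm" inside "format")
// wordBoundaryMatches("format disk", blocked: ["rm"]) → false
// wordBoundaryMatches("rm -rf /", blocked: ["rm"])    → true
```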
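The **Task Cancellation** pattern, reduced to its core — the counting body is a stand-in for the real daemon loops:

```swift
import Foundation

// Before: `while true { ... }` ignores cancellation, so shutdown had to
// abandon the task mid-flight. After: the loop checks Task.isCancelled on
// each pass and exits cleanly, letting teardown code after the loop run.
func startWorkLoop(limit: Int) -> Task<Int, Never> {
    Task {
        var iterations = 0
        while !Task.isCancelled && iterations < limit {
            iterations += 1
            await Task.yield()  // yield so a cancel() from outside is observed
        }
        return iterations  // teardown would run here on cancel or completion
    }
}
```

Calling `cancel()` on the returned task flips `Task.isCancelled`; the loop finishes its current pass and returns instead of being abandoned.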
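The **Buffered I/O** change amounts to reading in fixed-size chunks instead of single bytes. A sketch with `FileHandle` (the helper name is illustrative):

```swift
import Foundation

// One readData(ofLength: 4096) call replaces up to 4096 one-byte reads,
// cutting per-message syscall overhead dramatically.
func readToEnd(from handle: FileHandle, chunkSize: Int = 4096) -> Data {
    var buffer = Data()
    while true {
        let chunk = handle.readData(ofLength: chunkSize)
        if chunk.isEmpty { break }  // an empty read signals EOF
        buffer.append(chunk)
    }
    return buffer
}
```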