From 8ae753683a418034bfe0783f52c56392ba74ed8d Mon Sep 17 00:00:00 2001
From: Jean-Marc
Date: Sun, 29 Mar 2026 00:26:04 +0100
Subject: [PATCH] =?UTF-8?q?Add=20asiai=20=E2=80=94=20benchmark=20CLI=20com?=
 =?UTF-8?q?paring=20MLX=20vs=20llama.cpp=20engines?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/README.md b/README.md
index 43af4b2..fb518ec 100644
--- a/README.md
+++ b/README.md
@@ -70,3 +70,4 @@ An awesome list dedicated to the [MLX library](https://github.com/ml-explore/mlx
 - [ChatMLX](https://github.com/maiqingqiang/ChatMLX): ChatMLX is a large model real-time conversation app implemented using MLX
 - [Ph3iOSOnDeviceChatApp](https://inkysquid4.gumroad.com/l/lghejp): Source code to run Microsoft's Phi 3 Min 4K model completely on device
 - [fullmoon-ios](https://github.com/mainframecomputer/fullmoon-ios): fullmoon is an iOS app to chat with local large language models that’s optimized for Apple silicon and works on iPhone, iPad, and Mac. your chat history is saved locally, and you can customize the appearance of the app.
+- [asiai](https://github.com/druide67/asiai): Multi-engine LLM benchmark & monitoring CLI for Apple Silicon. Compare MLX engines (LM Studio, mlx-lm, oMLX) against llama.cpp (Ollama) — tok/s, TTFT, power efficiency. Web dashboard and MCP server included.