Add asiai — benchmark CLI comparing MLX vs llama.cpp engines#4

Open
druide67 wants to merge 1 commit into antranapp:main from druide67:add-asiai
Conversation

@druide67

What is asiai?

asiai is an open-source CLI that benchmarks MLX engines against llama.cpp on Apple Silicon.

Why it fits this list: asiai directly compares MLX-based engines (LM Studio, mlx-lm, oMLX) against GGUF/llama.cpp (Ollama), demonstrating the performance advantage MLX gains from Metal on Apple Silicon.

Example results (M4 Pro 64GB, Qwen3-Coder-30B):

  • LM Studio MLX: 102 tok/s, 12.4W
  • Ollama llama.cpp: 69.8 tok/s, 15.4W
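
From the two figures above, both a throughput ratio and an energy-efficiency metric (tokens per joule) can be derived. The sketch below is illustrative only, using the numbers quoted in this PR; it is not part of the asiai codebase:

```python
# Derive throughput and energy-efficiency ratios from the benchmark
# figures quoted above (M4 Pro 64GB, Qwen3-Coder-30B).

results = {
    "LM Studio MLX": {"tok_per_s": 102.0, "watts": 15.4 - 3.0},   # 12.4 W
    "Ollama llama.cpp": {"tok_per_s": 69.8, "watts": 15.4},
}
results["LM Studio MLX"]["watts"] = 12.4  # stated directly in the PR

for name, r in results.items():
    # tokens per joule = (tokens/second) / (joules/second)
    tok_per_joule = r["tok_per_s"] / r["watts"]
    print(f"{name}: {tok_per_joule:.2f} tok/J")

speedup = results["LM Studio MLX"]["tok_per_s"] / results["Ollama llama.cpp"]["tok_per_s"]
print(f"MLX throughput advantage: {speedup:.2f}x")
```

On these numbers MLX is not only faster but also more efficient per token, since it delivers higher throughput at lower draw.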

Links: GitHub · PyPI · Docs
