Your current environment
The output of `python collect_env.py` was not provided.
🐛 Describe the bug
I ran some evals for DeepSeek V3.2 comparing the FlashMLA and FlashInfer attention backends at both bf16 and fp8 KV-cache precision, and found that FlashMLA has noticeably worse accuracy than FlashInfer, particularly at fp8. Note that FlashMLA fp8 uses the fp8_ds_mla KV-cache format whereas FlashInfer uses the standard fp8 format, which could account for part of the gap there. However, even at bf16, FlashMLA is no better than FlashInfer across the board, which may indicate a bug in the FlashMLA integration.
| KV Cache | Backend | Benchmark | pass@1 (avg-32) | majority@32 | pass@32 |
|---|---|---|---|---|---|
| bf16 | FlashMLA | AIME25 | 88.33% | 93.33% | 93.33% |
| bf16 | FlashMLA | GPQA-diamond | 82.89% | 85.86% | 96.46% |
| bf16 | FlashInfer | AIME25 | 90.83% | 93.33% | 100.00% |
| bf16 | FlashInfer | GPQA-diamond | 83.14% | 87.63% | 97.47% |
| fp8_ds_mla | FlashMLA | AIME25 | 81.98% | 88.33% | 90.00% |
| fp8_ds_mla | FlashMLA | GPQA-diamond | 78.55% | 81.82% | 94.95% |
| fp8 | FlashInfer | AIME25 | 89.17% | 93.33% | 100.00% |
| fp8 | FlashInfer | GPQA-diamond | 83.55% | 86.36% | 96.46% |
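
For reference, a minimal sketch of how each configuration can be launched with vLLM's offline API. The model id, backend names, parallelism, and sampling values below are illustrative assumptions, not the exact eval harness used for the table above:

```python
import os

# Select the attention backend under test; switch to "FLASHINFER" for the
# comparison runs. Backend names and the fp8_ds_mla kv-cache dtype are
# assumptions and may differ across vLLM versions.
os.environ["VLLM_ATTENTION_BACKEND"] = "FLASHMLA"

from vllm import LLM, SamplingParams

llm = LLM(
    model="deepseek-ai/DeepSeek-V3.2-Exp",  # placeholder model id
    kv_cache_dtype="fp8_ds_mla",            # "auto" for the bf16 rows above
    tensor_parallel_size=8,                 # adjust to your hardware
    trust_remote_code=True,
)

# Illustrative sampling settings for pass@k-style evals: n=32 samples per
# prompt matches the avg-32 / majority@32 / pass@32 columns.
params = SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192, n=32)
outputs = llm.generate(["<AIME25 or GPQA-diamond prompt>"], params)
```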