Fix ultra-long context prefilling in Qwen3 MoE GGUF models #1624

Closed · guoqingbao wants to merge 0 commits into EricLBuehler:master
Conversation
Code Metrics Report
===============================================================================
 Language            Files        Lines         Code     Comments       Blanks
===============================================================================
 C Header                3           63           54            0            9
 CSS                     1          473          408           14           51
 Dockerfile              1           39           22            8            9
 HTML                    1           78           64            5            9
 JavaScript              7         1397         1068          180          149
 JSON                   22          410          407            0            3
 Makefile                1            6            5            0            1
 Python                102         5660         4631          298          731
 Shell                   1           63           26           18           19
 Plain Text              3         3723            0         2413         1310
 TOML                   23          877          809           11           57
 YAML                    2           21           19            2            0
-------------------------------------------------------------------------------
 Jupyter Notebooks       3            0            0            0            0
 |- Markdown             2           77           32           31           14
 |- Python               2          205          178            1           26
 (Total)                           282          210           32           40
-------------------------------------------------------------------------------
 Markdown               74         6981            0         5227         1754
 |- BASH                19          299          260           24           15
 |- JSON                11          523          523            0            0
 |- Python              14          521          434           35           52
 |- Rust                32         1320         1108           36          176
 |- TOML                 2           75           63            0           12
 (Total)                          9719         2388         5322         2009
-------------------------------------------------------------------------------
 Rust                  422       156830       138527         3993        14310
 |- Markdown           200         4348          285         3498          565
 (Total)                        161178       138812         7491        14875
===============================================================================
 Total                 666       176621       146040        12169        18412
===============================================================================
Owner

@guoqingbao thanks, I just merged EricLBuehler/candle#94.
Contributor · Author

The bug in …
Contributor · Author

This PR should address problems with long-context inference (prompts longer than 65535 tokens, or 65535/top_k tokens for MoE models) for all GGUF models, since they all quantize their inputs to q8_1 via the quantize_q8_1 function.
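For context on where that threshold comes from, here is a minimal sketch of the suspected failure mode, assuming (per the linked candle-vllm issues) that the quantize_q8_1 CUDA kernel mapped one row of activations to one grid index and that CUDA hard-caps gridDim.y/gridDim.z at 65535. The constant and helper names below are illustrative, not candle's actual launch code:

```rust
// Illustrative only: neither the constant nor the helper exists in candle;
// they model the CUDA launch limit that makes prefills past 65535 rows
// (tokens) misbehave.
const CUDA_MAX_GRID_DIM_YZ: usize = 65_535; // hard CUDA limit on gridDim.y/z

/// Would a launch mapping one activation row to one grid-y index exceed
/// the CUDA limit?
fn q8_1_launch_overflows(num_tokens: usize, moe_top_k: usize) -> bool {
    // In an MoE layer each token's hidden state is routed to `moe_top_k`
    // experts and quantized once per expert, multiplying the row count.
    let rows_to_quantize = num_tokens * moe_top_k;
    rows_to_quantize > CUDA_MAX_GRID_DIM_YZ
}

fn main() {
    // Dense GGUF model: the limit is hit just past 65535 prompt tokens.
    assert!(!q8_1_launch_overflows(65_535, 1));
    assert!(q8_1_launch_overflows(65_536, 1));
    // MoE with top_k = 8: the same limit is hit past 65535 / 8 tokens.
    assert!(q8_1_launch_overflows(8_192, 8));
}
```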
This PR fixes a long-context prefill issue in GGUF models (including Qwen3-MoE) by updating the underlying Candle library.
A full explanation of why this change is necessary can be found here:
EricLBuehler/candle-vllm#256
EricLBuehler/candle-vllm#255
The corresponding Candle changes are available here:
https://github.com/EricLBuehler/candle/pull/94/files
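For anyone wanting to verify the fix locally, below is a hedged repro sketch against candle's public quantized API, which exercises the CUDA path that quantizes activations to q8_1 internally. The weight shape, the Q4K weight type, and the 70000-token prompt length are arbitrary illustrative choices, not taken from this PR:

```rust
// Repro sketch (illustrative shapes): a quantized matmul over a prefill
// longer than 65535 rows. On a candle revision without
// EricLBuehler/candle#94, the linked issues report corrupted output on
// this path; with the fix, the output should be numerically sane.
use candle_core::quantized::{GgmlDType, QMatMul, QTensor};
use candle_core::{Device, Module, Result, Tensor};

fn main() -> Result<()> {
    let dev = Device::new_cuda(0)?;
    let hidden = 1024;

    // A quantized weight, standing in for a GGUF model's weight matrix.
    let w = Tensor::randn(0f32, 1f32, (hidden, hidden), &dev)?;
    let qw = QTensor::quantize(&w, GgmlDType::Q4K)?;
    let mm = QMatMul::from_qtensor(qw)?;

    // A prefill activation with more than 65_535 rows (tokens); on CUDA,
    // candle quantizes this input to q8_1 before the quantized matmul.
    let xs = Tensor::randn(0f32, 1f32, (70_000, hidden), &dev)?;
    let ys = mm.forward(&xs)?;
    println!("output shape: {:?}", ys.shape());
    Ok(())
}
```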