
[Feature Request] vLLM / High-Performance Inference Engine Support #3

@Ki6an

Description


Hey! Thanks for sharing the paper and the approach.

Using this in production with the current Hugging Face-only inference stack is tricky, since the plain generate loop slows things down considerably. It would be great to be able to apply this technique with vLLM / SGLang instead.

The main challenge is that the per-step logit fusion (a logaddexp over the two models' logits before sampling) does not map cleanly onto vLLM's standard model-runner or speculative-decoding paths; a rough sketch of the fusion step is below.
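For concreteness, here is roughly what I mean by the per-step fusion in a plain Hugging Face decode loop. The model pair, the `weight` mixing term, and the loop itself are placeholders I wrote for illustration, not the repo's actual code:

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
# Placeholder models -- gpt2 and distilgpt2 share a tokenizer/vocab, so the
# logit tensors line up; substitute the actual model pair from the paper.
tok = AutoTokenizer.from_pretrained("gpt2")
model_a = AutoModelForCausalLM.from_pretrained("gpt2").to(device).eval()
model_b = AutoModelForCausalLM.from_pretrained("distilgpt2").to(device).eval()

@torch.no_grad()
def fused_generate(prompt: str, max_new_tokens: int = 32, weight: float = 0.5) -> str:
    ids = tok(prompt, return_tensors="pt").input_ids.to(device)
    for _ in range(max_new_tokens):
        # Both models run on the same prefix every step; this lockstep
        # dependency is the part that is hard to express in vLLM's scheduler.
        log_pa = torch.log_softmax(model_a(ids).logits[:, -1, :], dim=-1)
        log_pb = torch.log_softmax(model_b(ids).logits[:, -1, :], dim=-1)
        # logaddexp of the weighted log-probs == log of the mixture
        # distribution: log(w * p_a + (1 - w) * p_b).
        fused = torch.logaddexp(log_pa + math.log(weight),
                                log_pb + math.log(1.0 - weight))
        # exp(fused) already sums to 1, so we can sample from it directly.
        next_id = torch.multinomial(fused.exp(), num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return tok.decode(ids[0], skip_special_tokens=True)

print(fused_generate("The capital of France is"))
```

The lockstep dependency between the two forward passes, plus the custom sampling step, is what I couldn't find a clean hook for in vLLM.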

Do you have any plans to support vLLM or similar engines, or any recommendations for making this faster? Happy to help contribute if you already have a direction in mind.
