
Add tensor IPC transfer mechanism for multimodal data#32104

Open
brandonpelfrey wants to merge 48 commits into vllm-project:main from brandonpelfrey:tensor-ipc

Conversation

@brandonpelfrey commented Jan 11, 2026

Introduce Multimodal Content Tensor IPC/SHMEM Data Path

Following on from a request to break down the RFC/PR in #31925, this PR introduces an IPC/SHMEM pathway for sending multimodal content from the API Server to CoreEngine processes via multiprocessing queues. Part of the intention of this change is to shrink the original PR and split out easier-to-review components that are required for the complete solution.

Note that this pathway is only used when the multimodal processing cache is disabled: the cache mechanism replaces tensors with integer identifiers, so the tensors never reach this new multiprocessing queue. While this is a known limitation, the pathway is still useful in many single-prompt workloads, e.g. video captioning.

Purpose

In the above-mentioned RFC/PR, we demonstrated a method for enabling multi-GPU scaling in video-decode-heavy workloads. This requires a means of passing hardware video decode results (sitting in VRAM in the API Server process) to the CoreEngine process(es). When used with CUDA-device tensors, this IPC/SHMEM mechanism provides fast data transfer and avoids any GPU->CPU->GPU copies.
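As a rough illustration of the underlying mechanism (not vLLM's actual API; the function names here are invented for the sketch), torch.multiprocessing.Queue transfers tensor handles rather than tensor bytes -- a cudaIpcMemHandle for CUDA tensors, a shared-memory segment for CPU tensors:

```python
import torch
import torch.multiprocessing as mp

def consumer(q: mp.Queue, out_q: mp.Queue) -> None:
    # The child receives a handle to the same underlying storage;
    # no copy of the tensor data is made during transport.
    t = q.get()
    out_q.put(float(t.sum()))

if __name__ == "__main__":
    ctx = mp.get_context("spawn")
    q, out_q = ctx.Queue(), ctx.Queue()
    p = ctx.Process(target=consumer, args=(q, out_q))
    p.start()
    t = torch.ones(4, 4)
    if torch.cuda.is_available():
        t = t.cuda()       # CUDA tensors travel as CUDA IPC handles
    else:
        t.share_memory_()  # CPU tensors travel via shared memory
    q.put(t)
    print(out_q.get())     # -> 16.0
    p.join()
```

This is only a sketch of the transport primitive; the PR layers request routing, buffering, and cleanup on top of it.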

Test Plan

For testing, I am relying on existing CI coverage. Functional testing includes both a vllm serve + vllm bench combination (see below) and using this PR together with the above-mentioned PR to demonstrate that it also works in the GPU zero-copy case.

Serve command and Bench Commands

vllm serve nvidia/cosmos-reason1-7b \
    --limit-mm-per-prompt '{"video": 1}' \
    --allowed-local-media-path / \
    --enable-log-requests --disable-log-stats \
    --mm-processor-cache-gb 0 \
    --trust-remote-code \
    --tensor-parallel-size 1 \
    --max-model-len 65536 \
    --gpu-memory-utilization 0.6 \
    --no-enforce-eager \
    --max-num-seqs 64 \
    --no-enable-prefix-caching \
    --api-server-count 3 \
    --media-io-kwargs '{"video":{"num_frames":10}}' \
    --mm-processor-kwargs '{"size":{"shortest_edge":100352,"longest_edge":151200}}' \
    --maximum-concurrent-videos 140

vllm bench serve \
    --endpoint /v1/chat/completions --backend openai-chat \
    --model nvidia/cosmos-reason1-7b \
    --dataset-name sharegpt \
    --dataset-path '$DATASET_PATH' \
    --save-result \
    --save-detailed \
    --disable-shuffle \
    --num-warmups 20 \
    --num-prompts 1000 --max-concurrency 400 \
    --sharegpt-output-len 128

Test Results

CPU: AMD EPYC 9124 16-Core Processor
GPU: H100
Memory: 512GB
uname -r: 6.14.0-37-generic

Without IPC Tensor Datapath enabled

============ Serving Benchmark Result ============
Successful requests:                     1000
Failed requests:                         0
Maximum request concurrency:             400
Benchmark duration (s):                  103.91
Total input tokens:                      16003
Total generated tokens:                  33806
Request throughput (req/s):              9.62
Output token throughput (tok/s):         325.35
Peak output token throughput (tok/s):    1636.00
Peak concurrent requests:                454.00
Total token throughput (tok/s):          479.36
---------------Time to First Token----------------
Mean TTFT (ms):                          34909.64
Median TTFT (ms):                        31583.43
P99 TTFT (ms):                           75718.16
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          86.46
Median TPOT (ms):                        26.23
P99 TPOT (ms):                           638.71
---------------Inter-token Latency----------------
Mean ITL (ms):                           224.89
Median ITL (ms):                         77.25
P99 ITL (ms):                            4442.88
==================================================

With IPC Tensor Datapath enabled

============ Serving Benchmark Result ============
Successful requests:                     1000
Failed requests:                         0
Maximum request concurrency:             400
Benchmark duration (s):                  104.25
Total input tokens:                      16003
Total generated tokens:                  33468
Request throughput (req/s):              9.59
Output token throughput (tok/s):         321.05
Peak output token throughput (tok/s):    2624.00
Peak concurrent requests:                464.00
Total token throughput (tok/s):          474.56
---------------Time to First Token----------------
Mean TTFT (ms):                          34538.27
Median TTFT (ms):                        29643.85
P99 TTFT (ms):                           76100.64
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          120.09
Median TPOT (ms):                        75.44
P99 TPOT (ms):                           856.38
---------------Inter-token Latency----------------
Mean ITL (ms):                           255.97
Median ITL (ms):                         77.76
P99 ITL (ms):                            5233.55
==================================================

After multiple runs, it appears that the performance is approximately identical (within noise).


Essential Elements of an Effective PR Description Checklist
  • [x] The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • [x] The test plan, such as providing test command.
  • [x] The test results, such as pasting the results comparison before and after, or e2e results
  • [x] (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

Note

Implements zero-copy IPC for multimodal tensors and wires it through engine/client startup with configs and tests.

  • Introduces TensorIpcData/TensorIpcHandle and updates MsgpackEncoder/MsgpackDecoder to send/receive CUDA and CPU tensors via torch.multiprocessing.Queue (per-engine queues), routed by set_target_engine; falls back to standard serialization when disabled/unavailable
  • Plumbs tensor queues through engine lifecycle: created per DP engine in vllm.v1.engine.utils, included in handshake metadata (index only), passed to EngineCoreProc/DPEngineCoreProc, and used by input decoders; CoreEngineClient configures encoder with queues and IPC setting
  • Adds configuration and flags: MultiModalConfig gains max_concurrent_videos and multimodal_tensor_ipc; ModelConfig/arg parsing expose --maximum-concurrent-videos and --enable/--disable-multimodal-tensor-ipc; new env VLLM_MULTIMODAL_TENSOR_IPC (default True)
  • Starts API servers with shared tensor_queues via APIServerProcessManager; CLI serve passes queues
  • Adds comprehensive tests in tests/v1/test_tensor_ipc_queue.py for CUDA/CPU IPC, multiple producers, buffer management, and IPC disablement
  • Minor fixes: base64-encode image tensors on CPU; only pin memory for CPU tensors during concatenation

Written by Cursor Bugbot for commit d527841933b4dcd62a95d7e1ab58455f9b0cc88f. This will update automatically on new commits. Configure here.
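For readers unfamiliar with the handle-based scheme summarized above, here is a hypothetical sketch of such a placeholder. The field names follow those quoted in the review discussion (tensor_id/shape/dtype/device), but this is an illustration, not the PR's actual definition:

```python
from dataclasses import dataclass

# Hypothetical placeholder that stands in for a tensor inside the
# msgpack payload; the real tensor travels separately over the IPC
# queue. This is a sketch, not the PR's exact class.
@dataclass
class TensorIpcHandle:
    tensor_id: str  # key the receiver uses to look up the queued tensor
    shape: tuple    # original tensor shape, e.g. (3, 224, 224)
    dtype: str      # original dtype, e.g. "torch.float16"
    device: str     # "cpu" or "cuda:N"

handle = TensorIpcHandle("req-0/t0", (2, 3), "torch.float32", "cuda:0")
print(handle.tensor_id)  # -> req-0/t0
```

The decoder side would resolve tensor_id against its queue-fed buffer and return the real tensor, falling back to standard deserialization when IPC is disabled.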



@gemini-code-assist bot left a comment

Code Review

This pull request introduces a significant feature: an IPC/SHMEM pathway for multimodal tensors to improve performance in multi-GPU setups. The changes are extensive, touching configuration, argument parsing, engine core logic, and serialization utilities. The addition of comprehensive tests for the new IPC queue functionality is commendable. I've identified a critical issue in the engine core logic that could break data parallelism for non-MoE models, along with a couple of important bug fixes for handling tensors on CUDA devices in multimodal data processing.


mergify bot commented Jan 11, 2026

Hi @brandonpelfrey, the pre-commit checks have failed. Please run:

uv pip install pre-commit
pre-commit install
pre-commit run --all-files

Then, commit the changes and push to your branch.

For future commits, pre-commit will run automatically on changed files before each commit.

Tip

Is mypy or markdownlint failing?
mypy and markdownlint are run differently in CI. If the failure is related to either of these checks, please use the following commands to run them locally:
# For mypy (substitute "3.10" with the failing version if needed)
pre-commit run --hook-stage manual mypy-3.10
# For markdownlint
pre-commit run --hook-stage manual markdownlint


@DarkLight1337
Member

Just to be sure, the benchmarks in the PR description are run without GPU preprocessing, right?

@brandonpelfrey
Author

Just to be sure , the benchmarks in the PR description are run without GPU preprocessing right?

Correct. There is no GPU preprocessing on the API Server. Note: I'm currently resolving some of the bot-identified issues (formatting, etc.).

@brandonpelfrey
Author

Force-pushed purely to resolve DCO.


mergify bot commented Feb 24, 2026

Documentation preview: https://vllm--32104.org.readthedocs.build/en/32104/

@mergify mergify bot added the documentation, ci/build, deepseek, llama, new-model, performance, qwen, gpt-oss, nvidia, and rocm labels on Feb 24, 2026
@mergify mergify bot added the cpu label on Feb 24, 2026
@brandonpelfrey
Author

Apologies for pinging many reviewers. I made a mistake in my merge from main; I will resolve it and then request the correct reviewers again.

Signed-off-by: Brandon Pelfrey <bpelfrey@nvidia.com>

mergify bot commented Feb 25, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @brandonpelfrey.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Signed-off-by: Brandon Pelfrey <bpelfrey@nvidia.com>
@brandonpelfrey
Author

@njhill I've addressed your concerns. Please let me know if I've missed or misunderstood anything. As mentioned above, a mistake added a number of other reviewers; they could/should be removed.


mergify bot commented Feb 28, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @brandonpelfrey.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Signed-off-by: Brandon Pelfrey <bpelfrey@nvidia.com>
@njhill (Member) left a comment

@brandonpelfrey apologies again for taking so long to re-review. Again, it's quite a large PR with nontrivial changes to core parts of the code, so I needed to dedicate some time to it.

Generally it would be appreciated if you could spend more time on self-review.

And it would still be good to understand how we are thinking about reconciling this with the existing mm shm-based tensor propagation (i.e. from #20452).

Member left a comment

I still feel that this can be better decoupled. For example, the encoder/decoder should not need to know about the request id, tensor id, etc.; that can all be handled by the TensorIpcSender/TensorIpcReceiver.

The handling in these should be very similar to how aux_buffers are already handled, just that this list of buffers is consumed/provided by the IPC sender/receivers (possibly via callbacks), and all their logic can live in the other file.

This is also similar to the interface of pickle's PEP 574: https://peps.python.org/pep-0574/
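For reference, the PEP 574 pattern mentioned here separates the pickle stream from the raw data buffers, which can then travel over any side channel (such as an IPC queue) instead of being copied into the stream:

```python
import pickle

payload = bytearray(b"x" * 1024)  # stands in for tensor storage
buffers = []

# With protocol 5 and a buffer_callback, the raw bytes are collected
# out-of-band; the pickle stream holds only a reference.
data = pickle.dumps(
    pickle.PickleBuffer(payload),
    protocol=5,
    buffer_callback=buffers.append,
)

# The receiver supplies the buffers through its own transport.
restored = pickle.loads(data, buffers=buffers)
print(bytes(memoryview(restored)) == bytes(payload))  # -> True
print(len(data) < len(payload))                       # -> True
```

The analogy to the review suggestion: the encoder/decoder would emit and consume a list of opaque buffers, while the IPC sender/receiver owns how those buffers actually move between processes.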

Comment on lines +441 to +454
if isinstance(arr, TensorIpcHandle):
return self._decode_ipc_queue_tensor(arr)
# Check if this is a dict that represents a TensorIpcHandle
# (msgspec serializes dataclasses as dicts without type info)
if (
isinstance(arr, dict)
and "tensor_id" in arr
and "shape" in arr
and "dtype" in arr
and "device" in arr
):
# Convert dict to TensorIpcHandle and decode it
handle = TensorIpcHandle(**arr)
return self._decode_ipc_queue_tensor(handle)
Member

I don't think these will ever happen. TensorIpcHandle will get serialized by msgspec as a list.

Author

Action item: I'll add a unit test + coverage to ensure the logic is actually exercised.

Comment on lines +526 to +541
if isinstance(obj, TensorIpcHandle):
return self._decode_ipc_queue_tensor(obj)
# Check if this is a dict that represents a TensorIpcHandle
# (msgspec serializes dataclasses as dicts without type info
# in nested structures)
if (
isinstance(obj, dict)
and "tensor_id" in obj
and "shape" in obj
and "dtype" in obj
and "device" in obj
):
# Convert dict to TensorIpcHandle and decode it
# Handle both new format (with request_id) and old format (without)
handle = TensorIpcHandle(**obj)
return self._decode_ipc_queue_tensor(handle)
Member

Same comment as above, I don't think this will actually happen. But it doesn't look like you cover the list case here?

Comment on lines +554 to +561
def cleanup_request_tensors(self, request_id: str) -> int:
"""Remove all orphaned tensors associated with a request.

Pass-through to the TensorIpcReceiver. Returns 0 if no receiver.
"""
if self.tensor_ipc_receiver is None:
return 0
return self.tensor_ipc_receiver.cleanup_request_tensors(request_id)
Member

this should be handled outside of this class (per other comment this class shouldn't need to know about request_id)
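One way the suggested decoupling could look -- purely illustrative; the class and method names here are assumptions, not the PR's code:

```python
# Sketch: the msgpack decoder stays unaware of request ids; the
# engine talks to the receiver directly at abort time.
class TensorIpcReceiver:
    def __init__(self) -> None:
        self._by_request: dict[str, list] = {}

    def cleanup_request_tensors(self, request_id: str) -> int:
        # Drop any tensors still buffered for this request.
        return len(self._by_request.pop(request_id, []))

class EngineCore:
    def __init__(self, receiver: TensorIpcReceiver) -> None:
        self.receiver = receiver

    def abort_requests(self, request_ids) -> None:
        # Request-scoped cleanup lives at the engine level,
        # not inside the decoder.
        for rid in request_ids:
            self.receiver.cleanup_request_tensors(rid)

rx = TensorIpcReceiver()
rx._by_request["r1"] = ["t0", "t1"]
EngineCore(rx).abort_requests(["r1"])
print(rx._by_request)  # -> {}
```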


mergify bot commented Mar 7, 2026

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @brandonpelfrey.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Signed-off-by: Brandon Pelfrey <bpelfrey@nvidia.com>

mergify bot commented Mar 9, 2026

Hi @brandonpelfrey, the pre-commit checks have failed. Please run:

uv pip install pre-commit
pre-commit install
pre-commit run --all-files

Then, commit the changes and push to your branch.

For future commits, pre-commit will run automatically on changed files before each commit.

Tip

Is mypy failing?
mypy is run differently in CI. If the failure is related to this check, please use the following command to run it locally:
# For mypy (substitute "3.10" with the failing version if needed)
pre-commit run --hook-stage manual mypy-3.10

Signed-off-by: Brandon Pelfrey <bpelfrey@nvidia.com>

Signed-off-by: Brandon Pelfrey <bpelfrey@nvidia.com>

Labels

frontend, multi-modality (#4194), performance, v1


5 participants