
Conversation

@stmatengss
Collaborator

Description

Type of Change

  • Types
    • Bug fix
    • New feature
      • Transfer Engine
      • Mooncake Store
      • Mooncake EP
      • Integration
      • P2P Store
      • Python Wheel
    • Breaking change
    • CI/CD
    • Documentation update
    • Other

How Has This Been Tested?

Checklist

  • I have performed a self-review of my own code.
  • I have updated the documentation.
  • I have added tests to prove my changes are effective.

Signed-off-by: Teng Ma <sima.mt@alibaba-inc.com>
@gemini-code-assist
Contributor

Note

Gemini is unable to generate a summary for this pull request because the file types involved are not currently supported.

@stmatengss stmatengss changed the title [CI] support cuda 13.0.2 [CI] chore: support cuda 13.0.2 Dec 30, 2025
stmatengss and others added 3 commits December 30, 2025 20:38
Signed-off-by: Teng Ma <sima.mt@alibaba-inc.com>
@stmatengss
Collaborator Author

@UNIDY2002 I can provide some logs from the failing test cases.

@stmatengss
Collaborator Author

  File "/sgl-workspace/sglang/python/sglang/srt/mem_cache/storage/mooncake_store/mooncake_store.py", line 160, in __init__
    from mooncake.store import MooncakeDistributedStore
ImportError: libcudart.so.13: cannot open shared object file: No such file or directory

Wheels built with CUDA 13.0 may not be compatible with lower CUDA versions.
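The root cause is that the extension module is dynamically linked against libcudart.so.13, which a CUDA 12 host does not provide. A pre-flight probe along these lines could turn the bare ImportError into an actionable message. This is a minimal sketch, assuming Linux and a dynamically linked CUDA runtime; the helper is hypothetical, not part of the mooncake package:

```python
# Sketch only: probe for the CUDA runtime major version the wheel was
# built against before importing the extension module.
import ctypes


def has_cuda_runtime(major: int) -> bool:
    """Return True if libcudart.so.<major> can be loaded on this host."""
    try:
        ctypes.CDLL(f"libcudart.so.{major}")
        return True
    except OSError:
        return False


if not has_cuda_runtime(13):
    raise ImportError(
        "this wheel was built against CUDA 13 (libcudart.so.13 not found); "
        "install a CUDA 12 build or upgrade the CUDA runtime"
    )

from mooncake.store import MooncakeDistributedStore  # noqa: E402
```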

@UNIDY2002
Collaborator

Do you think it is a good idea to publish separate packages, such as mooncake-transfer-engine-cu12 and mooncake-transfer-engine-cu13?

@stmatengss
Collaborator Author

> Do you think it is a good idea to publish separate packages, such as mooncake-transfer-engine-cu12 and mooncake-transfer-engine-cu13?

What's your opinion, @ShangmingCai @ykwd? It seems cu12 and cu13 are two significantly different environments.

@ShangmingCai
Collaborator

ShangmingCai commented Dec 31, 2025

I think maybe we can build mooncake-transfer-engine-cu13 for CUDA 13, and let the original one serve CUDA 12 as it is doing now.
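One way to do that without forking the build scripts is to parameterize the distribution name at build time. Here is a rough setup.py sketch of the idea; the MOONCAKE_CUDA_SUFFIX variable is a hypothetical build-time switch, not an existing flag in this repo:

```python
# setup.py sketch: publish mooncake-transfer-engine (CUDA 12, the default)
# and mooncake-transfer-engine-cu13 from the same source tree.
import os

from setuptools import setup

# Hypothetical switch: leave unset for the default CUDA 12 wheel,
# set MOONCAKE_CUDA_SUFFIX=-cu13 for the CUDA 13 variant.
cuda_suffix = os.environ.get("MOONCAKE_CUDA_SUFFIX", "")

setup(
    name=f"mooncake-transfer-engine{cuda_suffix}",
)
```

CI would then build twice, once per suffix, and upload both wheels as independent PyPI packages.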

Comment on lines -98 to +99
```diff
- pip install torch==$version
+ pip install torch==$version --index-url https://download.pytorch.org/whl/cu130
```
Collaborator

Can we learn from how PyTorch handles this?

Collaborator

PyTorch essentially maintains its own package index for the various CUDA versions, rather than depending on PyPI.
