This guide provides comprehensive instructions for installing TorchSparse with full cross-platform support, including Windows and Linux compatibility fixes, extensive version support, and automated dependency resolution.
For most users, we recommend using the pre-built wheel packages:
**Windows:**

```shell
# Python 3.10 with CUDA 12.1 (most common)
pip install https://github.com/Deathdadev/torchsparse/releases/download/v2.1.0-cross-platform/torchsparse-2.1.0-cp310-cp310-win_amd64.whl

# Python 3.11 with CUDA 12.4 (latest)
pip install https://github.com/Deathdadev/torchsparse/releases/download/v2.1.0-cross-platform/torchsparse-2.1.0-cp311-cp311-win_amd64.whl
```

**Linux:**

```shell
# Python 3.10 with CUDA 12.1 (most common)
pip install https://github.com/Deathdadev/torchsparse/releases/download/v2.1.0-cross-platform/torchsparse-2.1.0-cp310-cp310-linux_x86_64.whl

# Python 3.11 with CUDA 12.4 (latest)
pip install https://github.com/Deathdadev/torchsparse/releases/download/v2.1.0-cross-platform/torchsparse-2.1.0-cp311-cp311-linux_x86_64.whl
```

Requirements:

- Python: 3.8, 3.9, 3.10, 3.11, or 3.12
- PyTorch: 1.9.0 through 2.5.0, built with CUDA support
- CUDA Toolkit: 11.1, 11.3, 11.6, 11.7, 11.8, 12.0, 12.1, or 12.4
- Platform-specific tools:
- Windows: Microsoft Visual Studio 2019 or 2022 with C++ build tools
- Linux: GCC 7.0+, build-essential, python3-dev
- Git: For cloning repositories
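The interpreter requirement above can be gated up front before anything else is attempted; a minimal sketch (`check_python` is an illustrative helper, not part of TorchSparse):

```python
import sys

# Supported interpreter series, per the requirements list above
SUPPORTED = {(3, 8), (3, 9), (3, 10), (3, 11), (3, 12)}

def check_python(version=None):
    """Return True if the (major, minor) pair is a supported Python series."""
    version = version or sys.version_info[:2]
    return (version[0], version[1]) in SUPPORTED
```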
Verify your environment before building:

```shell
# Check Python version
python --version

# Check PyTorch and CUDA
python -c "import torch; print(f'PyTorch: {torch.__version__}, CUDA: {torch.version.cuda}')"

# Check CUDA toolkit
nvcc --version
```

On Windows, install the sparsehash dependency using one of the following methods.

**Option 1: vcpkg**

```powershell
# Install vcpkg
git clone https://github.com/Microsoft/vcpkg.git C:\vcpkg
cd C:\vcpkg
.\bootstrap-vcpkg.bat

# Install sparsehash
.\vcpkg install sparsehash:x64-windows

# Set environment variable
$env:VCPKG_ROOT = "C:\vcpkg"
```

**Option 2: Manual download**

```powershell
# Download and extract sparsehash
Invoke-WebRequest -Uri "https://github.com/sparsehash/sparsehash/archive/refs/tags/sparsehash-2.0.4.zip" -OutFile "sparsehash.zip"
Expand-Archive -Path "sparsehash.zip" -DestinationPath "C:\"
Rename-Item "C:\sparsehash-sparsehash-2.0.4" "C:\sparsehash"

# Set include path
$env:INCLUDE = "$env:INCLUDE;C:\sparsehash\src"
```

**Option 3: Build with CMake**

```powershell
git clone https://github.com/sparsehash/sparsehash.git C:\sparsehash-src
cd C:\sparsehash-src
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=C:\sparsehash -A x64
cmake --build . --config Release
cmake --install .
```

Choose the appropriate wheel for your Python version:
| Python Version | Download Link |
|---|---|
| Python 3.8 | torchsparse-2.1.0-cp38-cp38-win_amd64.whl |
| Python 3.9 | torchsparse-2.1.0-cp39-cp39-win_amd64.whl |
| Python 3.10 | torchsparse-2.1.0-cp310-cp310-win_amd64.whl |
| Python 3.11 | torchsparse-2.1.0-cp311-cp311-win_amd64.whl |
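Picking the right wheel can be scripted; a small sketch (the filenames come from the table above and the release URL from the commands earlier in this guide; `wheel_url` is an illustrative helper, not part of TorchSparse):

```python
import sys

# Map (major, minor) -> wheel filename, per the table above
WHEELS = {
    (3, 8): "torchsparse-2.1.0-cp38-cp38-win_amd64.whl",
    (3, 9): "torchsparse-2.1.0-cp39-cp39-win_amd64.whl",
    (3, 10): "torchsparse-2.1.0-cp310-cp310-win_amd64.whl",
    (3, 11): "torchsparse-2.1.0-cp311-cp311-win_amd64.whl",
}

# Release URL used earlier in this guide
BASE_URL = ("https://github.com/Deathdadev/torchsparse/releases/download/"
            "v2.1.0-cross-platform/")

def wheel_url(version=None):
    """Return the download URL for the wheel matching a (major, minor) pair."""
    version = version or sys.version_info[:2]
    if version not in WHEELS:
        raise RuntimeError(f"No pre-built wheel for Python {version[0]}.{version[1]}")
    return BASE_URL + WHEELS[version]
```

The returned URL can be passed straight to `pip install`.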
```shell
pip install [wheel_url_from_table_above]
```

Alternatively, build from source:

```shell
# Clone the Windows-compatible repository
git clone https://github.com/Deathdadev/torchsparse.git
cd torchsparse

# Install with build isolation disabled for better control
pip install . --no-build-isolation --verbose
```

Or install a specific commit directly from Git:

```shell
pip install git+https://github.com/Deathdadev/torchsparse.git@f1787ee --no-build-isolation
```

Test your installation:
```python
import torch
import torchsparse

# Basic functionality test
print(f"TorchSparse version: {torchsparse.__version__}")
print(f"CUDA available: {torch.cuda.is_available()}")

# Create a simple sparse tensor
coords = torch.randint(0, 10, (100, 4))
feats = torch.randn(100, 16)
sparse_tensor = torchsparse.SparseTensor(coords=coords, feats=feats)
print(f"Sparse tensor created successfully: {sparse_tensor.shape}")
```

**Sparsehash not found**

Solution: install sparsehash using one of the methods above, then:

```powershell
# If using vcpkg
$env:CMAKE_TOOLCHAIN_FILE = "C:\vcpkg\scripts\buildsystems\vcpkg.cmake"

# If using manual installation
$env:INCLUDE = "$env:INCLUDE;C:\sparsehash\src"
```

**Compiler out of memory**

Solutions:

- Increase virtual memory: set the page file to 16 GB or more
- Close other applications during compilation
- Use parallel compilation: add the `/MP` flag
- Build with reduced optimization: use `/O1` instead of `/O2`

```powershell
# Set environment variable for reduced optimization
$env:CL = "/O1 /MP"
```

**Build still fails**

Solutions:
- Restart your system to free memory
- Use pre-built wheels instead of building from source
- Build in parts: Install dependencies separately
If you get CUDA architecture errors:
```powershell
# Check your GPU architecture
nvidia-smi

# Set appropriate CUDA architecture
$env:TORCH_CUDA_ARCH_LIST = "7.5;8.0;8.6;8.9"  # Adjust based on your GPU
```

For advanced users who need custom compilation:
```python
# Create setup_local.py with custom flags
import os

os.environ['CXXFLAGS'] = '/O1 /MP4'  # Reduced optimization, 4 parallel jobs
os.environ['NVCCFLAGS'] = '-O2'      # CUDA optimization
```

Set these environment variables for consistent builds:
```powershell
$env:DISTUTILS_USE_SDK = "1"
$env:MSSdk = "1"
$env:TORCH_CUDA_ARCH_LIST = "7.5;8.0;8.6;8.9"
$env:FORCE_CUDA = "1"
```

| Platform | Python Versions | CUDA Versions | PyTorch Versions |
|---|---|---|---|
| Windows 10/11 | 3.8, 3.9, 3.10, 3.11, 3.12 | 11.8, 12.1, 12.4 | 1.9.0 - 2.5.0 |
| Linux (Ubuntu 18.04+) | 3.8, 3.9, 3.10, 3.11, 3.12 | 11.1, 11.3, 11.6, 11.7, 11.8, 12.0, 12.1, 12.4 | 1.9.0 - 2.5.0 |

CUDA and PyTorch version compatibility:

| CUDA Version | Compatible PyTorch Versions |
|---|---|
| 11.1 | 1.9.0, 1.9.1 |
| 11.3 | 1.9.0, 1.9.1 |
| 11.6 | 1.10.0, 1.11.0, 1.12.0, 1.13.0 |
| 11.7 | 1.13.0, 1.13.1 |
| 11.8 | 2.0.0, 2.0.1 |
| 12.0 | 2.1.0, 2.2.0, 2.3.0, 2.4.0 |
| 12.1 | 2.1.0, 2.2.0, 2.3.0, 2.4.0 |
| 12.4 | 2.4.0, 2.5.0 |
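The matrix above can also be checked programmatically; a sketch (`COMPATIBLE` is transcribed from the table, and the `+cu121`-style suffix handling assumes the local-version convention that PyTorch pip wheels use in `torch.__version__`):

```python
# CUDA toolkit -> compatible PyTorch releases, transcribed from the table above
COMPATIBLE = {
    "11.1": {"1.9.0", "1.9.1"},
    "11.3": {"1.9.0", "1.9.1"},
    "11.6": {"1.10.0", "1.11.0", "1.12.0", "1.13.0"},
    "11.7": {"1.13.0", "1.13.1"},
    "11.8": {"2.0.0", "2.0.1"},
    "12.0": {"2.1.0", "2.2.0", "2.3.0", "2.4.0"},
    "12.1": {"2.1.0", "2.2.0", "2.3.0", "2.4.0"},
    "12.4": {"2.4.0", "2.5.0"},
}

def is_compatible(torch_version, cuda_version):
    """Check a PyTorch/CUDA pair against the matrix above."""
    base = torch_version.split("+")[0]  # strip local tags such as "+cu121"
    return base in COMPATIBLE.get(cuda_version, set())
```

On an installed system, the two arguments would come from `torch.__version__` and `torch.version.cuda`.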

GPU architecture reference:

| GPU Series | Architecture | CUDA Compute Capability |
|---|---|---|
| RTX 20xx | Turing | 7.5 |
| RTX 30xx | Ampere | 8.6 |
| RTX 40xx | Ada Lovelace | 8.9 |
| Tesla V100 | Volta | 7.0 |
| Tesla A100 | Ampere | 8.0 |
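The table can be turned into a small helper for composing `TORCH_CUDA_ARCH_LIST`; a sketch (the mapping is transcribed from the table above, and `torch_cuda_arch_list` is an illustrative name — at runtime, `torch.cuda.get_device_capability()` reports the same value for an installed GPU):

```python
# GPU series -> CUDA compute capability, from the table above
COMPUTE_CAPABILITY = {
    "RTX 20xx": "7.5",
    "RTX 30xx": "8.6",
    "RTX 40xx": "8.9",
    "Tesla V100": "7.0",
    "Tesla A100": "8.0",
}

def torch_cuda_arch_list(*gpu_series):
    """Build a TORCH_CUDA_ARCH_LIST value covering the given GPU series."""
    caps = {COMPUTE_CAPABILITY[g] for g in gpu_series}
    # String sort is fine here: all capabilities are single-digit majors
    return ";".join(sorted(caps))
```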

Required build tools by platform:

| Platform | Required Tools |
|---|---|
| Windows | Visual Studio 2019/2022, MSVC v142/v143 |
| Linux | GCC 7.0+, build-essential, cmake |
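A quick way to confirm the listed tools are on `PATH`; a sketch (tool names follow the table above; note that on Windows, `cl` is only visible inside a Visual Studio Developer Command Prompt):

```python
import platform
import shutil

# Build tools to look for, per platform (names from the table above)
REQUIRED = {
    "Windows": ["cl", "git"],
    "Linux": ["gcc", "make", "cmake", "git"],
}

def missing_tools(system=None):
    """Return the required build tools that are not found on PATH."""
    system = system or platform.system()
    return [t for t in REQUIRED.get(system, []) if shutil.which(t) is None]
```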
If you encounter issues:
- Check the troubleshooting section above
- Use pre-built wheels if building from source fails
- Open an issue at GitHub Issues
- Include system information:
```shell
python -c "import torch; print(torch.__version__, torch.version.cuda)"
nvcc --version
```
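Those details can also be gathered in one place; an illustrative sketch (`system_report` is not part of TorchSparse, and the `torch` fields are filled only if PyTorch is installed):

```python
import platform
import sys

def system_report():
    """Collect the basics worth pasting into an issue report."""
    info = {
        "os": platform.platform(),
        "python": sys.version.split()[0],
    }
    try:
        import torch  # optional: only present if PyTorch is installed
        info["torch"] = torch.__version__
        info["cuda"] = torch.version.cuda
    except ImportError:
        info["torch"] = "not installed"
    return info

print(system_report())
```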
This Windows-compatible version includes:
- ✅ MSVC compatibility macros for inline assembly
- ✅ Fixed long/int64_t type mismatches
- ✅ Platform-specific compiler flags
- ✅ Sparsehash dependency resolution
- ✅ Memory optimization for Windows builds
For the latest updates, check the releases page.