Disabling cuda #1

@sistar2020

Description

This does not work with a Blackwell GPU.
Is there a way to disable CUDA and use just the CPU?

$ python predict.py -i ./example -o ./output/example -l ./logs/log.txt
/envs/swinsite/lib/python3.12/site-packages/torch/cuda/__init__.py:435: UserWarning:
    Found GPU0 NVIDIA RTX PRO 6000 Blackwell Workstation Edition which is of cuda capability 12.0.
    Minimum and Maximum cuda capability supported by this version of PyTorch is
    (5.0) - (9.0)

  queued_call()
/envs/swinsite/lib/python3.12/site-packages/torch/cuda/__init__.py:435: UserWarning:
    Please install PyTorch with a following CUDA
    configurations:  12.8 13.0 following instructions at
    https://pytorch.org/get-started/locally/

  queued_call()
/envs/swinsite/lib/python3.12/site-packages/torch/cuda/__init__.py:435: UserWarning:
NVIDIA RTX PRO 6000 Blackwell Workstation Edition with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA RTX PRO 6000 Blackwell Workstation Edition GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  queued_call()
Start data preparation: example
Start prediction: example
example prediction:   0%|                                                                                                                                  | 0/3 [00:00<?, ?it/s][ERROR] Model inference failed for ('1tjw_A',): CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


example prediction:  33%|████████████████████████████████████████▋                                                                                 | 1/3 [00:01<00:03,  1.73s/it][ERROR] Model inference failed for ('1ygc_L',): CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


example prediction:  67%|█████████████████████████████████████████████████████████████████████████████████▎                                        | 2/3 [00:03<00:01,  1.49s/it][ERROR] Model inference failed for ('2g25_A',): CUDA error: no kernel image is available for execution on the device
Search for `cudaErrorNoKernelImageForDevice' in https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__TYPES.html for more information.
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


example prediction: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:06<00:00,  2.22s/it]
==> Finished processing example, Failed samples: 3
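Assuming predict.py selects its device with the common torch.cuda.is_available() check (this is not confirmed from the source), one way to force CPU-only execution without changing the code is to hide every GPU from CUDA via the CUDA_VISIBLE_DEVICES environment variable. A minimal sketch:

```python
import os

# Hide every CUDA device from this process. The variable must be set
# before CUDA is first initialized, so either export it when launching:
#
#   CUDA_VISIBLE_DEVICES="" python predict.py -i ./example -o ./output/example -l ./logs/log.txt
#
# or set it at the very top of the script, before `import torch`:
os.environ["CUDA_VISIBLE_DEVICES"] = ""

# With no visible device, torch.cuda.is_available() returns False, so
# the usual device-selection idiom falls back to the CPU:
#
#   import torch
#   device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```

This only helps if the script picks the device conditionally; if it hard-codes "cuda" or calls .cuda() unconditionally, it would need a small patch (e.g. a --device cpu flag) as well.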
