
This version of PyTorch is not compatible with RTX 5000 GPUs #11

@freemansoft

Description


AI Workbench version: "latest"? I don't see an About panel; it was installed via the remote command line today.
Ubuntu: 24.04
Video card: RTX 5060 Ti

I saw this same problem in one of the NIMs, but the AI Workbench team owns this project rather than the NIM team.

--- MODELS: Loading Model stabilityai/stable-diffusion-xl-base-1.0 ---

Fetching 19 files:   0%|          | 0/19 [00:00<?, ?it/s]
Fetching 19 files:  11%|█         | 2/19 [00:00<00:05,  2.96it/s]
Fetching 19 files:  21%|██        | 4/19 [00:59<04:20, 17.38s/it]
Fetching 19 files:  32%|███▏      | 6/19 [01:02<02:12, 10.19s/it]
Fetching 19 files:  84%|████████▍ | 16/19 [01:10<00:09,  3.01s/it]
Fetching 19 files: 100%|██████████| 19/19 [01:10<00:00,  3.72s/it]

Loading pipeline components...:   0%|          | 0/7 [00:00<?, ?it/s]
Loading pipeline components...:  14%|█▍        | 1/7 [00:00<00:01,  5.00it/s]
Loading pipeline components...:  29%|██▊       | 2/7 [00:00<00:00,  5.01it/s]
Loading pipeline components...:  86%|████████▌ | 6/7 [00:00<00:00, 13.73it/s]
Loading pipeline components...: 100%|██████████| 7/7 [00:00<00:00,  8.84it/s]
/home/workbench/.local/lib/python3.10/site-packages/torch/cuda/__init__.py:235: UserWarning: 

NVIDIA GeForce RTX 5060 Ti with CUDA capability sm_120 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_50 sm_60 sm_70 sm_75 sm_80 sm_86 sm_90.
If you want to use the NVIDIA GeForce RTX 5060 Ti GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
  warnings.warn(
INFO:httpx:HTTP Request: GET https://checkip.amazonaws.com/ "HTTP/1.1 200 "
INFO:httpx:HTTP Request: GET https://api.gradio.app/pkg-version "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: GET http://localhost:8080/startup-events "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: HEAD http://localhost:8080/ "HTTP/1.1 200 OK"
INFO:matplotlib.font_manager:generated new fontManager
--- MODELS: Configuring Pipe ---
--- MODELS: Model is ready for inference ---
http://localhost:8000
IMPORTANT: You are using gradio version 4.35.0, however version 4.44.1 is available, please upgrade.
--------
Running on local URL:  http://0.0.0.0:8080
To create a public link, set `share=True` in `launch()`.
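
The warning above means the PyTorch build shipped in the container was compiled only for sm_50 through sm_90, while the RTX 5060 Ti reports compute capability sm_120 (Blackwell). A minimal sketch to confirm the mismatch from inside the container, assuming the project's Python environment with `torch` importable; `torch.cuda.get_arch_list()` and `torch.cuda.get_device_capability()` are standard PyTorch calls, and the cu128 wheel index mentioned in the comment is an assumption about what fix the project will adopt:

```python
import torch

# Compute capabilities this PyTorch build was compiled for.
# The warning above lists sm_50 through sm_90, i.e. no sm_120 (Blackwell).
print("Compiled arch list:", torch.cuda.get_arch_list())

# Compute capability of the installed GPU.
# An RTX 5060 Ti reports (12, 0), i.e. sm_120.
if torch.cuda.is_available():
    major, minor = torch.cuda.get_device_capability(0)
    print(f"Device capability: sm_{major}{minor}")

# If sm_120 is missing from the arch list, the container needs a PyTorch
# build with Blackwell kernels (for example, wheels from the CUDA 12.8
# index at https://download.pytorch.org/whl/cu128). The exact index URL
# and minimum torch version are assumptions and depend on the base image
# this project uses.
```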

