
Inference: default to all GPUs#1122

Open
GeorgePearse wants to merge 1 commit into main from feat/auto-multigpu-default

Conversation

@GeorgePearse
Collaborator

Summary

  • Change init_detector(..., device=...) default from "cuda:0" to "cuda".
  • When device="cuda", auto-detect available GPUs and use all of them for single-process inference.
  • Log an info message reporting the number of GPUs detected (found X GPUs).
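The auto-detection described above can be sketched roughly as follows. `expand_device` is a hypothetical helper name (the PR's actual code lives in visdet/apis/inference.py); the GPU count is passed in as a parameter here so the logic can be shown without a CUDA runtime, whereas the real code would obtain it from torch.cuda.device_count().

```python
def expand_device(device: str, gpu_count: int) -> list[str]:
    """Expand a bare "cuda" device string to one entry per visible GPU.

    Hypothetical sketch of the PR's behaviour; in the real code the
    count would come from torch.cuda.device_count() and the result
    would drive single-process multi-GPU inference.
    """
    if device == "cuda" and gpu_count > 0:
        # Bare "cuda" with no index: fan out to every detected GPU.
        return [f"cuda:{i}" for i in range(gpu_count)]
    # An explicit device such as "cuda:0" or "cpu" is left untouched.
    return [device]
```

With this shape, `expand_device("cuda", 4)` fans out to all four GPUs, while an explicit `expand_device("cuda:1", 4)` keeps the user's choice, preserving backwards compatibility with the old `"cuda:0"` default.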

Testing

  • python3 -m pytest -q tests/test_runtime/test_inference_device_parsing.py

@github-actions
Contributor

🤖 Multi-Model Consensus Review

Models: Claude Sonnet 4, GPT-4o, Gemini 2.0 Flash

tests/test_runtime/test_inference_device_parsing.py: Pass Rate 3/3 models

visdet/apis/inference.py: Pass Rate 3/3 models

@github-actions
Contributor

Skylos Scan: No dead code or security issues detected.

