Description
Hi, I am running the standard PyTorch TAPIR model on GPU, but inference is significantly slower than on CPU. This seems backwards. I am running on 400 frames with only 2 query points; the frame size is 512x512x3. The model is in fact running on the GPU: both the utilization percentage and memory usage increase during inference. The GPU is an NVIDIA GeForce RTX 2080. Any guidance on what could be happening here?
CPU: ~325s to complete inference
GPU: ~1025s to complete inference
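One thing worth ruling out first is a measurement artifact: CUDA kernels launch asynchronously, so wall-clock timing taken around individual calls can be misleading unless the device is synchronized before and after. Below is a minimal, hedged sketch of a timing helper (not TAPIR-specific; `timed_inference` and the toy model are hypothetical names for illustration) that synchronizes correctly and disables gradient tracking, which is how I would compare CPU vs GPU end-to-end times:

```python
import time
import torch


def timed_inference(model, frames, device):
    """Run one forward pass and return (output, elapsed_seconds).

    torch.cuda.synchronize() is required around GPU timing because CUDA
    kernels are launched asynchronously; without it, perf_counter can
    measure launch overhead rather than actual compute time.
    """
    model = model.to(device).eval()
    frames = frames.to(device)
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():  # inference only; avoids autograd overhead
        out = model(frames)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return out, time.perf_counter() - start


# Example with a toy model standing in for TAPIR (illustrative only).
model = torch.nn.Conv2d(3, 3, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)
out, elapsed = timed_inference(model, x, torch.device("cpu"))
```

If the gap persists under this kind of measurement, other common causes for an RTX 2080 (8 GB) would be per-frame host-to-device copies inside the loop, or memory pressure from the 400-frame batch forcing small effective batch sizes.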