

@AmitMY (Contributor) commented Apr 7, 2023

Fixes

Relates to #58

Description

Since GPUs are highly parallel, could we perform inference on multiple frames at once instead of one-by-one?

Yes, we can! However, it does not improve performance much when the GPU is weak.

A benchmark on a MacBook Pro with an M1 Max shows no substantial improvement from batching: inference time scales roughly linearly with batch size (see the table below; a sketch of batched inference follows it).

| Batch Size | Fastest (ms) | Slowest (ms) |
| --- | --- | --- |
| 1 | 52.4 | 91.2 |
| 2 | 104.2 | 120.5 |
| 3 | 157.1 | 165.0 |
| 5 | 262.5 | 267.3 |
| 6 | 310.5 | 318.0 |
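
For illustration only (not part of this PR's diff): batched inference here amounts to stacking the individual frames into a single tensor and running one forward pass. Below is a minimal sketch assuming a TensorFlow.js `tf.GraphModel` and `ImageData` frames; the helper name, model path, and the [0, 1] normalization are assumptions, not the actual code in this PR.

```ts
import * as tf from '@tensorflow/tfjs';

// Hypothetical helper: run image-to-image inference on a batch of frames at once.
function translateFrameBatch(model: tf.GraphModel, frames: ImageData[]): tf.Tensor {
  return tf.tidy(() => {
    // Stack N HxWx3 frames into a single [N, H, W, 3] float tensor in [0, 1].
    const batch = tf.stack(
      frames.map(frame => tf.browser.fromPixels(frame).toFloat().div(255))
    );
    // One forward pass over the whole batch instead of N single-frame passes.
    return model.predict(batch) as tf.Tensor;
  });
}

// Usage sketch (model path is hypothetical):
// const model = await tf.loadGraphModel('assets/models/pix2pix/model.json');
// const output = translateFrameBatch(model, frames); // shape: [N, H, W, 3]
```

On a GPU with spare capacity, the single batched `predict` call can amortize kernel-launch and data-transfer overhead across frames; the benchmark above suggests the M1 Max backend is already saturated, so the batch runs in roughly N times the single-frame latency.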

@coveralls commented Apr 7, 2023

Pull Request Test Coverage Report for Build 4641016751

  • 0 of 8 (0.0%) changed or added relevant lines in 1 file are covered.
  • 2 unchanged lines in 2 files lost coverage.
  • Overall coverage increased (+0.6%) to 53.211%

| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
| --- | --- | --- | --- |
| src/app/pages/translate/pose-viewers/human-pose-viewer/human-pose-viewer.component.ts | 0 | 8 | 0.0% |

| Files with Coverage Reduction | New Missed Lines | % |
| --- | --- | --- |
| src/app/core/services/assets/assets.service.ts | 1 | 39.66% |
| src/app/pages/translate/pose-viewers/human-pose-viewer/human-pose-viewer.component.ts | 1 | 16.67% |

| Totals | Coverage Status |
| --- | --- |
| Change from base Build 4618504967: | 0.6% |
| Covered Lines: | 1115 |
| Relevant Lines: | 2003 |

💛 - Coveralls

@github-actions bot (Contributor) commented Apr 7, 2023

Visit the preview URL for this PR (updated for commit 5d9fe2e):

https://translate-sign-mt--pr83-pix2pix-batching-q8o1qeuz.web.app

(expires Fri, 14 Apr 2023 20:17:00 GMT)

🔥 via Firebase Hosting GitHub Action 🌎

Sign: 739446cfe7a349700ebd347d2a39e3b90ba24425

