
Conversation

@miclegr (Contributor) commented Apr 16, 2025

By default, pytorch pulls in all the CUDA dependencies, ending up with a ~5 GB virtualenv created by pipx. If you want to use this as a Neovim plugin, you can get a much more lightweight installation by using CPU-only pytorch.
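As a rough sketch of what this looks like in practice (the index URL is PyTorch's published CPU wheel index; the exact pipx flags are an illustration, not taken from this PR):

```shell
# Point pip at PyTorch's CPU-only wheel index so torch resolves
# without pulling the CUDA runtime packages into the pipx venv.
pipx install vectorcode \
  --pip-args="--extra-index-url https://download.pytorch.org/whl/cpu"
```

The CPU wheels are much smaller because they omit the bundled CUDA libraries that dominate the default install.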

@codecov codecov bot commented Apr 16, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.78%. Comparing base (a2f27fd) to head (9afe6b7).
Report is 1 commit behind head on main.

Additional details and impacted files
@@           Coverage Diff           @@
##             main      #75   +/-   ##
=======================================
  Coverage   97.78%   97.78%           
=======================================
  Files          18       18           
  Lines        1176     1176           
=======================================
  Hits         1150     1150           
  Misses         26       26           


@Davidyz (Owner) commented Apr 16, 2025

That's neat. If I remember correctly, it's possible to specify an index for a particular dependency. I'll see if I can make this a new dependency group in pyproject.toml so that users can just do pipx install vectorcode[cpu].

@Davidyz (Owner) commented Apr 16, 2025

Actually, would you say it's better to default to the CPU version of torch and add a gpu dependency group? That would make the default installation smaller.

@miclegr (Contributor, Author) commented Apr 16, 2025

I tried to do pipx install vectorcode[cpu], but it seems to be impossible; see
pdm-project/pdm#694
pytorch/pytorch#136275

The issue is that there's no torch[cpu] extra: you need to change the pip index if you want CPU-only torch. But with pdm you can only change the pip index globally, not per dependency group.
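To illustrate the limitation described above (the commands are a sketch; the `--index-url` flag and the CPU index URL are real, the `torch[cpu]` line shows what does *not* exist):

```shell
# There is no "cpu" extra on the torch package, so an extras-based
# install cannot select the CPU-only build:
pip install 'torch[cpu]'    # resolves the same full torch as plain "torch"

# The only way to get the CPU-only build is to switch the package index:
pip install torch --index-url https://download.pytorch.org/whl/cpu
```

Since extras cannot change the index, a vectorcode[cpu] dependency group has no way to express "torch, but from the CPU index".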

@Davidyz (Owner) commented Apr 17, 2025

I see. Then let's stick to this approach until something better comes up. Thanks for your contribution!

@Davidyz Davidyz merged commit 96ae4bc into Davidyz:main Apr 17, 2025
11 checks passed
@Davidyz (Owner) commented Apr 19, 2025

I just realised that onnxruntime defaults to a CPU-only installation, with support for various optional GPU backends. What do you think of moving to onnxruntime as the default inference backend? This will likely reduce the default install size significantly without the hassle of setting environment variables.
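For comparison, onnxruntime's packaging splits CPU and GPU builds into separate PyPI packages, so the default install stays small without any index tricks (package names below are the actual PyPI names; whether the switch suits vectorcode is the open question here):

```shell
# Default: CPU-only wheel, no CUDA libraries bundled.
pip install onnxruntime

# Opt-in: separate package for the CUDA-enabled build.
pip install onnxruntime-gpu
```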
