
Made RUNNING_ON_GPU.md point to CUDA 12.4 #45

Merged
Lyrcaxis merged 2 commits into Lyrcaxis:main from Onkitova:patch-1
Aug 24, 2025

Conversation

@Onkitova (Contributor)

Hello, and thank you very much for this great repo! I eagerly await every new release while testing it in my productions.

At the moment, the GPU doc is misleading: the CUDA Toolkit at the link provided has been updated to 13.0, which changed the library names, leading to errors.

[screenshots: renamed CUDA 13.0 libraries and the resulting load errors]
The solution is to point the link to an archived version of the CUDA Toolkit, such as 12.4 (12.4 exactly because it is also the version shipped with llama.cpp, for example).


However, speaking just for myself: even after a clean install of CUDA Toolkit 12.9, I still got errors at runtime unless I manually placed the NVIDIA libraries next to the KokoroSharp console application's .exe.

Maybe there is a bug in the detection of the CUDA libraries? Or what am I doing wrong, if you don't mind sharing?

@Lyrcaxis Lyrcaxis (Owner) left a comment

Hi and thanks for your contribution and your kind words.

Good call! Hm, it should be plug & play as soon as you install CUDA and cuDNN.
It will probably be fixed by the updated ONNX package, though, so maybe give that a try first?

I haven't tried having multiple CUDA versions installed, but maybe that's what's causing the issue on your end?

Comment on lines +68 to +80

### Exact list of CUDA / cuDNN libraries necessary for inference
- cudnn_engines_runtime_compiled64_9.dll
- cudnn_engines_precompiled64_9.dll
- cudnn_heuristic64_9.dll
- cudnn_graph64_9.dll
- cublasLt64_12.dll
- cudnn_adv64_9.dll
- cudnn_ops64_9.dll
- cublas64_12.dll
- cudart64_12.dll
- cufft64_11.dll
- cudnn64_9.dll
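As a sanity check, the list above can be verified programmatically. Here is a minimal sketch (not part of KokoroSharp; the DLL names are copied from the list, and the PATH-based lookup is an assumption about how the loader resolves them) that reports which of these libraries Windows would fail to find:

```python
# Sketch: check whether each CUDA / cuDNN DLL listed above can be found in
# the directories on PATH -- roughly the lookup Windows performs when a
# native library is loaded. Names are copied verbatim from the list above.
import os

REQUIRED_DLLS = [
    "cudnn_engines_runtime_compiled64_9.dll",
    "cudnn_engines_precompiled64_9.dll",
    "cudnn_heuristic64_9.dll",
    "cudnn_graph64_9.dll",
    "cublasLt64_12.dll",
    "cudnn_adv64_9.dll",
    "cudnn_ops64_9.dll",
    "cublas64_12.dll",
    "cudart64_12.dll",
    "cufft64_11.dll",
    "cudnn64_9.dll",
]

def find_dll(name):
    """Return the first PATH entry containing `name`, or None if absent."""
    for folder in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(folder, name)
        if folder and os.path.isfile(candidate):
            return candidate
    return None

missing = [dll for dll in REQUIRED_DLLS if find_dll(dll) is None]
print("missing:", missing or "none -- all libraries found")
```

If anything shows up as missing, either add the CUDA/cuDNN `bin` directories to PATH or place the files next to the application's .exe, as discussed below.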
Owner

Can this be made collapsible somehow? Also, is this tested?

Contributor Author

> Can this be made collapsible somehow?

Done.

> Also, is this tested?
This is the bare minimum that allows running CUDA inference in such a "portable" env.
Btw, do you have any warnings like this when running GPU-powered generation on your side? I have a 4070 and just one version (12.9) of CUDA installed atm, and -- outside of this weird CUDA-blindness -- my system feels pretty stable.

[screenshot: warnings emitted during GPU-powered generation]

Owner

That is super cool, great find!

I've been wanting to poke at automatic library resolution for an app I'm making with KokoroSharp, so this is very useful info. Thanks!

Moved list under spoiler to reduce clutter
@Lyrcaxis Lyrcaxis (Owner) left a comment

Thanks for your contribution :)

@Lyrcaxis Lyrcaxis changed the title Update RUNNING_ON_GPU.md Made RUNNING_ON_GPU.md point to CUDA 12.4 Aug 24, 2025
@Lyrcaxis Lyrcaxis merged commit 8905b14 into Lyrcaxis:main Aug 24, 2025
1 check passed
