Starting with the overview (https://ryzenai.docs.amd.com/en/latest/llm/overview.html), I find the documentation confusing: it points to many different places. Could you make this a 1-click / 1-command setup that detects the hardware along the way?
To get where I needed, I followed this flow:
- Start with Ryzen AI LLM overview: https://ryzenai.docs.amd.com/en/latest/llm/overview.html
- HF hybrid models: https://huggingface.co/collections/amd/ryzenai-15-llm-hybrid-models-6859a64b421b5c27e1e53899
- Find a model card: https://huggingface.co/amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid, which then points to the OGA docs
- OnnxRuntime GenAI (OGA) flow: https://ryzenai.docs.amd.com/en/latest/hybrid_oga.html, which requires installing:
- GPU driver
- Git for Windows
Notes: The documentation is unclear about whether I can start directly from the "C++ Program" or "Python Script" section. I skipped ahead to "Python Script" but found I needed to go back and work through the "C++ Program" section first. Do I only need to download the model from HF, or do I also need to copy all of the required DLL / txt / exe files?
I would rather not have to git clone the whole HF repo. I would prefer to just point to the model on HF and fetch it through the HF API.
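For example, a minimal sketch of what I mean, using `huggingface_hub.snapshot_download` (a real Hub API; the function name and directory layout here are my assumption of how the docs could present it, not something the Ryzen AI docs currently show):

```python
# Sketch: fetch the model directly from the Hugging Face Hub instead of
# `git clone`-ing the whole repo. Assumes `pip install huggingface_hub`.

# Repo id taken from the model card linked above.
REPO_ID = "amd/Phi-3-mini-4k-instruct-awq-g128-int4-asym-fp16-onnx-hybrid"


def fetch_model(local_dir: str = "./phi3-hybrid") -> str:
    """Download the model snapshot to local_dir and return the local path."""
    # Imported inside the function so the module loads even before
    # huggingface_hub is installed (third-party dependency).
    from huggingface_hub import snapshot_download

    return snapshot_download(repo_id=REPO_ID, local_dir=local_dir)


if __name__ == "__main__":
    print(fetch_model())
```

Something like this, wired into the OGA run scripts, would remove the manual git clone step entirely.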