Hi @lamixer, great question! TypeWhisper can definitely help with this workflow. Speech to text is the core feature, and it can process the transcribed text through an LLM and even read the response back. Your setup would be Speech → Text → llama.cpp, with TTS handling the return trip. So yes, you're looking in the right place! Hope that helps.
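To make the round trip concrete, here's a minimal sketch of the pipeline in Python. It assumes llama.cpp's `llama-server` is running with its OpenAI-compatible API enabled; the base URL, the `"local"` model name, and the function names are placeholders for illustration, not part of TypeWhisper itself. The spoken return trip uses the macOS built-in `say` command.

```python
import json
import subprocess
import urllib.request

def build_chat_request(text, model="local"):
    # llama-server's OpenAI-compatible endpoint accepts any model name,
    # so "local" is just a placeholder.
    return {
        "model": model,
        "messages": [{"role": "user", "content": text}],
    }

def ask_llama(text, base_url="http://192.168.1.50:8080"):
    # base_url is a hypothetical LAN address for the llama.cpp box;
    # point it at wherever llama-server is listening.
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(build_chat_request(text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Standard OpenAI-style response shape: first choice's message text.
    return body["choices"][0]["message"]["content"]

def speak(text):
    # macOS built-in TTS; list available voices with `say -v '?'`.
    subprocess.run(["say", text], check=True)
```

Usage would be `speak(ask_llama(transcribed_text))` for the full voice loop, or just display the return value for the visual-only option.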
-
Hello! I have installed llama.cpp on a home AI server and I'd like to be able to talk to it instead of typing to it. I think I can configure llama.cpp to behave like an OpenAI endpoint, so typewhisper-mac should be able to talk to my llama. Right?
Then, how is the return trip made? Ideally, I'd have a choice of visual only or visual plus TTS (maybe via the macOS built-in mechanism), so I can 'converse' on my Mac with my local llama machine.
Am I looking in the right place for this? Thank you!
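For reference, this is roughly how I'm planning to start the server; the model path is just a placeholder, and the flags are from recent llama.cpp builds:

```shell
# Bind to all interfaces so the Mac can reach it over the LAN.
./llama-server -m models/your-model.gguf --host 0.0.0.0 --port 8080
# llama-server then exposes OpenAI-compatible routes such as
# /v1/chat/completions, so any OpenAI-style client can point at it.
```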