Please try this code:

```python
from openai import OpenAI
from RealtimeTTS import TextToAudioStream, SystemEngine

engine = SystemEngine()
stream = TextToAudioStream(engine)

# LM Studio ignores the key, but the OpenAI client requires a non-empty one
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

def write(prompt: str):
    for chunk in client.chat.completions.create(
        model="qwen2-0.5b-instruct",
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    ):
        text_chunk = chunk.choices[0].delta.content
        if text_chunk:
            yield text_chunk

text_stream = write("A three-sentence relaxing speech.")
stream.feed(text_stream)
stream.play()
```
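The key idea is that `TextToAudioStream.feed()` accepts any iterable of strings, so the LLM output can be piped straight into the TTS engine as it arrives. Here is a minimal sketch of that generator-feeding pattern with the OpenAI and RealtimeTTS dependencies replaced by stand-ins (`fake_llm_stream` and `consume` are hypothetical names, not part of either library), so the flow can be run in isolation:

```python
def fake_llm_stream(prompt: str):
    """Stand-in for the streaming chat completion: yields text chunks."""
    for word in ("Relax. ", "Breathe. ", "Rest."):
        yield word

def consume(text_stream) -> str:
    """Stand-in for TextToAudioStream.feed()/play(): drains the generator."""
    return "".join(chunk for chunk in text_stream)

spoken = consume(fake_llm_stream("A three-sentence relaxing speech."))
print(spoken)
```

Swapping `fake_llm_stream` for the real `write()` generator and `consume` for `stream.feed()` gives the full pipeline above.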
What I am trying to do is run a simple TTS chat with a loaded qwen2-0.5b-instruct model. At the moment I am trying to read and understand the test example simple_llm_test.py. Maybe it's because it's 2 am and my brain can't think, but do I have to create an LMStudioEngine?
Any guidance will help.
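For what it's worth, a custom `LMStudioEngine` shouldn't be needed for the LLM side: LM Studio exposes an OpenAI-compatible endpoint, so the standard `openai` client works, and the TTS engine (e.g. `SystemEngine`) stays unchanged. The only model-facing work is extracting the text delta from each streamed chunk. A sketch of that extraction, with plain dicts standing in for the SDK's response objects (the `extract_deltas` helper is hypothetical, not a library function):

```python
def extract_deltas(chunks):
    """Yield non-empty text deltas from streamed completion chunks."""
    for chunk in chunks:
        text = chunk["choices"][0]["delta"].get("content")
        if text:  # skip role-only or empty deltas
            yield text

# Simulated stream: first chunk carries only the role, later ones carry text.
sample = [
    {"choices": [{"delta": {"role": "assistant"}}]},
    {"choices": [{"delta": {"content": "Hello"}}]},
    {"choices": [{"delta": {"content": " world"}}]},
]
result = list(extract_deltas(sample))
```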