
Error while using CoquiEngine #349

@freeriderzer0

Description


Code I use

from RealtimeTTS import TextToAudioStream, CoquiEngine, SystemEngine
if __name__ == '__main__':
    engine = CoquiEngine() # replace with your TTS engine
    stream = TextToAudioStream(engine)
    while True:
        text = input('write something: ')
        stream.feed(text)
        stream.play()

Error

write something: hello
ERROR:root:Error during synthesis for text 'hello': 'GPT2InferenceModel' object has no attribute '_validate_assistant'
Traceback: Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/RealtimeTTS/engines/coqui_engine.py", line 721, in _synthesize_worker
    for i, chunk in enumerate(chunks):
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 36, in generator_context
    response = gen.send(None)
  File "/usr/local/lib/python3.10/dist-packages/TTS/tts/models/xtts.py", line 630, in inference_stream
    gpt_generator = self.gpt.get_generator(
  File "/usr/local/lib/python3.10/dist-packages/TTS/tts/layers/xtts/gpt.py", line 527, in get_generator
    return self.gpt_inference.generate_stream(
  File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/TTS/tts/layers/xtts/stream_generator.py", line 142, in generate
    self._validate_assistant(assistant_model, tokenizer, assistant_tokenizer)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1931, in __getattr__
    raise AttributeError(
AttributeError: 'GPT2InferenceModel' object has no attribute '_validate_assistant'
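This traceback suggests a version mismatch between Coqui TTS and the installed transformers package, not a bug in the snippet above: Coqui's custom stream_generator.py calls private generation helpers (here _validate_assistant) that it expects to inherit from transformers' GenerationMixin, and recent transformers releases stopped exposing those helpers on plain PreTrainedModel subclasses such as GPT2InferenceModel. A commonly reported workaround (an assumption about this setup, not a verified fix) is pinning transformers to a release from before that change:

```shell
# Assumption: the installed transformers release no longer mixes
# GenerationMixin into GPT2PreTrainedModel subclasses, so Coqui's
# GPT2InferenceModel loses private helpers like _validate_assistant.
# Pinning an older transformers is a frequently reported workaround.
pip install "transformers<4.50"
```

After downgrading, re-run the script in a fresh interpreter so the old module isn't still cached. If the pin helps, the underlying fix belongs in the coqui-tts package (making GPT2InferenceModel inherit GenerationMixin explicitly).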
