Batch Inference? #54

@bunter96

Description

I am going to deploy it on a cloud GPU, but I have a question: does it support batch inference, i.e., multiple/concurrent text-to-speech requests in a single instance?
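For context, "batch inference in a single instance" usually means collecting requests that arrive concurrently and running them through the model as one batched forward pass. A minimal sketch of such a dynamic batcher, assuming a hypothetical `synthesize_batch` stand-in for the model's real batched call (the actual API of this project may differ):

```python
import queue
import threading

def synthesize_batch(texts):
    # Hypothetical stand-in for a TTS model's batched forward pass:
    # returns one placeholder "waveform" per input text.
    return [[len(t)] for t in texts]

class Batcher:
    """Collects concurrent requests and runs them through the model together."""

    def __init__(self, max_batch=8, timeout=0.01):
        self.q = queue.Queue()
        self.max_batch = max_batch
        self.timeout = timeout
        worker = threading.Thread(target=self._loop, daemon=True)
        worker.start()

    def _loop(self):
        while True:
            # Block for the first request, then drain up to max_batch - 1 more
            # that arrive within the timeout window.
            items = [self.q.get()]
            while len(items) < self.max_batch:
                try:
                    items.append(self.q.get(timeout=self.timeout))
                except queue.Empty:
                    break
            texts = [text for text, _ in items]
            for (_, done), wav in zip(items, synthesize_batch(texts)):
                done["result"] = wav
                done["event"].set()

    def synthesize(self, text):
        # Called concurrently from many threads; blocks until the
        # worker has processed this request's batch.
        done = {"event": threading.Event()}
        self.q.put((text, done))
        done["event"].wait()
        return done["result"]
```

Whether this helps depends on the model: batching only pays off if the underlying forward pass actually accepts a batch dimension; otherwise concurrent requests end up serialized on the GPU anyway.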
