Can inference latency be further compressed? #28

@Andyyoung0507

Description

Hi, nice work! I am interested in it. I tried to mask an object of interest in one of my own pictures using OpenSeed, but it takes around 90 seconds! My CPU is an Intel Xeon processor and the GPU is a Tesla V100S. Is there any method to reduce the inference latency?
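For reference, the usual first things to try for cutting single-image inference latency in PyTorch are eval mode, `torch.inference_mode()`, and mixed-precision autocast (FP16 on a V100S). A minimal sketch of that pattern, using a tiny stand-in `torch.nn.Module` since the real OpenSeeD model and its preprocessing are not shown here:

```python
import torch

# Hypothetical stand-in for the segmentation model; the same pattern applies
# to any torch.nn.Module, including OpenSeeD once it is loaded.
model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3), torch.nn.ReLU())
model.eval()  # disable dropout / switch batchnorm to inference statistics

x = torch.randn(1, 3, 64, 64)  # dummy image tensor in place of a real input

use_cuda = torch.cuda.is_available()
if use_cuda:
    model, x = model.cuda(), x.cuda()

with torch.inference_mode():  # skip autograd bookkeeping entirely
    # Autocast runs eligible ops in half precision: FP16 on a CUDA GPU
    # (fast on a V100S), bfloat16 on CPU.
    with torch.autocast(
        device_type="cuda" if use_cuda else "cpu",
        dtype=torch.float16 if use_cuda else torch.bfloat16,
    ):
        y = model(x)

print(tuple(y.shape))
```

Note that the first call is usually much slower than subsequent ones (CUDA context and kernel warm-up), so timing should be done on a warmed-up model; a 90 s figure often includes one-time model loading rather than pure per-image inference.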
