Dear developer team,
Hello! Thank you for open-sourcing the VITA1.5 project; it is a very exciting omni project. I have noticed that the web_demo and audio inference implementations have been released, but the testing procedure for joint inference over all three sources (video, text, and audio) is not yet clear.
To better understand and validate the project's functionality, I would like to obtain a test script that accepts both video input and a text instruction. This would help us evaluate VITA1.5's performance on omni tasks more comprehensively.
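For illustration, below is a minimal sketch of the kind of entry point we are hoping for. None of these argument names or functions are taken from the VITA1.5 codebase; they are hypothetical and only show the video + text (+ optional audio) inputs we would like an official script to exercise.

```python
import argparse

# Hypothetical sketch only: illustrates the interface we are requesting.
# The argument names below are assumptions, not VITA1.5's actual API.
def main():
    parser = argparse.ArgumentParser(
        description="Joint video/text/audio inference test (illustrative only)"
    )
    parser.add_argument("--model_path", required=True,
                        help="Path to the VITA1.5 checkpoint")
    parser.add_argument("--video_path", required=True,
                        help="Input video file")
    parser.add_argument("--question", default=None,
                        help="Text instruction paired with the video")
    parser.add_argument("--audio_path", default=None,
                        help="Optional audio query alongside the text")
    args = parser.parse_args()

    # Placeholder: the model loading and multimodal generate() call here
    # is exactly what we hope the official script will demonstrate.
    print(f"Would run {args.model_path} on {args.video_path} "
          f"with question={args.question!r}, audio={args.audio_path!r}")

if __name__ == "__main__":
    main()
```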
If any related documentation or example code is available, please let me know. Thank you very much for your support and assistance!
Looking forward to your reply.