In the PDF included in the repository, Section 7.1 ("Real-World Deployment") states:
“Our experiments show that DeepInfant V2 can operate in near-real-time on mobile devices when deployed via optimized ONNX or CoreML formats.”
The repo already ships the Core ML (`*.mlmodel`) artifacts, but an ONNX version (or the PyTorch weights needed to export one) is not present. This prevents:

- Running the model on Android or in a browser (TF.js / ONNX.js).
- Using the included `predict.py`, which expects an ONNX model.
- Reproducing the “near-real-time” results on non-Apple hardware.
Could you please:

- Publish the optimized ONNX model for DeepInfant V2 (and, if possible, VGGish & AFP).
- Alternatively, provide the PyTorch checkpoints (`*.pth`) so the community can export to ONNX with `torch.onnx.export` (a rough sketch of such an export is included after this list).
- (Optional) Add a minimal export script or update the README to clarify the deployment workflow.
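For context, a minimal export along the lines of the second request could look like the sketch below. The module path `deepinfant.model`, the class name `DeepInfantV2`, the checkpoint filename, and the input shape are all assumptions for illustration, since none of these are published yet:

```python
# Sketch only: class name, checkpoint path, and input shape are placeholders,
# since the real architecture and weights are not currently in the repo.
import torch

from deepinfant.model import DeepInfantV2  # assumed module/class name

model = DeepInfantV2()
state_dict = torch.load("deepinfant_v2.pth", map_location="cpu")  # assumed checkpoint file
model.load_state_dict(state_dict)
model.eval()

# Assumed input: a single spectrogram-like tensor (batch, channels, mel bins, frames).
dummy_input = torch.randn(1, 1, 64, 256)

torch.onnx.export(
    model,
    dummy_input,
    "deepinfant_v2.onnx",
    input_names=["audio_features"],
    output_names=["cry_logits"],
    dynamic_axes={"audio_features": {0: "batch"}, "cry_logits": {0: "batch"}},
    opset_version=17,
)
```

The exported `deepinfant_v2.onnx` could then be loaded by `predict.py` or by ONNX Runtime on Android and in the browser.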
This would let researchers and parents without macOS/iOS evaluate DeepInfant exactly as described in the paper.