
Please publish the "optimized ONNX" model referenced in the paper (Section 7.1) and/or the PyTorch weights #4

@esteves023

Description


In the PDF included in the repository, Section 7.1 Real-World Deployment states:

“Our experiments show that DeepInfant V2 can operate in near-real-time on mobile devices when deployed via optimized ONNX or CoreML formats.”

The repo already ships the Core ML (*.mlmodel) artifacts, but an ONNX version (or the PyTorch weights needed to export one) is not present. This prevents:

Running the model on Android or in a browser (TF.js / ONNX.js).

Using the included predict.py, which expects an ONNX model.

Reproducing the “near-real-time” results on non-Apple hardware (see the onnxruntime sketch after this list).
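For illustration, once an ONNX file exists, cross-platform inference is only a few lines with onnxruntime. A minimal sketch, assuming a hypothetical `deepinfant_v2.onnx` with a single spectrogram input; the file name, tensor shape, and preprocessing here are placeholders, not the repo's actual predict.py contract:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical file name; publishing this export is what the issue requests.
sess = ort.InferenceSession("deepinfant_v2.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

# Placeholder input standing in for real audio preprocessing; the true
# shape depends on DeepInfant's feature pipeline.
mel = np.random.randn(1, 1, 64, 128).astype(np.float32)

outputs = sess.run(None, {input_name: mel})
print("predicted class:", int(np.argmax(outputs[0], axis=-1)[0]))
```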

Could you please:

  1. Publish the optimized ONNX model for DeepInfant V2 (and, if possible, VGGish & AFP).

  2. Alternatively, provide the PyTorch checkpoints (*.pth) so the community can export to ONNX with torch.onnx.export.

  3. (Optional) Add a minimal export script (something like the sketch below) or update the README to clarify the deployment workflow.

This would let researchers and parents without macOS/iOS evaluate DeepInfant exactly as described in the paper.
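To make point 3 concrete, below is a minimal sketch of what such an export script could look like. Since the DeepInfant V2 architecture, checkpoint, input shape, and tensor names are not published, a stand-in nn.Module is used purely to demonstrate the torch.onnx.export call; the real model class and *.pth checkpoint would replace it.

```python
import torch
import torch.nn as nn

# Stand-in module: the real DeepInfant V2 architecture isn't published,
# so this placeholder exists only to make the export call runnable.
class PlaceholderNet(nn.Module):
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(8, n_classes),
        )

    def forward(self, x):
        return self.net(x)

model = PlaceholderNet()
model.eval()

# Assumed input: a single-channel mel-spectrogram batch; the real shape
# depends on DeepInfant's preprocessing and would need to be adjusted.
dummy_input = torch.randn(1, 1, 64, 128)

torch.onnx.export(
    model,
    dummy_input,
    "deepinfant_v2.onnx",
    input_names=["mel_spectrogram"],
    output_names=["logits"],
    dynamic_axes={"mel_spectrogram": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=17,
)
```

With the real weights, calling `model.load_state_dict(torch.load("<checkpoint>.pth", map_location="cpu"))` on the actual model class before export should be the only change needed; `dynamic_axes` keeps the batch dimension flexible so the same file serves both single-clip and batched inference.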
