## Problem
All 8 vendor backends (Samsung, Intel, AMD, Qualcomm, Arm-Ethos, CEVA, Rockchip, MediaTek) are pass-through wrappers that delegate to ONNX or TFLite and add a diagnostic message string. For example:

```rust
// crates/nxpu-backend-intel/src/lib.rs:30-36
let mut output = OnnxBackend.compile(module, opts)?;
output.diagnostics.push(Diagnostic {
    message: "Load in OpenVINO: ov::Core::read_model(\"output.onnx\")".into(),
});
```
No vendor SDK is actually invoked. No vendor-specific binary format is produced.
## Requirements
Pick at least one vendor and implement actual SDK integration:
**Recommended: Arm Ethos-U (via Vela)**
- Invoke the `vela` CLI tool on the emitted TFLite model
- Parse the Vela output for the optimized `.tflite` with Ethos-U custom ops
- Return the Vela-optimized binary as `BackendOutput`
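For the recommended path, the shape of a real integration is roughly the sketch below. `vela`, its `--accelerator-config` and `--output-dir` flags, and the `<stem>_vela.tflite` output naming are real Vela behavior; the wrapper functions and the `ethos-u55-128` default are illustrative assumptions, not existing project code.

```rust
use std::path::Path;
use std::process::Command;

/// Build the argument list for the Vela CLI (pure, so it is unit-testable
/// without Vela installed).
fn vela_args(input: &Path, out_dir: &Path, accel: &str) -> Vec<String> {
    vec![
        "--accelerator-config".into(),
        accel.into(),
        "--output-dir".into(),
        out_dir.display().to_string(),
        input.display().to_string(),
    ]
}

/// Invoke `vela` on the emitted TFLite model and read back the optimized
/// binary, which would become the payload of `BackendOutput`.
fn compile_with_vela(input: &Path, out_dir: &Path) -> std::io::Result<Vec<u8>> {
    let status = Command::new("vela")
        .args(vela_args(input, out_dir, "ethos-u55-128"))
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("vela exited with {status}"),
        ));
    }
    // Vela writes `<stem>_vela.tflite` into the output directory.
    let stem = input.file_stem().unwrap().to_string_lossy();
    std::fs::read(out_dir.join(format!("{stem}_vela.tflite")))
}
```

Keeping argument construction separate from process invocation also makes it easy to surface the exact command line in a diagnostic when Vela fails.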
**Alternative: Intel OpenVINO**
- Use the OpenVINO C API (via FFI) or invoke the `mo` (Model Optimizer) CLI
- Return the compiled OpenVINO IR (`.xml` + `.bin`)
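The CLI route for OpenVINO could look like the sketch below. `mo` and its `--input_model` / `--output_dir` flags are real Model Optimizer options, and MO names the IR pair after the input model's stem; the wrapper and helper functions here are illustrative assumptions.

```rust
use std::path::{Path, PathBuf};
use std::process::Command;

/// Compute where Model Optimizer will place the IR pair: MO names the
/// `.xml` and `.bin` after the input model's file stem.
fn ir_paths(onnx: &Path, out_dir: &Path) -> (PathBuf, PathBuf) {
    let stem = onnx.file_stem().unwrap().to_string_lossy();
    (
        out_dir.join(format!("{stem}.xml")),
        out_dir.join(format!("{stem}.bin")),
    )
}

/// Run Model Optimizer on the emitted ONNX file and return the paths of
/// the produced IR pair, which a backend would package into its output.
fn run_model_optimizer(onnx: &Path, out_dir: &Path) -> std::io::Result<(PathBuf, PathBuf)> {
    let status = Command::new("mo")
        .arg("--input_model").arg(onnx)
        .arg("--output_dir").arg(out_dir)
        .status()?;
    if !status.success() {
        return Err(std::io::Error::new(
            std::io::ErrorKind::Other,
            format!("mo exited with {status}"),
        ));
    }
    Ok(ir_paths(onnx, out_dir))
}
```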
**Alternative: Qualcomm QNN**
- Invoke the `qnn-onnx-converter` + `qnn-net-run` CLI tools
- Return the QNN model library (`.so`)
## Impact
Without real vendor integration, the transpiler cannot produce artifacts that run on actual NPU hardware, which blocks the core purpose of the project.