Models should be exportable to TensorRT-compatible formats (e.g., ONNX) to improve inference performance and support inference serving. Needs a demonstration in an example notebook.