The ML Field Planner is a framework for analyzing ML pipelines and studying edge-to-center tradeoffs in where ML functions are placed. Using the ML Field Planner, researchers configure experiments to run on real IoT hardware, select machine learning models to analyze custom benchmark datasets, and experiment with different algorithm configurations, such as storage compression, all from a graphical user interface.
Please cite the following paper if you use this tool in your research: Joe Stubbs, Sowbaranika Balasubramaniam, Samuel Khuvis, Sachith Withana, Manikya Swathi Vallabhajosyula, Richard Cardone, Christian Garcia, Nathan Freeman, Carlos Guzman, Beth Plale, Rajiv Ramnath, and Tanya Berger-Wolf. 2025. ML Field Planner: Analyzing and Optimizing ML Pipelines For Field Research. In Practice and Experience in Advanced Research Computing 2025: The Power of Collaboration (PEARC '25). Association for Computing Machinery, New York, NY, USA, Article 8, 1–9. https://doi.org/10.1145/3708035.3736013
- Software
- CI4AI
- Animal Ecology
The ML Field Planner is a framework consisting of the following software components:
- TapisBase: Base ICICLE Tapis Software
- ctcontroller: Hardware and Software Provisioner
- EventEngine: Event Engine
- CameraTrapsEdgeSoftware: Camera Traps Edge Software
- PatraKG: Patra Model Card Knowledge Graph
- PatraToolkit: Patra Model Card Toolkit
- CKN: Cyberinfrastructure Knowledge Network
- FederatedAuthService: Tapis Federated Authentication Service
- TapisUI: Tapis User Interface
- icicleai-tapisui-extension: ICICLEAI TapisUI Extension
- CameraTrapsEdgeSimDashboard: Camera Traps Edge Simulator Dashboard
The ML Field Planner provides an authenticated framework to submit ML pipelines to edge and cloud devices and analyze the results to make decisions on edge-to-center tradeoffs and test new algorithms.
The planner is powered by the Tapis framework [TapisBase], which provides an authenticated environment [FederatedAuthService], including the Camera Traps Edge Simulator Dashboard, a graphical user interface [TapisUI, icicleai-tapisui-extension, CameraTrapsEdgeSimDashboard] to submit jobs, as well as a dashboard to view job metrics [CKN].
Once the user selects the hardware, model, and dataset and submits the analysis run from the Camera Traps Edge Simulator Dashboard, a Tapis job is generated. This launches the hardware and software provisioner [ctcontroller] on a backend node, which handles provisioning the hardware, setting up and running the ML pipeline, and shutting down the hardware.
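The dashboard handles this submission for you, but the same kind of submission can be made programmatically through the Tapis Python SDK (tapipy). The sketch below is illustrative only: the tenant URL, credentials, app id, and app version are placeholders, not the exact app definition used by the ICICLE deployment.

```python
from tapipy.tapis import Tapis

# Authenticate against a Tapis tenant (URL and credentials are placeholders).
t = Tapis(base_url="https://icicleai.tapis.io",
          username="<your-username>", password="<your-password>")
t.get_tokens()

# Submit a job; the app id/version shown here are assumed for illustration --
# the dashboard fills these in from your hardware/model/dataset selections.
job = t.jobs.submitJob(
    name="camera-traps-analysis",
    appId="<camera-traps-app-id>",
    appVersion="<app-version>",
)

# The returned job UUID is what later shows up in the Analyses table.
print(job.uuid, job.status)
```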
The ML pipeline launched from the dashboard is built using the Event Engine [EventEngine], which allows plugins to communicate with each other over zmq sockets. The Camera Traps Edge Software [CameraTrapsEdgeSoftware] is a set of plugins deployed in a docker container that communicate across the Event Engine. When provided with a set of images or a prerecorded video, it can be run in simulation mode on the provisioned hardware to simulate a real ML-enabled camera trap.
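For intuition, here is a minimal, self-contained sketch of two "plugins" exchanging a message over zmq sockets. It is not the Event Engine API: the port, socket wiring, and message fields are invented for illustration, and the real plugins use the event types defined by [EventEngine].

```python
import time
import zmq

# Two toy plugins wired together over a zmq PUB/SUB pair.
# The port number and message schema are illustrative, not the Event Engine's own.
ctx = zmq.Context()

detector = ctx.socket(zmq.PUB)        # e.g. a motion-detection plugin
detector.bind("tcp://127.0.0.1:5555")

scorer = ctx.socket(zmq.SUB)          # e.g. an image-scoring plugin
scorer.connect("tcp://127.0.0.1:5555")
scorer.setsockopt_string(zmq.SUBSCRIBE, "")

time.sleep(0.2)  # give the subscription time to register before publishing

detector.send_json({"event": "new_image", "path": "/data/frame_0001.jpg"})
print(scorer.recv_json())  # -> {'event': 'new_image', 'path': '/data/frame_0001.jpg'}
```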
As part of the setup that ctcontroller does to prepare the provisioned hardware for the pipeline, it sends a request to the Patra Model Card Knowledge Graph [PatraKG] using the model id specified by the user and parses the returned model card to download the model to the local device. New model cards can be created and added with the Patra Model Card Toolkit [PatraToolkit].
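Conceptually, that model lookup amounts to the sketch below: fetch the model card by id, read the model's download location from it, and pull the weights onto the device. The endpoint path and model-card field names here are assumptions for illustration; consult the Patra documentation for the actual API.

```python
import requests

PATRA_KG_URL = "https://patra.example.org"   # hypothetical endpoint
MODEL_ID = "<patra-model-card-id>"           # the id selected in the dashboard

# Fetch the model card for the requested model (path and params are assumed).
card = requests.get(f"{PATRA_KG_URL}/modelcard", params={"id": MODEL_ID}).json()

# Read the model's download location from the card (field names are assumed)
# and pull the weights onto the local device for the pipeline to load.
weights_url = card["ai_model"]["location"]
with open("model.pt", "wb") as f:
    f.write(requests.get(weights_url, timeout=60).content)
```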
As the ML pipeline runs, a CKN daemon streams metric data, including model and system performance, from the local device to a CKN broker running on a backend node [CKN]; the metrics are viewable from a dashboard in the graphical user interface [TapisUI, icicleai-tapisui-extension].
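The metric stream can be pictured as a small producer on the edge device pushing per-image records to a broker on the backend node. The sketch below assumes a Kafka-style broker and invents the broker address, topic name, and record fields; the actual CKN daemon defines its own transport and schema.

```python
import json
import time
from kafka import KafkaProducer

# Hypothetical broker address and topic; the real CKN daemon ships its own schema.
producer = KafkaProducer(
    bootstrap_servers="ckn-broker.example.org:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# One per-image performance record (field names are illustrative).
producer.send("cameratraps-metrics", {
    "experiment_uuid": "<experiment-uuid>",
    "image": "frame_0001.jpg",
    "prediction": "animal",
    "probability": 0.93,
    "inference_ms": 412.7,
    "timestamp": time.time(),
})
producer.flush()
```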
The ICICLE instance of the ML Field Planner is accessible here.
- You can either log in with your TACC account or via CILogon.

- Once logged in, navigate to the Camera Traps Edge Simulator Dashboard by selecting `ML Edge` on the left side menu and then clicking on the `Go to Analysis Environment` box.
- To prepare an analysis run, first select a model. You can either choose a model from the dropdown menu or click the `provide model id` button above the dropdown. We currently only support Patra model card IDs; you can find a list of models by navigating to `ML Hub` --> `models` and choosing `Patra` as the `Platform`.
- After the model has been selected, choose a dataset to run the model against. There is a button to choose between a video or image dataset. For both, we provide example datasets, but if you would like to test against your own data, select `provide dataset id` and provide a URL to your dataset.
- Next, select a site from the dropdown to run the experiment on. The ICICLE deployment of the ML Field Planner has access to hardware at two sites, Chameleon and TACC, each with different types of hardware available.
- After selecting the site, select the type of hardware to run on.
- Finally, to run the ML pipeline with any advanced features, provide a JSON configuration in the `Advanced Config` text field. For a list of features supported by the Camera Traps Edge Software, see the README.
- Click the `Analyze` button to begin the analysis.
- You should see your analyses, including this experiment, under the submission button in the `Analyses` table. From the table, you can view the status of the job and even jump to an archive of the run directory (a programmatic status check is sketched after these steps). Note the `UUID` of the experiment.
- To view metrics captured by the CKN, select the `CKN Dashboard` on the left menu and then select `Camera Traps`. Choose your username in the first dropdown and then choose your experiment UUID in the second dropdown.
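If you prefer to check on an analysis outside the dashboard, the run can also be inspected through tapipy, as in the hedged sketch below. The job UUID comes from the `Analyses` table; the tenant URL is a placeholder and the output path is an assumption, since the archive layout depends on the job definition.

```python
from tapipy.tapis import Tapis

t = Tapis(base_url="https://icicleai.tapis.io",   # placeholder tenant URL
          username="<your-username>", password="<your-password>")
t.get_tokens()

JOB_UUID = "<experiment-uuid>"   # taken from the Analyses table

# Poll the job status reported in the Analyses table.
print(t.jobs.getJobStatus(jobUuid=JOB_UUID).status)

# List files archived from the run directory (output path is an assumption).
for item in t.jobs.getJobOutputList(jobUuid=JOB_UUID, outputPath="/"):
    print(item.path)
```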
This work has been funded by grants from the National Science Foundation, including the ICICLE AI Institute (OAC 2112606) and Tapis (OAC 1931439).
