ML Field Planner

The ML Field Planner is a framework for analyzing ML pipelines and studying edge-to-center tradeoffs in the placement of ML functions. Using ML Field Planner, researchers configure experiments to run on real IoT hardware, configure machine learning models to analyze custom benchmark datasets, and experiment with different algorithm configurations, such as storage compression, all from a graphical user interface.

Please cite the following paper if you use this tool in your research: Joe Stubbs, Sowbaranika Balasubramaniam, Samuel Khuvis, Sachith Withana, Manikya Swathi Vallabhajosyula, Richard Cardone, Christian Garcia, Nathan Freeman, Carlos Guzman, Beth Plale, Rajiv Ramnath, and Tanya Berger-Wolf. 2025. ML Field Planner: Analyzing and Optimizing ML Pipelines For Field Research. In Practice and Experience in Advanced Research Computing 2025: The Power of Collaboration (PEARC '25). Association for Computing Machinery, New York, NY, USA, Article 8, 1–9. https://doi.org/10.1145/3708035.3736013

  • Software
  • CI4AI
  • Animal Ecology

Explanation

Software Components

The ML Field Planner is a framework consisting of several software components, described in the architectural overview below.

Architectural Overview

(Figure: ML Field Planner architecture diagram)

The ML Field Planner provides an authenticated framework to submit ML pipelines to edge and cloud devices and analyze the results, in order to make decisions on edge-to-center tradeoffs and test new algorithms.

The planner is powered by the Tapis framework [TapisBase], which provides an authenticated environment [FederatedAuthService] that includes the Camera Traps Edge Simulator Dashboard, a graphical user interface [TapisUI, icicleai-tapisui-extension, CameraTrapsEdgeSimDashboard] for submitting jobs, as well as a dashboard for viewing job metrics [CKN].

Once the user selects the hardware, model, and dataset and submits the analysis run from the Camera Traps Edge Simulator Dashboard, a Tapis job is generated. This launches the hardware and software provisioner [CTController] on a backend node, which handles provisioning the hardware, setting up and running the ML pipeline, and shutting down the hardware.
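
For readers scripting against Tapis directly, a minimal sketch of submitting such a job with the tapipy client is shown below. The tenant URL, app id and version, and environment-variable names are placeholders for illustration; they are not the actual ML Field Planner job definition.

```python
from tapipy.tapis import Tapis

# Authenticate against a Tapis tenant (base URL and credentials are placeholders).
t = Tapis(base_url="https://icicleai.tapis.io",
          username="your_username", password="your_password")
t.get_tokens()

# Submit a job against a registered Tapis app; the app id/version and the
# environment variables below are illustrative, not the real app definition.
job = t.jobs.submitJob(
    name="ml-field-planner-analysis",
    appId="ctcontroller",   # hypothetical app id
    appVersion="1.0",
    parameterSet={
        "envVariables": [
            {"key": "MODEL_ID", "value": "<patra-model-card-id>"},
            {"key": "DATASET_URL", "value": "<dataset-url>"},
        ]
    },
)
print(job.uuid, job.status)
```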

The ML pipeline launched from the dashboard is built using the Event Engine [EventEngine], which allows plugins to communicate with each other over ZMQ sockets. The Camera Traps Edge Software [CameraTrapsEdgeSoftware] is a set of plugins, deployed in a Docker container, that communicate across the Event Engine. When provided with a set of images or a prerecorded video, it can be run in simulation mode on the provisioned hardware to simulate a real ML-enabled camera trap.
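
To illustrate the kind of plugin-to-plugin messaging the Event Engine enables, here is a minimal pyzmq sketch of one plugin publishing an event that another consumes. The socket address, event name, and payload are made up for illustration and do not reflect the Event Engine's actual API.

```python
import json
import time
import zmq

ctx = zmq.Context()

# "Image generating" side: publish a new-image event.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5555")

# "Image scoring" side: subscribe to new-image events.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5555")
sub.setsockopt_string(zmq.SUBSCRIBE, "new_image")

time.sleep(0.5)  # allow the subscription to propagate before publishing

pub.send_string("new_image " + json.dumps({"path": "/data/img_0001.jpg"}))
topic, payload = sub.recv_string().split(" ", 1)
print(topic, json.loads(payload))
```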

As part of the setup that ctcontroller does to prepare the provisioned hardware for the pipeline, it sends a request to the Patra Knowledge Graph [PatraKG] using the model ID specified by the user and parses the returned model card to obtain a download location for the model, which is then downloaded to the local device. New model cards can be created and added with the Patra model card toolkit [PatraToolkit].
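
A rough sketch of this fetch-card-then-download step is shown below using the requests library. The endpoint URL, model card id, and field names are assumptions made for illustration; the real Patra Knowledge Graph API and model card schema may differ.

```python
import requests

# Placeholder endpoint and example model id; not the real Patra KG service.
PATRA_KG_URL = "https://example.org/patra/api"
model_id = "example-detector-model"

# Fetch the model card for the requested model id.
resp = requests.get(f"{PATRA_KG_URL}/modelcards/{model_id}", timeout=30)
resp.raise_for_status()
card = resp.json()

# Assume the card exposes a download location for the model weights.
model_url = card["ai_model"]["location"]

# Download the model to the local device.
with requests.get(model_url, stream=True, timeout=60) as dl:
    dl.raise_for_status()
    with open("model.pt", "wb") as f:
        for chunk in dl.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```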

As the ML pipeline runs, a CKN daemon streams metric data, including model and system performance, from the local device to a CKN broker running on a backend node [CKN]; these metrics are viewable from a dashboard in the graphical user interface [TapisUI, icicleai-tapisui-extension].
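
As a rough sketch of this kind of metric streaming, the snippet below publishes a performance event to a Kafka-style broker with kafka-python. The broker address, topic name, and message fields are assumptions for illustration, not CKN's actual configuration or schema.

```python
import json
from kafka import KafkaProducer

# Broker address, topic, and message fields are placeholders, not CKN's real setup.
producer = KafkaProducer(
    bootstrap_servers="ckn-broker.example.org:9092",
    value_serializer=lambda m: json.dumps(m).encode("utf-8"),
)

event = {
    "experiment_id": "c1a2b3d4-...",   # UUID of the analysis run (example)
    "device": "jetson-nano-01",        # example device name
    "model_accuracy": 0.87,
    "probe_time_ms": 142,
    "cpu_percent": 63.5,
}

producer.send("cameratraps-performance", value=event)
producer.flush()
```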


How-To Guide

How-To Run on the ICICLE Instance

The ICICLE instance of the ML Field Planner is accessible here.

  1. You can either log in with your TACC account or via CILogon. (Screenshot: Login)
  2. Once logged in, navigate to the Camera Traps Edge Simulator Dashboard by selecting ML Edge in the left side menu and then clicking the Go to Analysis Environment box. (Screenshot: Edge simulation)
  3. To prepare an analysis run, first select a model. You can either choose a model from the dropdown menu or click the provide model id button above the dropdown. We currently only support Patra model card IDs; you can find a list of models by navigating to ML Hub --> models and choosing Patra as the Platform. (Screenshot: ML Hub)
  4. After the model has been selected, choose a dataset to run the model against. There is a button to choose between a video or image dataset. For both, we provide example datasets, but if you would like to test against your own data, select provide dataset id and provide a URL to your dataset.
  5. Next, select a site to run the experiment from the dropdown. The ICICLE deployment of ML Field Planner has access to hardware at two sites, Chameleon and TACC, each with different types of hardware available.
  6. After selecting the site, select the type of hardware to run on.
  7. Finally, to run the ML pipeline with any advanced features, provide a JSON configuration in the Advanced Config text field. For a list of features supported by the Camera Traps Edge Software, see its README; a hedged example configuration appears after this list.
  8. Click the Analyze button to begin the analysis. (Screenshot: Example analysis form)
  9. You should see your analyses, including the one just submitted, in the Analyses table under the submission button. From the table, you can view the status of the job and even jump to an archive of the run directory. Note the UUID of the experiment. (Screenshot: Submitted analysis jobs)
  10. To view metrics captured by the CKN, select the CKN Dashboard in the left menu and then select Camera Traps. Choose your username in the first dropdown and then choose your experiment UUID in the second dropdown. (Screenshot: CKN dashboard)
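
As a purely hypothetical illustration of the kind of JSON that could go in the Advanced Config field (step 7), the sketch below builds a small configuration in Python and prints it as JSON. The key names are invented for illustration and are not the Camera Traps Edge Software's actual options; consult its README for the real schema.

```python
import json

# Hypothetical keys for illustration only; see the Camera Traps Edge Software
# README for the options it actually supports.
advanced_config = {
    "image_compression": {"enabled": True, "quality": 75},
    "scoring_threshold": 0.4,
    "store_all_images": False,
}

# Paste the resulting JSON into the Advanced Config text field.
print(json.dumps(advanced_config, indent=2))
```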

Acknowledgements

This work has been funded by grants from the National Science Foundation, including the ICICLE AI Institute (OAC 2112606) and Tapis (OAC 1931439).
