Toy ML Ops Pipeline

A simple demonstration of an ML Ops pipeline involving three stages:

  1. Data Ingestion
  2. Model Training
  3. Model Analysis

To add your own pipeline, model, datasets, etc., take a look at pipelines/README.md
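
For orientation, here is a minimal sketch of how three such stages could be chained; the function names and signatures are illustrative assumptions, not the repository's actual pipeline API (pipelines/README.md documents that).

# Illustrative three-stage skeleton; names and signatures are assumptions,
# not the repository's actual pipeline API.
def ingest_data(input_dir: str) -> str:
    """Validate the raw dataset and stage it under data/; return that path."""

def train_model(data_dir: str) -> str:
    """Train a model on the staged data; return a checkpoint path."""

def analyze_model(checkpoint: str, data_dir: str) -> dict:
    """Evaluate the trained model and return metrics."""

def run_pipeline(input_dir: str) -> dict:
    data_dir = ingest_data(input_dir)
    checkpoint = train_model(data_dir)
    return analyze_model(checkpoint, data_dir)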

Getting started

Download and unzip the karthika95-pedestrian-detection kaggle dataset to ~/Downloads/karthika95-pedestrian-detection/.

Data ingestion can be run as follows; it validates the dataset and stores it under data/.

python data_ingestion.py --input_dir ~/Downloads/karthika95-pedestrian-detection/ --pipeline_name obj_det --interpreter_name karthika95-pedestrian-detection

For the klemenko-kitti-dataset, the equivalent invocation is

python3 data_ingestion.py --input_dir ~/datasets/klemenko-kitti-dataset/ --pipeline_name obj_det --interpreter_name KITTI_lemenko_interp
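
For intuition, the sketch below shows what "validate the dataset and store it under data/" might look like inside an interpreter; the class name, methods, and directory layout are assumptions for illustration, not the repository's actual interpreter interface (see pipelines/README.md for that).

# Hypothetical interpreter sketch; class name, methods, and layout are
# illustrative assumptions, not the repository's actual interface.
import shutil
from pathlib import Path

class ToyInterpreter:
    def __init__(self, input_dir: str, pipeline_name: str):
        self.input_dir = Path(input_dir).expanduser()
        self.output_dir = Path("data") / pipeline_name

    def validate(self) -> None:
        # Minimal sanity checks before anything is copied.
        if not self.input_dir.is_dir():
            raise FileNotFoundError(f"{self.input_dir} does not exist")
        if not any(self.input_dir.rglob("*.jpg")):
            raise ValueError("no .jpg images found in the input directory")

    def store(self) -> None:
        # Stage the validated dataset under data/<pipeline_name>/ (Python 3.8+).
        self.output_dir.mkdir(parents=True, exist_ok=True)
        shutil.copytree(self.input_dir, self.output_dir, dirs_exist_ok=True)

interp = ToyInterpreter("~/Downloads/karthika95-pedestrian-detection/", "obj_det")
interp.validate()
interp.store()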

To view logs

watch -n 1 "wget -qO-  http://bani-c-0069l.ban.apac.bosch.com:8081/open/logs/stdout_main_git.log | tail"

Running as a systemd service

Copy mlops.service to /etc/systemd/system/. The service can then be started and its status checked with the following commands.

sudo cp mlops.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start mlops.service
sudo systemctl status mlops.service

To view the logs of a specific subprocess, use the tmux script

./logs.sh
# OR
tmux source-file mlops.tmux

Depth Perception Demo

Download the AirSim dataset, then run data ingestion on it

python data_ingestion.py \
	--input_dir ~/Downloads/2022-05-22-11-10-49 \
	--pipeline_name depth_det \
	--interpreter_name depth_interp_airsim

Web UI

Start the REST API server

FLASK_APP=rest_server.py FLASK_ENV=development flask run
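
The development server listens on Flask's default address, http://127.0.0.1:5000. A quick reachability check from Python (the actual endpoint paths depend on rest_server.py, so "/" is used here only as a smoke test):

# Smoke test against the Flask development server.
# Endpoint paths depend on rest_server.py; "/" only checks reachability.
import requests

resp = requests.get("http://127.0.0.1:5000/")
print(resp.status_code)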

Start the web UI

cd mlops-react-dashboard
yarn install
yarn start

Install this specific version of browsepy

python -m pip install git+https://github.com/AdityaNG/browsepy.git@galary_support

Run the Streamlit web UI

streamlit run web_ui.py

Code Quality

Static Code Analysis

python -m pylint *.py
python -m pycodestyle *.py

Unit Testing

Run the unit tests

python -m pytest --import-mode=append tests/
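
pytest discovers any test_*.py files placed under tests/; a minimal example of the pattern (the function under test here is an inline placeholder, not repository code):

# tests/test_example.py -- illustrative placeholder; the function under
# test is defined inline and is not part of the repository.
def add(a: int, b: int) -> int:
    return a + b

def test_add():
    assert add(2, 3) == 5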

Models

Detectron2 folder:

Facebook's Detectron2 framework, which provides the following models:
	Faster R-CNN object detection model
	Mask R-CNN and PointRend image segmentation models
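
As a reference point, these models are usually loaded through Detectron2's standard model zoo API, roughly as below; the wrapper code in this folder may differ.

# Standard Detectron2 model-zoo usage for a Faster R-CNN detector;
# the wrapper code in this repository may differ. example.jpg is a placeholder.
import cv2
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # detection confidence threshold

predictor = DefaultPredictor(cfg)
outputs = predictor(cv2.imread("example.jpg"))  # Detectron2 expects a BGR image
print(outputs["instances"].pred_classes)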

Pointrend folder:

PointRend model code. Create a separate Python 3 environment to run it.

Vanilla Mask R-CNN:

The official Mask R-CNN implementation can be cloned from https://github.com/matterport/Mask_RCNN
Then clone my implementation from https://github.com/NikhilAdyapak/ImageSegmentation/tree/main/MRCNN
Create a separate Python 3.7 environment to run it.

Yolov3 folder:

My implementation using YOLOv3.
Download yolov3.weights from https://pjreddie.com/media/files/yolov3.weights
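
If the weights are used through OpenCV's DNN module (an assumption; the code in this folder may load them differently), the usual pattern also needs the matching yolov3.cfg from the Darknet repository:

# One common way to load YOLOv3 Darknet weights: OpenCV's DNN module.
# Assumes yolov3.cfg sits next to yolov3.weights; example.jpg is a placeholder.
import cv2

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
image = cv2.imread("example.jpg")
blob = cv2.dnn.blobFromImage(image, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)
outputs = net.forward(net.getUnconnectedOutLayersNames())
print(len(outputs))  # three output scales for YOLOv3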
