In this Hands-on-Lab, you will create an image classification service that runs in Snowpark Container Services. You will also create a file processing pipeline using streams, tasks, and a UDF.
- Part 1: local development and testing of the container
- Part 2: creating the SPCS service and image-processing pipeline through the Snowsight UI
The service we will run is a Flask API that uses a TensorFlow model to classify American Sign Language numbers from images. The service is written in Python, but it is important to know that SPCS can run containers with code written in any language, using any package. Awesome, right!?
We will start by running the Flask server locally. First, clone this repository:
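To give a feel for the shape of such a service, here is a minimal, hypothetical sketch of a Flask app with the two endpoints this lab uses. It is not the repository's actual app.py: a stub stands in for the TensorFlow model so the sketch runs without TensorFlow, and the request body is assumed to mirror the `{"data": [[row, value]]}` shape seen in the response later in this lab.

```python
# Hypothetical sketch of the service's shape. The real app.py loads a
# TensorFlow model; here a stub classifier stands in so the sketch
# runs on its own.
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify(image_b64: str) -> str:
    """Stub for the model's predict step; always returns "3"."""
    return "3"

@app.route("/healthcheck")
def healthcheck():
    return "I'm ready"

@app.route("/prediction", methods=["POST"])
def prediction():
    # Assumed payload shape: {"data": [[row_index, image_b64], ...]}
    rows = request.get_json()["data"]
    return jsonify({"data": [[idx, classify(img)] for idx, img in rows]})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)
```

The row-index-plus-value shape matters later: it is what lets Snowflake map each returned prediction back to the input row that produced it.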
git clone https://github.com/sfc-gh-tosmith/image-classification-spcs.git
then change into the newly created directory
cd image-classification-spcs
Next, create a conda environment by typing this in your terminal. If you do not have conda already installed, follow the install directions on the official conda website.
conda create -n "image-classification-spcs" python=3.9
Activate your new environment by entering
conda activate image-classification-spcs
Install the necessary dependencies by typing
conda install --yes --file requirements.txt
Test the Flask app by running:
(Mac)
python3 app.py
(Windows)
python app.py
Go to localhost:5000/healthcheck in your browser; it should say "I'm ready".
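If you would rather script the healthcheck than open a browser, a stdlib `urlopen` call is enough. The sketch below is self-contained for illustration: a stub HTTP handler stands in for the Flask app so it runs on its own; against the real app you would only need the `urlopen` line, pointed at localhost:5000.

```python
# Scripted healthcheck. Assumption: a stub server stands in for the
# Flask app so this snippet is self-contained; against the real app,
# only the urlopen() call is needed.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ok = self.path == "/healthcheck"
        body = b"I'm ready" if ok else b"not found"
        self.send_response(200 if ok else 404)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass

server = HTTPServer(("127.0.0.1", 0), StubHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

port = server.server_address[1]
status = urlopen(f"http://127.0.0.1:{port}/healthcheck").read().decode()
print(status)  # should print: I'm ready
server.shutdown()
```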
Using an API request tool like Postman, build a POST request to localhost:5000/prediction. Copy the example_request.json file contents into the body of the request. Send it, and the response should look something like this:
{
"data": [
[
0,
"3"
]
]
}

For part 2, follow the setup.sql file. In this file you will:
- Create a database, schema, and stages
- Create a compute pool to run your service on
- Build and push the docker container from your local machine into Snowflake
- Create the image classification service, running in Snowpark Container Services
- Create a function to interact with the running service
- Build the stream, UDF, and task for the file processing pipeline
- Use dynamic tables instead of a stream and task
- Access and process the image file directly in the container, rather than with the stream, task, UDF pipeline
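At the heart of the stream/task pipeline, the UDF boils down to one HTTP round trip: encode the staged image bytes into the `{"data": [[row, value]]}` payload, POST it to the service's /prediction endpoint, and unwrap the returned label. A hypothetical sketch of that core logic follows; the HTTP call is injected as a function so the sketch runs without a live service (in the real UDF you would POST with a library such as requests), and the stub's fixed "3" label is purely illustrative.

```python
import base64
import json

def classify_image(image_bytes: bytes, post_fn) -> str:
    """Hypothetical core of the pipeline's UDF: wrap the image in the
    {"data": [[row, value]]} payload, send it to the service via the
    injected post_fn, and unwrap the predicted label."""
    payload = {"data": [[0, base64.b64encode(image_bytes).decode()]]}
    response = post_fn(json.dumps(payload))     # POST to /prediction
    return json.loads(response)["data"][0][1]   # e.g. "3"

# Stub standing in for the SPCS service endpoint:
def fake_service(body: str) -> str:
    rows = json.loads(body)["data"]
    return json.dumps({"data": [[idx, "3"] for idx, _ in rows]})

label = classify_image(b"\x89PNG...", fake_service)
print(label)  # prints: 3
```

Keeping the transport injectable also makes the UDF logic easy to unit-test locally before it ever touches the stream and task wiring.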
This lab brings together content from the following sources: