Learn to build and develop AWS Lambda functions locally with Docker

This guide walks you through setting up a robust local Serverless development environment using Docker, AWS Lambda, TypeScript, and LocalStack.
It focuses on emulating the cloud runtime entirely offline, optimising production images with multi-stage builds, and mocking external services like S3 to create a complete, cost-free development workflow.


🛑 Prerequisites

General/global prerequisites

Before beginning this workshop, please ensure your environment is correctly set up by following the instructions in our prerequisites documentation:

➡️ Prerequisites guide

Load Docker images

Caution

This only works when attending a workshop in person.
With many attendees pulling Docker images at the same time, loading them from a local file server is far more efficient than everyone downloading from the registry.

If you are NOT in an in-person workshop, skip ahead to the workshop; Docker images will be pulled as needed.

Once the facilitator has given you an IP address, open http://<IP-ADDRESS>:8000 in your browser.

When you see the file listing, download the workshop-images.tar file.

Warning

Your browser may block the download initially; when prompted, allow it to download.

Run the following command:

docker load -i ~/Downloads/workshop-images.tar

Validate Docker images

Run the following command:

docker images

Note

You should now see four images listed.

$ docker images
REPOSITORY                     TAG       IMAGE ID       CREATED        SIZE
localstack/localstack          latest    de4d3256398a   25 hours ago   1.17GB
public.ecr.aws/lambda/python   3.14      983ca119258a   3 days ago     584MB
public.ecr.aws/lambda/nodejs   24        30d41baede74   3 days ago     449MB
curlimages/curl                latest    26c487d15124   2 weeks ago    24.5MB

Image IDs, creation times and sizes may vary.


1. The foundation

Goal: Get a working container environment running.

Create project folder

Create a new folder for your project:

mkdir -p ~/Documents/daemon-labs/docker-aws-lambda

Note

You can either create this via a terminal window or your file explorer.

Open the new folder in your code editor

Tip

If you are using VSCode, you can now do everything from within the code editor.
You can open the terminal pane via Terminal -> New Terminal.

Create the code subdirectory

We keep our application code separate from infrastructure config.

mkdir ./nodejs

Create the Dockerfile

Create the file at nodejs/Dockerfile (inside the subdirectory).

FROM public.ecr.aws/lambda/nodejs:24

Create docker-compose.yaml

Create this file in the root of your project.

services:
  lambda:
    build: ./nodejs

Run the initial build

Run the following command:

docker compose build

Note

At this stage, if you loaded the Docker images as part of the prerequisites, running docker images will show the lambda/nodejs base image as well as a new image of the same size (our Dockerfile hasn't added anything to it yet).

Initialise the container

Run this command to start an interactive shell:

docker compose run -it --rm --entrypoint /bin/sh -v ./nodejs:/var/task lambda

Warning

Because AWS does not publish multi-platform images, we need to start an interactive shell rather than passing commands in directly.
For example, if we were to run the following command:

docker compose run -it --rm --entrypoint /bin/sh -v ./nodejs:/var/task lambda node --version

In some cases, we would receive the error /var/lang/bin/node: /var/lang/bin/node: cannot execute binary file.

Image check

Run the following command:

node --version

Note

The output should start with v24 followed by the latest minor and patch version.


2. The application

Goal: Initialise a TypeScript Node.js project.

Initialise the project

Inside the container shell:

npm init -y

Note

Notice how the nodejs/package.json file is automatically created on your host machine due to the volume mount.

Install dependencies

npm add --save-dev @types/node@24 @types/aws-lambda @tsconfig/recommended typescript

Note

Notice this automatically creates a nodejs/package-lock.json file as well as the nodejs/node_modules directory.

Exit the container

exit

Note

At this stage, we no longer need the interactive shell and can return to the code editor. Even though dependencies have been installed, if you run docker images again, you'll see the image size hasn't changed because the node_modules were written to your local volume, not via an image layer.

Configure TypeScript

Create nodejs/tsconfig.json locally:

{
  "extends": "@tsconfig/recommended/tsconfig.json",
  "compilerOptions": {
    "outDir": "./build"
  }
}

Note

While you could auto-generate this file, our manual configuration using a recommended preset keeps the file minimal and clean.

Create the handler

Create nodejs/src/index.ts:

import { Handler } from "aws-lambda";

export const handler: Handler = async (event, context) => {
  console.log({ event, context });

  return {
    statusCode: 200,
    body: { event, context },
  };
};

Add build script

Update nodejs/package.json scripts:

"build": "tsc"

Note

At this stage we have the main building blocks for the application, but our runtime doesn't know what to do with them.


3. The runtime

Goal: Make the container act like a real Lambda server.

Add .dockerignore

Create nodejs/.dockerignore (inside the subdirectory):

build
node_modules

Note

We're making sure that, no matter where the image is built, it never pulls in previously built files or a local node_modules directory.
That way, every build is done in an identical way, reducing the possibility of "it worked on my machine".

Update Dockerfile

Update nodejs/Dockerfile:

FROM public.ecr.aws/lambda/nodejs:24

# Copy the application source into the Lambda task root (/var/task)
COPY ./ ${LAMBDA_TASK_ROOT}

# Install the exact locked dependencies, then compile TypeScript into ./build
RUN npm ci && npm run build

# The handler: the exported `handler` function in build/index.js
CMD [ "build/index.handler" ]

Run the following command:

docker compose build

Note

As we're now doing the dependency install as part of the build, when you run docker images you'll notice our Docker image has increased in size.

$ docker images
REPOSITORY                     TAG       IMAGE ID       CREATED         SIZE
your-lambda                    latest    05b92630088f   3 seconds ago   483MB
public.ecr.aws/lambda/nodejs   24        30d41baede74   3 days ago      449MB

Tip

When running docker images you might notice that you have got a dangling image that looks a bit like this:

$ docker images
REPOSITORY                     TAG       IMAGE ID       CREATED         SIZE
your-lambda                    latest    05b92630088f   3 seconds ago   483MB
public.ecr.aws/lambda/nodejs   24        30d41baede74   3 days ago      449MB
<none>                         <none>    17e6c55f785f   3 days ago      449MB

When you rebuilt the image, Docker moved the "nametag" to your new version, leaving the old version behind as a nameless orphan.

Any dangling images can be cleaned with the following command:

docker image prune

Update Lambda healthcheck

Update docker-compose.yaml:

lambda:
  build: ./nodejs
  healthcheck:
    test:
      - CMD
      - curl
      - -I
      - http://localhost:8080
    interval: 1s
    timeout: 1s
    retries: 30

Tip

The healthcheck allows Docker (and us) to know when a container is up and running as expected.
If you were to run docker compose up and then run docker ps in a different terminal window while our containers were starting up you might see the following:

$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED        STATUS                                     PORTS     NAMES
bf2696aeaabf   your-lambda       "/lambda-entrypoint.…"   1 second ago   Up Less than a second (health: starting)             your-lambda-1

If you ran docker ps once the container was able to pass the healthcheck you would hopefully see the following:

$ docker ps
CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS                    PORTS     NAMES
bf2696aeaabf   your-lambda       "/lambda-entrypoint.…"   36 seconds ago   Up 35 seconds (healthy)             your-lambda-1

If the container wasn't able to pass the healthcheck then you would eventually see unhealthy instead.

If you did run docker compose up, you will need to press Ctrl+C on your keyboard to exit the container.
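If you'd rather query the health state directly than read it from the docker ps output, docker inspect can extract it (assuming the container name your-lambda-1 shown in the listings above):

docker inspect --format '{{.State.Health.Status}}' your-lambda-1

This prints starting, healthy or unhealthy.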

Add cURL service

Update docker-compose.yaml (in the root) to include a service that triggers our Lambda.

services:
  curl:
    image: curlimages/curl
    depends_on:
      lambda:
        condition: service_healthy
    command:
      - -s
      - -d {}
      - http://lambda:8080/2015-03-31/functions/function/invocations
  # ... existing config

Note

As we have the healthcheck in place, we can actually tell the curl container not to start until it gets that healthy response.

Run the stack

Run the following command:

docker compose up

Warning

The problem with this specific command is that the Lambda container continues to run even after the cURL container has run and exited.
Exit your containers by pressing Ctrl+C on your keyboard.

Tell Docker to terminate the containers

Run the following command:

docker compose up --abort-on-container-exit

Tip

With this extra flag, we've told Docker to terminate all other running containers when one exits.
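If you also want your shell to receive the curl container's exit code (handy in CI pipelines), Compose offers --exit-code-from, which implies --abort-on-container-exit:

docker compose up --exit-code-from curl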


4. Developer experience

Goal: Simulate real-world events and environments.

Add environment variables

Update docker-compose.yaml:

services:
  # ... existing config
  lambda:
    # ... existing config
    environment:
      AWS_LAMBDA_FUNCTION_MEMORY_SIZE: 128
      AWS_LAMBDA_FUNCTION_TIMEOUT: 3
      AWS_LAMBDA_LOG_FORMAT: JSON

Check the updated values

Run the following command:

docker compose up --abort-on-container-exit

Note

On this execution you'll be able to confirm two of the values are working.
Find the Lambda REPORT log and you'll now see Memory Size and Max Memory Used are set to 128 MB instead of the previous 3008 MB.
Find the log for event and context and you'll see it has now switched to a structured JSON log rather than loosely formatted plain text.

Check the timeout

Update docker-compose.yaml:

AWS_LAMBDA_FUNCTION_TIMEOUT: 0

Run the following command:

docker compose up --abort-on-container-exit

Note

On this execution you'll see that the curl container received Task timed out after 0.00 seconds.
Find the Lambda REPORT again and you'll see Init Duration, Duration and Billed Duration are all set to 0 ms.

Be sure to set AWS_LAMBDA_FUNCTION_TIMEOUT back to 3 now.

Create the events subdirectory

Create the events subdirectory in the root (keep events outside the code folder):

mkdir ./events

Create a custom event file

Create events/custom.json:

{
  "user": "Alice"
}

Create API Gateway event file

Create events/api-gateway.json:

{
  "resource": "/",
  "path": "/",
  "httpMethod": "POST",
  "body": "{\"user\": \"Alice\"}",
  "isBase64Encoded": false
}

Note

Lambdas can technically receive any payload, but they are often invoked by other AWS services that send well-defined event shapes, so it is very useful to replicate those payloads as closely as possible.
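As an illustration only (not part of the workshop code), a handler could branch on the event shape; the type guard below is a hypothetical helper:

import { Handler, APIGatewayProxyEvent } from "aws-lambda";

// Hypothetical type guard: API Gateway proxy events always carry httpMethod.
const isApiGatewayEvent = (event: unknown): event is APIGatewayProxyEvent =>
  typeof event === "object" && event !== null && "httpMethod" in event;

export const handler: Handler = async (event) => {
  // API Gateway delivers the client payload as a JSON string in `body`,
  // whereas a direct invocation delivers the payload object as-is.
  const payload = isApiGatewayEvent(event)
    ? JSON.parse(event.body ?? "{}")
    : event;

  console.log({ payload });

  return { statusCode: 200, body: JSON.stringify(payload) };
};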

Inject the event

Update docker-compose.yaml:

services:
  curl:
    # ... existing config
    command:
      - -s
      - -d
      - ${LAMBDA_INPUT:-{}}
      - http://lambda:8080/2015-03-31/functions/function/invocations
    volumes:
      - ./events:/events:ro
  # ... existing config

Test with data

docker compose up --abort-on-container-exit
LAMBDA_INPUT=@/events/custom.json docker compose up --abort-on-container-exit
LAMBDA_INPUT=@/events/api-gateway.json docker compose up --abort-on-container-exit

Note

With each of these commands, you'll notice that the curl container receives a slightly different response as the event changes.
The first command didn't include the LAMBDA_INPUT variable, so the ${LAMBDA_INPUT:-{}} interpolation in docker-compose.yaml defaulted the input to {}.

Add a new log

Update nodejs/src/index.ts to include a new log:

import { Handler } from "aws-lambda";

export const handler: Handler = async (event, context) => {
  console.log("Hello world!");
  console.log({ event, context });

  return {
    statusCode: 200,
    body: { event, context },
  };
};

Run the following command:

docker compose up --abort-on-container-exit

Warning

Where's the log? Nothing has actually updated.
Because the image was built before the code change, we need to let Docker know to rebuild and pick up our changes.

Run the following command:

docker compose up --abort-on-container-exit --build

Note

Now, each time we run the containers, Docker is re-building everything and picking up any new changes.

Tip

Even though Docker is technically re-building each and every time, if there are no new changes, Docker will use cached layers resulting in faster executions.


5. Optimisation

Goal: Prepare for production with improved caching and multi-stage builds.

Improved caching

Replace nodejs/Dockerfile with this cache-optimised version:

FROM public.ecr.aws/lambda/nodejs:24

COPY ./package*.json ${LAMBDA_TASK_ROOT}

RUN npm ci

COPY ./ ${LAMBDA_TASK_ROOT}

RUN npm run build

CMD [ "build/index.handler" ]

Run the following command:

docker compose up --abort-on-container-exit --build

Tip

In this iteration, as npm ci and npm run build are two separate layers, a change to one doesn't invalidate the other.
For example, if we update our code without updating any packages, npm ci can still use its cached layer, whereas npm run build will be re-run.

=> CACHED [2/5] COPY ./package*.json /var/task
=> CACHED [3/5] RUN npm ci
=> [4/5] COPY ./ /var/task
=> [5/5] RUN npm run build

Multi-stage build

Replace nodejs/Dockerfile with this optimised version:

# Shared base so the builder and runtime stages use the identical image
FROM public.ecr.aws/lambda/nodejs:24 AS base

FROM base AS builder

COPY ./package*.json ${LAMBDA_TASK_ROOT}

# Full install (including dev dependencies) so TypeScript can compile
RUN npm ci

COPY ./ ${LAMBDA_TASK_ROOT}

RUN npm run build

# Final stage: only runtime dependencies and the compiled output
FROM base

COPY --from=builder ${LAMBDA_TASK_ROOT}/package*.json ${LAMBDA_TASK_ROOT}

RUN npm ci --omit=dev

COPY --from=builder ${LAMBDA_TASK_ROOT}/build ${LAMBDA_TASK_ROOT}/build

CMD [ "build/index.handler" ]

Run the following command:

docker compose up --abort-on-container-exit --build

Note

In this iteration, our built image only includes the files needed at runtime.
This means our Docker image has a reduced size, and any potential security risks from the development dependencies are removed as well.
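Tip

To confirm the reduction, run docker images again; the your-lambda image should now be noticeably smaller than the 483MB single-stage build, as dev dependencies and TypeScript sources are no longer included (exact sizes vary):

docker images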


6. Advanced integration

Goal: Connect to LocalStack.

Add LocalStack service to docker-compose.yaml

services:
  # ... existing config
  localstack:
    image: localstack/localstack
    healthcheck:
      test:
        - CMD
        - curl
        - -I
        - http://localhost:4566/_localstack/health
      interval: 1s
      timeout: 1s
      retries: 30

Update Lambda config

Update docker-compose.yaml:

services:
  # ... existing config
  lambda:
    # ... existing config
    depends_on:
      localstack:
        condition: service_healthy
    environment:
      AWS_LAMBDA_FUNCTION_MEMORY_SIZE: 128
      AWS_LAMBDA_FUNCTION_TIMEOUT: 3
      AWS_LAMBDA_LOG_FORMAT: JSON
      AWS_ENDPOINT_URL: http://localstack:4566
      AWS_SECRET_ACCESS_KEY: test
      AWS_ACCESS_KEY_ID: test
      AWS_REGION: us-east-1
  # ... existing config

Warning

Even though we aren't connecting to a real AWS account, the environment variables are still needed: the AWS SDK signs every request, so it requires credentials and a region even when the endpoint is LocalStack (which happily accepts the dummy values).

Update code

Run this command to start an interactive shell:

docker compose run -it --rm --no-deps --entrypoint /bin/sh -v ./nodejs:/var/task lambda

Tip

The --no-deps flag makes sure we run only the lambda container, without starting the services it depends on (such as localstack).

Install the SDK:

npm install @aws-sdk/client-s3

Exit the container

exit

Next, update nodejs/src/index.ts with the S3 client logic:

import { Handler } from "aws-lambda";
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({
  endpoint: process.env.AWS_ENDPOINT_URL, // Points to LocalStack
  forcePathStyle: true, // Required for local mocking
  region: process.env.AWS_REGION,
});

export const handler: Handler = async (event, context) => {
  console.log("Hello world!");
  console.log({ event, context });

  try {
    const command = new ListBucketsCommand({});
    const response = await client.send(command);

    console.log("S3 Buckets:", response.Buckets);

    return {
      statusCode: 200,
      body: JSON.stringify(response.Buckets || []),
    };
  } catch (error) {
    console.error(error);
    return {
      statusCode: 500,
      body: "Error connecting to S3",
    };
  }
};

Final run

Run the following command:

docker compose up --abort-on-container-exit --build

Note

During this build you'll notice both the npm ci and npm run build layers are rebuilt, because installing the SDK changed package.json and package-lock.json.
Also, as we haven't actually created any buckets, receiving an empty array is the correct response.
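Tip

If you'd like to see a non-empty response, you can create a bucket before invoking again (a quick sketch using the awslocal CLI that ships inside the LocalStack image; the bucket name is arbitrary):

docker compose up -d localstack
docker compose exec localstack awslocal s3 mb s3://demo-bucket
docker compose up --abort-on-container-exit --build

Compose reuses the already-running localstack container, so the bucket is still there when the Lambda lists it.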


7. Swapping runtimes

Goal: Demonstrate the versatility of Docker by swapping to Python.

Tip

One of the biggest advantages of developing Lambdas with Docker is that the infrastructure pattern remains exactly the same, regardless of the language you use.

Create a Python Dockerfile

Create python/Dockerfile with the following content:

FROM public.ecr.aws/lambda/python:3.14

COPY ./ ${LAMBDA_TASK_ROOT}

CMD [ "app.handler" ]

Create the Python handler

Create the handler file at python/app.py:

def handler(event, context):
    return "Hello World!"

Update docker-compose.yaml

Update the lambda service in docker-compose.yaml to point to the Python folder:

services:
  lambda:
    build: ./python

Note

Notice how we haven't actually changed anything else within the Lambda setup.
Despite using a different runtime, the way in which the Lambda is executed/invoked is exactly the same.

Run it

docker compose up --abort-on-container-exit --build

Note

You will see the build process switch to pulling the Python base image, but the curl command and event injection work exactly the same way.


8. Cleanup

Goal: Remove containers and reclaim disk space.

Since we are done with the workshop, let's remove the resources we created.

Run the following command:

docker compose ps -a

Note

Even though they're not running, we still have these containers sitting there doing nothing.

Run the following command:

docker compose images

Note

We also have these images which are taking up resources on our machine.

Run the following command:

docker compose down --rmi all

Note

This stops all services, removes the containers/networks, and deletes all images used by this project (including cURL, LocalStack, and the custom image we built).

Tip

You may have some dangling images from the changes we've made throughout this workshop; run the following to clean them up:

docker image prune

Warning

If you followed the prerequisites and ran docker load, this command will not actually remove all images; the lambda/nodejs and lambda/python images still exist.

To remove these you'll need to run the following:

docker rmi public.ecr.aws/lambda/nodejs:24
docker rmi public.ecr.aws/lambda/python:3.14

🎉 Congratulations

You have built a clean, modular, serverless development environment.
