This guide walks you through setting up a robust local Serverless development environment using Docker, AWS Lambda, TypeScript, and LocalStack.
It focuses on emulating the cloud runtime entirely offline, optimising production images with multi-stage builds, and mocking external services like S3 to create a complete, cost-free development workflow.
Before beginning this workshop, please ensure your environment is correctly set up by following the instructions in our prerequisites documentation:
➡️ Prerequisites guide
Caution
This step only applies if you are attending a workshop in person.
With a number of people trying to retrieve Docker images at the same time, loading them from a local file server is more efficient than everyone pulling from the registry at once.
If you are NOT in an in-person workshop, skip ahead and continue with the workshop; Docker images will be pulled as needed.
Once the facilitator has given you an IP address, open http://<IP-ADDRESS>:8000 in your browser.
When you see the file listing, download the workshop-images.tar file.
Warning
Your browser may block the download initially; when prompted, allow it to download.
Run the following command:
docker load -i ~/Downloads/workshop-images.tar
Run the following command:
docker images
Note
You should now see four images listed.
$ docker images
REPOSITORY                     TAG      IMAGE ID       CREATED        SIZE
localstack/localstack          latest   de4d3256398a   25 hours ago   1.17GB
public.ecr.aws/lambda/python   3.14     983ca119258a   3 days ago     584MB
public.ecr.aws/lambda/nodejs   24       30d41baede74   3 days ago     449MB
curlimages/curl                latest   26c487d15124   2 weeks ago    24.5MB
Image IDs, created dates and sizes may vary.
Goal: Get a working container environment running.
Create a new folder for your project:
mkdir -p ~/Documents/daemon-labs/docker-aws-lambda
Note
You can create this either via a terminal window or your file explorer.
Tip
If you are using VSCode, you can now do everything from within the code editor.
You can open the terminal pane via Terminal -> New Terminal.
We keep our application code separate from infrastructure config.
mkdir ./nodejs
Create the file at nodejs/Dockerfile (inside the subdirectory):
FROM public.ecr.aws/lambda/nodejs:24
Create docker-compose.yaml in the root of your project:
services:
  lambda:
    build: ./nodejs
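Tip
Before building, you can ask Compose to validate the file and print the fully resolved configuration. This is an optional sanity check using a standard Compose command, not a required workshop step:
# Validates docker-compose.yaml and prints the resolved config
docker compose config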
Run the following command:
docker compose build
Note
At this stage, if you loaded the Docker images as part of the prerequisites, running docker images should show the lambda/nodejs image as well as a new image of the same size.
Run this command to start an interactive shell:
docker compose run -it --rm --entrypoint /bin/sh -v ./nodejs:/var/task lambda
Warning
Because AWS does not publish multi-platform images, we need to start an interactive shell rather than passing commands in directly.
For example, if we were to run the following command:
docker compose run -it --rm --entrypoint /bin/sh -v ./nodejs:/var/task lambda node --version
In some cases, we would receive the error /var/lang/bin/node: /var/lang/bin/node: cannot execute binary file.
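Tip
If you want to check whether this affects your machine, you can compare the image's platform with your Docker engine's. This is an optional check using standard Docker commands, not part of the workshop steps:
# Platform the Lambda base image was built for (e.g. linux/amd64)
docker image inspect public.ecr.aws/lambda/nodejs:24 --format '{{.Os}}/{{.Architecture}}'
# Platform your Docker engine is running on
docker version --format '{{.Server.Os}}/{{.Server.Arch}}'
If the two differ (for example, a linux/amd64 image on an arm64 machine), binaries from the image may run under emulation or fail outright.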
Run the following command:
node --version
Note
The output should start with v24 followed by the latest minor and patch version.
Goal: Initialise a TypeScript Node.js project.
Inside the container shell:
npm init -y
Note
Notice how the nodejs/package.json file is automatically created on your host machine due to the volume mount.
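Tip
You can confirm this from the host in another terminal window. Plain shell; adjust the path if you created your project somewhere else:
# Lists the project folder on the host; package.json should now appear
ls ~/Documents/daemon-labs/docker-aws-lambda/nodejs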
npm add --save-dev @types/node@24 @types/aws-lambda @tsconfig/recommended typescript
Note
Notice this automatically creates a nodejs/package-lock.json file as well as the nodejs/node_modules directory.
exit
Note
At this stage, we no longer need the interactive shell and can return to the code editor.
Even though dependencies have been installed, if you run docker images again, you'll see the image size hasn't changed because the node_modules were written to your local volume, not via an image layer.
Create nodejs/tsconfig.json locally:
{
  "extends": "@tsconfig/recommended/tsconfig.json",
  "compilerOptions": {
    "outDir": "./build"
  }
}
Note
While you could auto-generate this file, our manual configuration using a recommended preset keeps the file minimal and clean.
Create nodejs/src/index.ts:
import { Handler } from "aws-lambda";

export const handler: Handler = async (event, context) => {
  console.log({ event, context });

  return {
    statusCode: 200,
    body: { event, context },
  };
};
Update nodejs/package.json scripts:
"build": "tsc"Note
At this stage we have the main building blocks for the application, but our runtime doesn't know what to do with them.
Goal: Make the container act like a real Lambda server.
Create nodejs/.dockerignore (inside the subdirectory):
build
node_modules
Note
We're making sure that no matter where we build the image, it never pulls in any built files or local node_modules.
That way, every build happens in an identical way, reducing the possibility of "it worked on my machine".
Update nodejs/Dockerfile:
FROM public.ecr.aws/lambda/nodejs:24
COPY ./ ${LAMBDA_TASK_ROOT}
RUN npm ci && npm run build
CMD [ "build/index.handler" ]Run the following command:
docker compose build
Note
As we're now doing the dependency install as part of the build, when you run docker images you'll notice our Docker image has increased in size.
$ docker images
REPOSITORY                     TAG      IMAGE ID       CREATED         SIZE
your-lambda                    latest   05b92630088f   3 seconds ago   483MB
public.ecr.aws/lambda/nodejs   24       30d41baede74   3 days ago      449MB
Tip
When running docker images you might notice that you have got a dangling image that looks a bit like this:
$ docker images
REPOSITORY                     TAG      IMAGE ID       CREATED         SIZE
your-lambda                    latest   05b92630088f   3 seconds ago   483MB
public.ecr.aws/lambda/nodejs   24       30d41baede74   3 days ago      449MB
<none>                         <none>   17e6c55f785f   3 days ago      449MB
When you rebuilt the image, Docker moved the "nametag" to your new version, leaving the old version behind as a nameless orphan.
Any dangling images can be cleaned with the following command:
docker image prune
Update docker-compose.yaml:
services:
  lambda:
    build: ./nodejs
    healthcheck:
      test:
        - CMD
        - curl
        - -I
        - http://localhost:8080
      interval: 1s
      timeout: 1s
      retries: 30
Tip
The healthcheck allows Docker (and us) to know when a container is up and running as expected.
If you were to run docker compose up and then run docker ps in a different terminal window while our containers were starting up, you might see the following:
$ docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED        STATUS                                      PORTS     NAMES
bf2696aeaabf   your-lambda   "/lambda-entrypoint.…"   1 second ago   Up Less than a second (health: starting)             your-lambda-1
If you ran docker ps once the container was able to pass the healthcheck, you would hopefully see the following:
$ docker ps
CONTAINER ID   IMAGE         COMMAND                  CREATED          STATUS                    PORTS     NAMES
bf2696aeaabf   your-lambda   "/lambda-entrypoint.…"   36 seconds ago   Up 35 seconds (healthy)             your-lambda-1
If the container wasn't able to pass the healthcheck, you would eventually see unhealthy instead.
If you did run docker compose up, you will need to press Ctrl+C on your keyboard to exit the container.
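Tip
If you just want the current health state without scanning the full docker ps output, docker inspect can read it directly. The container name your-lambda-1 is taken from the output above; yours may differ:
# Prints starting, healthy, or unhealthy
docker inspect --format '{{.State.Health.Status}}' your-lambda-1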
Update docker-compose.yaml (in the root) to include a service that triggers our Lambda.
services:
  curl:
    image: curlimages/curl
    depends_on:
      lambda:
        condition: service_healthy
    command:
      - -s
      - -d {}
      - http://lambda:8080/2015-03-31/functions/function/invocations
  # ... existing config
Note
As we have the healthcheck in place, we can actually tell the curl container not to start until it gets that healthy response.
Run the following command:
docker compose up
Warning
The problem with this specific command is that the Lambda container continues to run even after the cURL container has run and exited.
Exit your container by pressing Ctrl+C on your keyboard.
Run the following command:
docker compose up --abort-on-container-exit
Tip
With this extra flag, we've told Docker to terminate all other running containers when one exits.
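Tip
A related flag worth knowing about is --exit-code-from, which implies --abort-on-container-exit and additionally makes docker compose up return the chosen service's exit code. This is an optional variation, handy for CI, not a required workshop step:
# Aborts on exit and propagates the curl container's exit code
docker compose up --exit-code-from curl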
Goal: Simulate real-world events and environments.
Update docker-compose.yaml:
services:
  # ... existing config
  lambda:
    # ... existing config
    environment:
      AWS_LAMBDA_FUNCTION_MEMORY_SIZE: 128
      AWS_LAMBDA_FUNCTION_TIMEOUT: 3
      AWS_LAMBDA_LOG_FORMAT: JSON
Run the following command:
docker compose up --abort-on-container-exit
Note
On this execution you'll be able to confirm two of the values are working.
Find the Lambda REPORT log and you'll now see Memory Size and Max Memory Used are set to 128 MB instead of the previous 3008 MB.
Find the log for event and context and you'll see it has now switched to a JSON structured log rather than plain text broken across multiple log lines.
Update docker-compose.yaml:
AWS_LAMBDA_FUNCTION_TIMEOUT: 0
Run the following command:
docker compose up --abort-on-container-exit
Note
On this execution you'll see that the curl container received Task timed out after 0.00 seconds.
Find the Lambda REPORT again and you'll see Init Duration, Duration and Billed Duration are all set to 0 ms.
Be sure to set AWS_LAMBDA_FUNCTION_TIMEOUT back to 3 now.
Create the events subdirectory in the root (keep events outside the code folder):
mkdir ./events
Create events/custom.json:
{
  "user": "Alice"
}
Create events/api-gateway.json:
{
  "resource": "/",
  "path": "/",
  "httpMethod": "POST",
  "body": "{\"user\": \"Alice\"}",
  "isBase64Encoded": false
}
Note
Lambdas can technically receive any payload, but they can also be invoked by other AWS services, so it is very useful to replicate those event shapes as closely as possible.
Update docker-compose.yaml:
services:
  curl:
    # ... existing config
    command:
      - -s
      - -d
      - ${LAMBDA_INPUT:-{}}
      - http://lambda:8080/2015-03-31/functions/function/invocations
    volumes:
      - ./events:/events:ro
  # ... existing config
Run each of the following commands in turn:
docker compose up --abort-on-container-exit
LAMBDA_INPUT=@/events/custom.json docker compose up --abort-on-container-exit
LAMBDA_INPUT=@/events/api-gateway.json docker compose up --abort-on-container-exit
Note
With each of these commands, you'll notice that the curl container receives a slightly different response as the event changes.
For the first command we didn't include the LAMBDA_INPUT variable, so docker-compose.yaml defaulted the input to {}.
Update nodejs/src/index.ts to include a new log:
import { Handler } from "aws-lambda";

export const handler: Handler = async (event, context) => {
  console.log("Hello world!");
  console.log({ event, context });

  return {
    statusCode: 200,
    body: { event, context },
  };
};
Run the following command:
docker compose up --abort-on-container-exit
Warning
Where's the log? Nothing has actually updated.
As we're running the containers and stopping them each time, we need to let Docker know about any changes.
Run the following command:
docker compose up --abort-on-container-exit --build
Note
Now, each time we run the containers, Docker is re-building everything and picking up any new changes.
Tip
Even though Docker is technically re-building each and every time, if there are no new changes, Docker will use cached layers resulting in faster executions.
Goal: Prepare for production with improved caching and multi-stage builds.
Replace nodejs/Dockerfile with this cache-optimised version:
FROM public.ecr.aws/lambda/nodejs:24
COPY ./package*.json ${LAMBDA_TASK_ROOT}
RUN npm ci
COPY ./ ${LAMBDA_TASK_ROOT}
RUN npm run build
CMD [ "build/index.handler" ]Run the following command:
docker compose up --abort-on-container-exit --build
Tip
In this iteration, as npm ci and npm run build are two different layers, when one changes it doesn't impact the other.
For example, if we update our code without updating any packages, npm ci can still use its cached version, whereas npm run build will get rebuilt.
=> CACHED [2/5] COPY ./package*.json /var/task
=> CACHED [3/5] RUN npm ci
=> [4/5] COPY ./ /var/task
=> [5/5] RUN npm run build
Replace nodejs/Dockerfile with this optimised version:
FROM public.ecr.aws/lambda/nodejs:24 AS base
FROM base AS builder
COPY ./package*.json ${LAMBDA_TASK_ROOT}
RUN npm ci
COPY ./ ${LAMBDA_TASK_ROOT}
RUN npm run build
FROM base
COPY --from=builder ${LAMBDA_TASK_ROOT}/package*.json ${LAMBDA_TASK_ROOT}
RUN npm ci --only=production
COPY --from=builder ${LAMBDA_TASK_ROOT}/build ${LAMBDA_TASK_ROOT}/build
CMD [ "build/index.handler" ]Run the following command:
docker compose up --abort-on-container-exit --build
Note
In this iteration, our built image only includes the files needed to actually be executed.
This means our Docker image has a reduced size, and any potential security risks from the development dependencies are removed.
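Tip
If you're curious what each layer contributes to the final size, docker history breaks the image down layer by layer. The image name your-lambda matches the earlier docker images output; adjust it if yours differs:
# Shows each layer of the image with its size
docker history your-lambda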
Goal: Connect to LocalStack.
Update docker-compose.yaml to add a LocalStack service:
services:
  # ... existing config
  localstack:
    image: localstack/localstack
    healthcheck:
      test:
        - CMD
        - curl
        - -I
        - http://localhost:4566/_localstack/health
      interval: 1s
      timeout: 1s
      retries: 30
Update docker-compose.yaml:
services:
  # ... existing config
  lambda:
    # ... existing config
    depends_on:
      localstack:
        condition: service_healthy
    environment:
      AWS_LAMBDA_FUNCTION_MEMORY_SIZE: 128
      AWS_LAMBDA_FUNCTION_TIMEOUT: 3
      AWS_LAMBDA_LOG_FORMAT: JSON
      AWS_ENDPOINT_URL: http://localstack:4566
      AWS_SECRET_ACCESS_KEY: test
      AWS_ACCESS_KEY_ID: test
      AWS_REGION: us-east-1
  # ... existing config
Warning
Even though we aren't connecting to a real AWS account, the environment variables are still needed; the AWS SDK requires a region and credentials for request signing, even if they are dummy values.
Run this command to start an interactive shell:
docker compose run -it --rm --no-deps --entrypoint /bin/sh -v ./nodejs:/var/task lambda
Tip
The --no-deps flag makes sure we are only running the lambda container, ignoring any others.
Install the SDK:
npm install @aws-sdk/client-s3
Exit the container:
exit
Next, update nodejs/src/index.ts with the S3 client logic:
import { Handler } from "aws-lambda";
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({
  endpoint: process.env.AWS_ENDPOINT_URL, // Points to LocalStack
  forcePathStyle: true, // Required for local mocking
  region: process.env.AWS_REGION,
});

export const handler: Handler = async (event, context) => {
  console.log("Hello world!");
  console.log({ event, context });

  try {
    const command = new ListBucketsCommand({});
    const response = await client.send(command);
    console.log("S3 Buckets:", response.Buckets);

    return {
      statusCode: 200,
      body: JSON.stringify(response.Buckets || []),
    };
  } catch (error) {
    console.error(error);

    return {
      statusCode: 500,
      body: "Error connecting to S3",
    };
  }
};
Run the following command:
docker compose up --abort-on-container-exit --build
Note
During this build you'll notice both the npm ci and npm run build layers are rebuilt.
Also, as we haven't actually created any buckets, receiving an empty array is the correct response.
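Tip
If you'd like to see a non-empty response, you can create a bucket first. One way is via the awslocal CLI bundled inside the LocalStack image; this is a sketch, and the bucket name demo-bucket is just an example:
# Start LocalStack on its own in the background
docker compose up -d localstack
# Create a bucket inside the running LocalStack container
docker compose exec localstack awslocal s3 mb s3://demo-bucket
# Re-run the Lambda; the response should now include demo-bucket
docker compose up --abort-on-container-exit --build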
Goal: Demonstrate the versatility of Docker by swapping to Python.
Tip
One of the biggest advantages of developing Lambdas with Docker is that the infrastructure pattern remains exactly the same, regardless of the language you use.
Create python/Dockerfile with the following content:
FROM public.ecr.aws/lambda/python:3.14
COPY ./ ${LAMBDA_TASK_ROOT}
CMD [ "app.handler" ]Create the handler file at python/app.py:
def handler(event, context):
    return "Hello World!"
Update the lambda service in docker-compose.yaml to point to the Python folder:
services:
  lambda:
    build: ./python
Note
Notice how we haven't actually changed anything else within the Lambda setup.
Despite using a different runtime, the way in which the Lambda is executed/invoked is exactly the same.
Run the following command:
docker compose up --abort-on-container-exit --build
Note
You will see the build process switch to pulling the Python base image, but the curl command and event injection work exactly the same way.
Goal: Remove containers and reclaim disk space.
Since we are done with the workshop, let's remove the resources we created.
Run the following command:
docker compose ps -a
Note
Even though they're not running, we still have these containers sitting there doing nothing.
Run the following command:
docker compose images
Note
We also have these images taking up disk space on our machine.
Run the following command:
docker compose down --rmi all
Note
This stops all services, removes the containers/networks, and deletes all images used by this project (including cURL, LocalStack, and the custom image we built).
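Tip
If you want an overall picture of how much disk space Docker is using (images, containers, volumes, and build cache), there's a built-in summary command:
# Summarises Docker disk usage by resource type
docker system df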
Tip
You may have some dangling images from the changes we've made throughout this workshop; run the following to clean them up:
docker image prune
Warning
If you followed the prerequisites and ran docker load, this command will not actually remove all images; the lambda/nodejs and lambda/python images still exist.
To remove these you'll need to run the following:
docker rmi public.ecr.aws/lambda/nodejs:24
docker rmi public.ecr.aws/lambda/python:3.14
You have built a clean, modular, serverless development environment.