BayajidAlam/node-fleet

node-fleet K3s Autoscaler

Smart autoscaling for K3s on AWS EC2, built to cut idle cost and prevent flash-sale outages.

System Architecture

Quick Scan

  • Problem: a static fleet of 5 workers ran 24/7, wasting capacity off-peak, and scaling was slow and manual.
  • Goal: automated scaling in under 3 minutes, safer scale-down, and lower monthly cost.
  • Result: a production-oriented autoscaler combining Prometheus metrics, a Lambda decision loop, a DynamoDB lock, and EC2 lifecycle automation.
  • Estimated savings: about 54-58%, based on the project's current cost analysis.
| Setup | Monthly Cost | Savings |
| --- | --- | --- |
| Static worker fleet | $180 | 0% |
| node-fleet autoscaling | $70-$83 | 54-58% |

1) Project Overview

2) Architecture Explanation

3) Technology Stack

  • IaC: Pulumi (TypeScript) for strongly typed infrastructure workflows
  • Autoscaler runtime: AWS Lambda (Python 3.11)
  • State & lock: DynamoDB conditional-write lock pattern
  • Metrics: Prometheus + kube-state-metrics + Grafana
  • Compute: K3s on EC2 (On-Demand + Spot mix)
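
The DynamoDB conditional-write lock in the stack above can be sketched as: the Lambda proceeds only if no unexpired lock item exists, so two concurrent invocations can never both scale. A minimal sketch, assuming a table keyed by `LockID` and an injected boto3-style DynamoDB client (the table layout and names are illustrative, not the project's actual schema):

```python
import time

def acquire_lock(client, table, lock_id, owner, ttl_seconds=120):
    """Take the scaling lock via a DynamoDB conditional write.

    The put succeeds only when no lock item exists, or the previous
    holder's lock has expired; otherwise DynamoDB rejects the write
    and we report failure instead of scaling concurrently.
    """
    now = int(time.time())
    try:
        client.put_item(
            TableName=table,
            Item={
                "LockID": {"S": lock_id},
                "Owner": {"S": owner},
                "ExpiresAt": {"N": str(now + ttl_seconds)},
            },
            # Conditional write: only succeed if no live lock exists
            ConditionExpression="attribute_not_exists(LockID) OR ExpiresAt < :now",
            ExpressionAttributeValues={":now": {"N": str(now)}},
        )
        return True
    except client.exceptions.ConditionalCheckFailedException:
        return False
```

With a real client (`boto3.client("dynamodb")`), `ConditionExpression` and `ConditionalCheckFailedException` are genuine DynamoDB API features; the TTL-based expiry is one common way to keep a crashed Lambda from holding the lock forever.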

Implementation details:

4) Setup Instructions

Prerequisites:

  • AWS CLI configured
  • Pulumi CLI
  • Node.js 18+
  • Python 3.11+
  • kubectl

Deploy:

```shell
# Infrastructure
cd pulumi
pulumi up --yes

# Full deployment helper
cd ..
./deploy.sh <master-public-ip>
```

Verification:

```shell
kubectl get nodes
bash scripts/verify-autoscaler-requirements.sh
```

Runbook:

5) Lambda Function Logic

Algorithm notes:
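
One plausible shape for the decision step, heavily hedged: the thresholds, node bounds, and signal names below are assumptions for illustration, not the project's actual values. Scale-up reacts eagerly to pending pods or hot CPU; scale-down is deliberately conservative, matching the "safer scale-down" goal in the Quick Scan.

```python
def scaling_decision(pending_pods, avg_cpu, node_count,
                     min_nodes=1, max_nodes=5,
                     cpu_high=0.75, cpu_low=0.30):
    """Return "scale_up", "scale_down", or "hold".

    Scale-up triggers on unschedulable pods or high average CPU;
    scale-down only when the cluster is fully scheduled, cool, and
    above the node floor.
    """
    if node_count < max_nodes and (pending_pods > 0 or avg_cpu >= cpu_high):
        return "scale_up"
    if node_count > min_nodes and pending_pods == 0 and avg_cpu <= cpu_low:
        return "scale_down"
    return "hold"
```

In a real loop the inputs would come from Prometheus queries and the EC2 API, and the decision would only be acted on after acquiring the DynamoDB lock.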

6) Prometheus Configuration
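
The Lambda presumably reads these metrics through Prometheus' HTTP API (`/api/v1/query`), whose instant-query responses wrap each series in a `{"metric": {...}, "value": [timestamp, "<number as string>"]}` pair. A small parser sketch; the response envelope is the real Prometheus API, while how node-fleet actually consumes it is an assumption:

```python
def parse_instant_query(body):
    """Extract (label-dict, float value) pairs from a JSON-decoded
    Prometheus instant-query response."""
    if body.get("status") != "success":
        raise RuntimeError(f"query failed: {body.get('error', 'unknown')}")
    # Each result's "value" is [unix_timestamp, "<number as string>"]
    return [(r["metric"], float(r["value"][1]))
            for r in body["data"]["result"]]
```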

Value screenshots:

  • Cluster Dashboard
  • Scale Up Alarm
  • Scale Down Alarm

7) DynamoDB Schema
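
A lock/state table like the one implied by section 3 can be as small as a single hash key. Every name below is a hypothetical stand-in for illustration, not the project's real schema:

```python
# Hypothetical table definition; with boto3 this would be passed as
#   boto3.client("dynamodb").create_table(**LOCK_TABLE)
LOCK_TABLE = {
    "TableName": "node-fleet-scaler-lock",  # assumed name
    "AttributeDefinitions": [
        {"AttributeName": "LockID", "AttributeType": "S"},
    ],
    "KeySchema": [
        {"AttributeName": "LockID", "KeyType": "HASH"},
    ],
    # On-demand billing: no capacity planning for a tiny lock table
    "BillingMode": "PAY_PER_REQUEST",
}
```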

8) Testing Results

Evidence screenshots:

  • Lambda Infra Tests
  • Scaling Tests
  • System Core Tests

9) Troubleshooting Guide

10) Cost Analysis
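
As a sanity check on the Quick Scan table, percent savings against the $180 static baseline is just 1 − new/old. The $83 end of the range reproduces the table's ~54% lower bound; the $70 end works out slightly above the quoted 58%, which presumably nets out Lambda/DynamoDB/monitoring overhead (that interpretation is an assumption, not something the analysis states):

```python
def pct_savings(baseline, new_cost):
    """Percent saved relative to a baseline monthly cost, one decimal."""
    return round(100 * (1 - new_cost / baseline), 1)

print(pct_savings(180, 83))  # 53.9 -> the table's ~54% lower bound
print(pct_savings(180, 70))  # 61.1 -> raw savings before any overhead
```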

11) Security Checklist

Docs Index

License

MIT

About

An AWS K3s autoscaler that cuts worker costs by 40-56% using Lambda, Prometheus metrics, and intelligent EC2 scaling with Spot instances, Multi-AZ support, and predictive scaling.
