This directory contains Kubernetes manifests organized using Kustomize for the Spring Boot application deployment across different environments.
```
k8s/
├── base/                          # Base manifests (common across environments)
│   ├── kustomization.yaml         # Base kustomization
│   ├── namespace.yaml             # Namespace definition
│   ├── serviceaccount.yaml        # Service account
│   ├── configmap.yaml             # Application configuration
│   ├── deployment.yaml            # Application deployment
│   ├── service.yaml               # Service definition
│   ├── hpa.yaml                   # Horizontal Pod Autoscaler
│   ├── pdb.yaml                   # Pod Disruption Budget
│   ├── ingress.yaml               # Ingress configuration
│   ├── external-secrets.yaml      # External Secrets Operator config
│   ├── rbac.yaml                  # RBAC configuration
│   └── network-policy.yaml        # Network policies
├── database/
│   └── base/                      # CloudNativePG database manifests
│       ├── kustomization.yaml     # Database kustomization
│       ├── cluster.yaml           # CloudNativePG Cluster CRD
│       ├── scheduled-backup.yaml  # CloudNativePG ScheduledBackup CRD
│       ├── object-store.yaml      # CloudNativePG ObjectStore for S3 backups
│       ├── service.yaml           # Service documentation (auto-created by operator)
│       └── network-policy.yaml    # Database network policy
└── overlays/                      # Environment-specific overlays
    ├── pr-template/               # PR environment
    │   └── kustomization.yaml
    ├── staging/                   # Staging environment
    │   └── kustomization.yaml
    └── production/                # Production environment
        ├── kustomization.yaml
        └── postgres-replica.yaml  # Production read replicas
```
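The base kustomization simply aggregates the manifests listed above. A minimal sketch of what `base/kustomization.yaml` might contain (resource names are taken from the tree; the label is illustrative):

```yaml
# base/kustomization.yaml -- illustrative sketch, not the actual file
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - namespace.yaml
  - serviceaccount.yaml
  - configmap.yaml
  - deployment.yaml
  - service.yaml
  - hpa.yaml
  - pdb.yaml
  - ingress.yaml
  - external-secrets.yaml
  - rbac.yaml
  - network-policy.yaml

labels:
  - pairs:
      app.kubernetes.io/name: spring-app
```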
- CloudNativePG Operator installed in the cluster
- External Secrets Operator (ESO) installed in the cluster
- HashiCorp Vault configured and accessible
- Prometheus Operator for ServiceMonitor resources
- cert-manager for TLS certificate management
- Ingress Controller (nginx) configured
- S3-compatible storage (MinIO or Ceph) for database backups
```bash
# Install CloudNativePG operator
kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.24/releases/cnpg-1.24.0.yaml

# Verify installation
kubectl get pods -n cnpg-system
```

Ensure you have a MinIO or Ceph S3 endpoint available and create the necessary secrets:
```bash
# Create S3 credentials secret
kubectl create secret generic s3local-eu-central \
  --from-literal=ACCESS_KEY_ID=your-access-key \
  --from-literal=ACCESS_SECRET_KEY=your-secret-key \
  -n spring-app-production
```

Install the External Secrets Operator:

```bash
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets \
  -n external-secrets-system --create-namespace
```

Create a Kubernetes service account for Vault authentication:

```bash
kubectl create serviceaccount vault-auth -n spring-app
```

Configure the Vault Kubernetes auth method:
```bash
# Enable Kubernetes auth in Vault
vault auth enable kubernetes

# Configure Kubernetes auth
vault write auth/kubernetes/config \
  token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
  kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
  kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

# Create Vault policy for spring-app
vault policy write spring-app-policy - <<EOF
path "secret/data/spring-app/*" {
  capabilities = ["read"]
}
EOF

# Create Vault role
vault write auth/kubernetes/role/spring-app \
  bound_service_account_names=spring-app \
  bound_service_account_namespaces=spring-app-dev,spring-app-staging,spring-app-production \
  policies=spring-app-policy \
  ttl=24h
```
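With the Vault role in place, the in-cluster SecretStore and an ExternalSecret can authenticate against it. A hedged sketch of the shape these resources take (the names `vault-secret-store` and `app-secrets` match the troubleshooting commands later in this document; the server URL and key mapping are placeholders to adjust):

```yaml
# Illustrative only -- adjust server URL, paths, and key mappings to your setup
apiVersion: external-secrets.io/v1beta1
kind: SecretStore
metadata:
  name: vault-secret-store
  namespace: spring-app-dev
spec:
  provider:
    vault:
      server: https://vault.domain.local
      path: secret
      version: v2
      auth:
        kubernetes:
          mountPath: kubernetes
          role: spring-app
          serviceAccountRef:
            name: spring-app
---
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: app-secrets
  namespace: spring-app-dev
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-secret-store
    kind: SecretStore
  target:
    name: app-secrets
  data:
    - secretKey: DB_PASSWORD        # key in the resulting Kubernetes Secret
      remoteRef:
        key: spring-app/database    # Vault KV path populated below
        property: password
```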
```bash
# Application secrets
vault kv put secret/spring-app/database \
  username=appuser \
  password=secure-password

vault kv put secret/spring-app/api \
  key=your-api-key

vault kv put secret/spring-app/auth \
  jwt_secret=your-jwt-secret

# PostgreSQL secrets
vault kv put secret/spring-app/postgres \
  username=postgres \
  password=postgres-password \
  database=appdb
```

```bash
# Deploy to development
kubectl apply -k overlays/development/

# Verify deployment
kubectl get all -n spring-app-dev
kubectl get externalsecrets -n spring-app-dev
```

```bash
# Deploy to staging
kubectl apply -k overlays/staging/

# Verify deployment
kubectl get all -n spring-app-staging
kubectl get externalsecrets -n spring-app-staging
```

```bash
# Deploy to production
kubectl apply -k overlays/production/

# Verify deployment
kubectl get all -n spring-app-production
kubectl get externalsecrets -n spring-app-production
```

| Component | PR | Staging | Production |
|---|---|---|---|
| Replicas | 1 | 2 | 5 |
| Memory Request | 256Mi | 512Mi | 1Gi |
| Memory Limit | 512Mi | 1Gi | 2Gi |
| CPU Request | 250m | 250m | 500m |
| CPU Limit | 500m | 500m | 1000m |
| HPA Min/Max | 1/3 | 2/5 | 5/20 |
| PDB Min Available | 1 | 1 | 3 |
| Database Storage | 50Gi | 50Gi | 100Gi |
| Read Replicas | No | No | Yes (2) |
| Log Level | DEBUG | INFO | WARN |
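As an illustration of how the table above maps onto an overlay, a production kustomization might patch replicas and resources roughly like this (a sketch under assumed names, not the actual overlay contents):

```yaml
# overlays/production/kustomization.yaml -- illustrative sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: spring-app-production
resources:
  - ../../base

patches:
  - target:
      kind: Deployment
      name: spring-app
    patch: |-
      - op: replace
        path: /spec/replicas
        value: 5
      - op: replace
        path: /spec/template/spec/containers/0/resources
        value:
          requests: { memory: 1Gi, cpu: 500m }
          limits: { memory: 2Gi, cpu: 1000m }
```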
The manifests include:
- ServiceMonitor for Prometheus metrics scraping
- PrometheusRule for application-specific alerts
- Network Policies allowing monitoring traffic
- Grafana Dashboard ConfigMap
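A minimal ServiceMonitor of the kind described above might look like the following (the port name, label selector, and actuator path are assumptions about the base Service and the Spring Boot setup):

```yaml
# Illustrative sketch -- selector and port must match the actual Service
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: spring-app
spec:
  selector:
    matchLabels:
      app: spring-app
  endpoints:
    - port: http
      path: /actuator/prometheus   # assumes Spring Boot Actuator with Micrometer
      interval: 30s
```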
- Pod Security Standards enforcement
- Network Policies for micro-segmentation
- RBAC with least privilege access
- External Secrets for secure secret management
- Security Contexts with non-root users
- Read-only root filesystem where possible
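In the deployment, these hardening measures typically translate into security contexts along these lines (a pod-spec fragment with illustrative values, not the exact manifest):

```yaml
# Fragment of a pod template spec -- values are illustrative
securityContext:                    # pod-level
  runAsNonRoot: true
  runAsUser: 1000
  seccompProfile:
    type: RuntimeDefault
containers:
  - name: spring-app
    securityContext:                # container-level
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]
```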
- Automated Failover: Continuous monitoring with automatic replica promotion within seconds
- Simplified Backups & PITR: Declarative backup configuration with WAL archiving to S3-compatible storage
- Managed Read Replicas: Scaling read capacity by changing the `instances` number in the manifest
- Zero-Downtime Upgrades: Rolling updates for PostgreSQL minor versions
- Integrated Monitoring: Automatic PodMonitor creation for Prometheus metrics
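These capabilities are driven by the `Cluster` resource in `database/base/cluster.yaml`; a hedged sketch of its general shape (instance count, storage size, and settings here are illustrative, not the repository's actual values):

```yaml
# Illustrative Cluster sketch -- see database/base/cluster.yaml for the real spec
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: postgres-app-cluster
spec:
  instances: 3            # 1 primary + 2 replicas; raise to scale read capacity
  storage:
    size: 100Gi
  monitoring:
    enablePodMonitor: true  # automatic PodMonitor for Prometheus
```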
- Daily automated backups via CloudNativePG ScheduledBackup CRD
- 7-day retention policy with automatic cleanup
- S3-compatible object storage (MinIO/Ceph) for backup storage
- Point-in-time recovery with WAL archiving
- Cross-region replication for disaster recovery
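The daily backup schedule can be sketched as a ScheduledBackup like the following (CloudNativePG uses a six-field cron expression with a leading seconds field; retention is configured separately on the cluster's backup settings, e.g. a `"7d"` retention policy):

```yaml
# Illustrative sketch -- see database/base/scheduled-backup.yaml for the real spec
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: postgres-app-daily
spec:
  schedule: "0 0 2 * * *"   # every day at 02:00 (seconds minutes hours ...)
  cluster:
    name: postgres-app-cluster
```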
CloudNativePG automatically creates the following services:
- `postgres-app-cluster-rw` - Read-write service for the primary instance
- `postgres-app-cluster-ro` - Read-only service for replica instances
- `postgres-app-cluster-r` - Read service for all instances

Application connection strings:

- Write operations: `postgres-app-cluster-rw:5432`
- Read-only operations: `postgres-app-cluster-ro:5432`
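For instance, a Spring Boot configuration could point its primary datasource at the read-write service and a read-only pool at the replica service (the `app.read-datasource` property is a hypothetical custom key, shown only as a sketch; `appdb` matches the database name stored in Vault above):

```yaml
# application.yaml fragment (illustrative)
spring:
  datasource:
    url: jdbc:postgresql://postgres-app-cluster-rw:5432/appdb
app:
  read-datasource:    # hypothetical custom property for a read-only pool
    url: jdbc:postgresql://postgres-app-cluster-ro:5432/appdb
```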
```bash
# Check SecretStore status
kubectl get secretstore -n spring-app-dev
kubectl describe secretstore vault-secret-store -n spring-app-dev

# Check ExternalSecret status
kubectl get externalsecrets -n spring-app-dev
kubectl describe externalsecret app-secrets -n spring-app-dev

# Check if secrets are created
kubectl get secrets -n spring-app-dev
```

```bash
# Check Vault connectivity from a pod
kubectl run vault-test --rm -it --image=vault:latest -- sh
vault status -address=https://vault.domain.local
```

```bash
# Check application logs
kubectl logs -f deployment/spring-app -n spring-app-dev

# Check all pods in the namespace
kubectl logs -f -l app=spring-app -n spring-app-dev
```

```bash
# Connect to PostgreSQL primary
kubectl exec -it postgres-app-cluster-1 -n spring-app-dev -- psql -U postgres -d app

# Check cluster status
kubectl get cluster postgres-app-cluster -n spring-app-dev

# Check backup status
kubectl get scheduledbackup -n spring-app-dev

# Check database logs
kubectl logs -f postgres-app-cluster-1 -n spring-app-dev
```

For GitOps deployment with ArgoCD, create Application manifests:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: spring-app-dev
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/ariaskeneth/spring-app-config
    targetRevision: HEAD
    path: k8s/overlays/development
  destination:
    server: https://kubernetes.default.svc
    namespace: spring-app-dev
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true
```

To customize for your environment:
- Update image references in `kustomization.yaml` files
- Modify resource limits in overlay patches
- Update Vault server URLs in SecretStore configurations
- Adjust ingress hostnames in overlay patches
- Configure storage classes for your cluster
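Image references, for example, can be overridden per overlay with Kustomize's `images` field (registry, name, and tag here are placeholders):

```yaml
# In an overlay kustomization.yaml (illustrative)
images:
  - name: spring-app                            # name as used in the base Deployment
    newName: registry.example.com/spring-app    # placeholder registry
    newTag: 1.4.2                               # placeholder semantic version
```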
- Always use overlays for environment-specific changes
- Keep base manifests generic and reusable
- Use semantic versioning for image tags
- Monitor External Secrets for sync status
- Regularly rotate secrets in Vault
- Test deployments in development first
- Use resource quotas to prevent resource exhaustion
- Implement proper monitoring and alerting