💾 Database Issues

This page covers issues related to PostgreSQL, MongoDB, Redis, MinIO, and MariaDB databases in PR environments.


🐘 PostgreSQL Issues

Database Not Deployed

Symptoms:

  • No PostgreSQL pod exists in the namespace
  • App logs show ECONNREFUSED to PostgreSQL
  • kubectl get pods -n k8s-ee-pr-{number} shows no PostgreSQL pod

Diagnosis:

# Check if database is enabled in Helm values
helm get values app -n k8s-ee-pr-{number} | grep -A5 postgresql

# Check CloudNativePG cluster status
kubectl get clusters.postgresql.cnpg.io -n k8s-ee-pr-{number}

Resolution:

Update your k8s-ee.yaml:

databases:
  postgresql: true  # Simple boolean form
  # OR object form with custom settings:
  # postgresql:
  #   enabled: true
  #   version: "16"
  #   storage: 2Gi

Push the change to trigger a new deployment.


Connection Refused

Symptoms:

  • App logs show ECONNREFUSED to PostgreSQL
  • Database endpoints not ready

Diagnosis:

# Check PostgreSQL cluster
kubectl get clusters.postgresql.cnpg.io -n k8s-ee-pr-{number}

# Check cluster pods
kubectl get pods -n k8s-ee-pr-{number} -l cnpg.io/cluster

# Check service endpoints
kubectl get endpoints -n k8s-ee-pr-{number} | grep postgresql

Common Causes:

Cause              Solution
Cluster not ready  Wait for the cluster to report "Cluster is Ready"
Pod crashed        Check the PostgreSQL pod logs
Service missing    Verify the Helm release deployed

Resolution:

# Check cluster status details
kubectl describe clusters.postgresql.cnpg.io -n k8s-ee-pr-{number}

# View PostgreSQL logs
kubectl logs -n k8s-ee-pr-{number} -l cnpg.io/cluster

# Restart cluster (last resort)
kubectl delete pods -n k8s-ee-pr-{number} -l cnpg.io/cluster

Authentication Failed

Symptoms:

  • password authentication failed in logs
  • App can't connect despite database running

Diagnosis:

# Check secret exists
kubectl get secret -n k8s-ee-pr-{number} | grep postgresql

# Verify secret contents
kubectl get secret k8s-ee-pr-{number}-postgresql-app -n k8s-ee-pr-{number} -o yaml

Resolution:

# Decode and verify password
kubectl get secret k8s-ee-pr-{number}-postgresql-app -n k8s-ee-pr-{number} \
  -o jsonpath='{.data.password}' | base64 -d

# Test connection manually
kubectl run psql --rm -it --image=postgres:16 -n k8s-ee-pr-{number} -- \
  psql "postgresql://app:$(kubectl get secret k8s-ee-pr-{number}-postgresql-app \
  -n k8s-ee-pr-{number} -o jsonpath='{.data.password}' | base64 -d)@k8s-ee-pr-{number}-postgresql-rw:5432/app"

Bootstrap SQL Not Applied

Symptoms:

  • Tables from postInitApplicationSQL don't exist
  • Database schema is empty after deployment
  • Bootstrap SQL changes not reflected in database

Diagnosis:

# Check if table exists
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c '\dt'

# Check initdb pod logs for bootstrap execution
kubectl logs -n k8s-ee-pr-{number} -l job-name --tail=100

Common Causes:

Cause                                            Solution
Cluster existed before the SQL change            Delete the cluster to trigger re-init
Using initSQL instead of postInitApplicationSQL  initSQL runs on the postgres database, not the app database
Dollar-quote syntax error                        Use $func$ instead of $$ for function bodies

⚠️ Important: Bootstrap SQL only runs during initial cluster creation. Modifying postInitApplicationSQL in values.yaml won't affect existing clusters.

Resolution:

# Option 1: Delete the PostgreSQL cluster to trigger re-init (data will be lost!)
kubectl delete cluster -n k8s-ee-pr-{number} k8s-ee-pr-{number}-postgresql

# Re-run Helm to recreate the cluster with bootstrap SQL
helm upgrade app oci://ghcr.io/koder-cat/k8s-ephemeral-environments/charts/k8s-ee-app \
  --namespace k8s-ee-pr-{number} --reuse-values

# Option 2: Manually apply SQL to existing database (keeps data)
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c "
    CREATE TABLE IF NOT EXISTS your_table (...);
    GRANT ALL PRIVILEGES ON your_table TO app;
  "

# Option 3: Close and reopen the PR (easiest)

Permission Denied on Tables

Symptoms:

  • App logs show permission denied for table <table_name>
  • Database operations fail despite table existing
  • CRUD endpoints return 500 errors

Diagnosis:

# Check table ownership
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c '\dt'

# Check current grants
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c '\dp test_records'

Resolution:

# Apply grants manually for immediate fix
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c \
  "GRANT ALL PRIVILEGES ON test_records TO app; \
   GRANT USAGE, SELECT ON SEQUENCE test_records_id_seq TO app;"

💡 Prevention: Always include GRANT statements in your postInitApplicationSQL.


🍃 MongoDB Issues

MongoDB Authorization Errors

Symptoms:

  • App logs show not authorized on admin to execute command
  • Audit service fails to log events
  • /api/audit/events returns 400 or 500 errors
  • Console shows "Cannot read properties of undefined"

Error Message:

{
  "errmsg": "not authorized on admin to execute command { insert: \"audit_events\" ... $db: \"admin\" }",
  "code": 13,
  "codeName": "Unauthorized"
}

Root Cause:

The MongoDB connection string uses /admin for authentication (required by MongoDB), but the app was trying to use the admin database for storing data instead of the app database.

Diagnosis:

# Check MongoDB connection string
kubectl get secret -n k8s-ee-pr-{number} app-mongodb-admin-app \
  -o jsonpath='{.data.connectionString\.standard}' | base64 -d

# Look for /admin in the connection string - that's the auth database, not data database
# mongodb://app:xxx@host:27017/admin?replicaSet=app-mongodb

# Check app logs for auth errors
kubectl logs -n k8s-ee-pr-{number} -l app.kubernetes.io/name=app | grep -i "not authorized"

Resolution:

The audit service should explicitly specify the database name:

const dbName = process.env.MONGODB_DATABASE || 'app';
this.db = this.client.db(dbName);  // NOT client.db() which defaults to 'admin'

If you see this error on older deployments, redeploy the app to pick up the fix.
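For reference, a minimal connection sketch, assuming the operator's connection string is exposed to the app as MONGODB_URL (a hypothetical variable name — adjust to whatever your chart injects):

import { MongoClient } from 'mongodb';

// The operator's connection string authenticates against /admin;
// the data database must be selected explicitly.
const client = new MongoClient(process.env.MONGODB_URL!); // assumed env var name
await client.connect();

const dbName = process.env.MONGODB_DATABASE || 'app';
const db = client.db(dbName);                    // data database, NOT the auth database
const events = db.collection('audit_events');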


MongoDB RBAC Issues

Symptoms:

  • MongoDB StatefulSet fails to create pods
  • Error: serviceaccount "mongodb-database" not found
  • Agent readiness probe fails with Error verifying agent is ready

The MongoDB Community Operator requires:

  1. ServiceAccount - Named mongodb-database
  2. Role/RoleBinding - Agent needs secrets:get and pods:get,patch

Diagnosis:

kubectl get sa -n k8s-ee-pr-{number} | grep mongodb
kubectl get role -n k8s-ee-pr-{number} | grep mongodb
kubectl get rolebinding -n k8s-ee-pr-{number} | grep mongodb

Resolution:

# Create ServiceAccount if missing
kubectl create serviceaccount mongodb-database -n k8s-ee-pr-{number}

# The chart should create Role/RoleBinding automatically
# If missing, check Helm deployment logs

MongoDB Pod Not Starting

Diagnosis:

# Check MongoDB community resource
kubectl get mongodbcommunity -n k8s-ee-pr-{number}

# Check MongoDB operator logs
kubectl logs -n mongodb-operator-system -l app.kubernetes.io/name=mongodb-kubernetes-operator

# Check MongoDB pod events
kubectl describe pod -n k8s-ee-pr-{number} -l app=mongodb

🍅 Redis Issues

Redis Connection Problems

Symptoms:

  • App logs show ECONNREFUSED to Redis
  • Caching not working as expected

Diagnosis:

# Check Redis pod status
kubectl get pods -n k8s-ee-pr-{number} -l app=redis

# Check Redis service
kubectl get svc -n k8s-ee-pr-{number} | grep redis

# Test Redis connection (add -a with the value of REDIS_PASSWORD if auth is enabled)
kubectl run redis-test --rm -it --image=redis:7-alpine -n k8s-ee-pr-{number} -- \
  redis-cli -h app-redis ping

Environment Variables:

Redis provides these environment variables to the app:

Variable        Description
REDIS_HOST      Redis service hostname
REDIS_PORT      Redis port (6379)
REDIS_PASSWORD  Redis authentication password

💡 Note: The app constructs the Redis URL from these variables; there is no REDIS_URL environment variable.
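A minimal sketch of building the connection from these variables, assuming the app uses the node-redis client (swap in your client of choice):

import { createClient } from 'redis';

// Build the URL from the injected variables; there is no REDIS_URL.
const { REDIS_HOST, REDIS_PORT = '6379', REDIS_PASSWORD } = process.env;
const url = `redis://:${REDIS_PASSWORD}@${REDIS_HOST}:${REDIS_PORT}`;

const redis = createClient({ url });
redis.on('error', (err) => console.error('Redis error', err));
await redis.connect();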


🪣 MinIO Issues

MinIO Not Accessible

Symptoms:

  • File uploads failing
  • S3 operations returning errors

Diagnosis:

# Check MinIO pod status
kubectl get pods -n k8s-ee-pr-{number} -l app.kubernetes.io/name=minio

# Check MinIO health
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l app.kubernetes.io/name=minio -o name | head -1) -- curl -sf http://localhost:9000/minio/health/live

# Check MinIO logs
kubectl logs -n k8s-ee-pr-{number} -l app.kubernetes.io/name=minio

Environment Variables:

MinIO provides these environment variables:

Variable          Alias          Description
MINIO_ENDPOINT    S3_ENDPOINT    MinIO service hostname
MINIO_PORT        S3_PORT        MinIO port (9000)
MINIO_ACCESS_KEY  S3_ACCESS_KEY  Access key
MINIO_SECRET_KEY  S3_SECRET_KEY  Secret key
MINIO_BUCKET      S3_BUCKET      Default bucket name
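A minimal client sketch using these variables, assuming the app talks to MinIO through the AWS SDK v3 (the MinIO JavaScript SDK works the same way with its own option names):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

// MinIO speaks the S3 API; path-style addressing avoids bucket-as-subdomain DNS.
const s3 = new S3Client({
  endpoint: `http://${process.env.MINIO_ENDPOINT}:${process.env.MINIO_PORT}`,
  region: 'us-east-1',                 // required by the SDK, ignored by MinIO
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.MINIO_ACCESS_KEY!,
    secretAccessKey: process.env.MINIO_SECRET_KEY!,
  },
});

// Quick smoke test: write a small object to the default bucket.
await s3.send(new PutObjectCommand({
  Bucket: process.env.MINIO_BUCKET!,
  Key: 'healthcheck.txt',
  Body: 'ok',
}));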

🐬 MariaDB Issues

MariaDB Connection Problems

Symptoms:

  • App logs show connection errors to MariaDB
  • DATABASE_TYPE shows mariadb but connection fails

Diagnosis:

# Check MariaDB pod status
kubectl get pods -n k8s-ee-pr-{number} -l app.kubernetes.io/name=mariadb

# Check MariaDB service
kubectl get svc -n k8s-ee-pr-{number} | grep mariadb

# Test MariaDB connection (set MYSQL_ROOT_PASSWORD in your local shell first;
# the variable expands locally before kubectl sends the command)
kubectl run mariadb-test --rm -it --image=mariadb:11 -n k8s-ee-pr-{number} -- \
  mariadb -h app-mariadb -u root -p"$MYSQL_ROOT_PASSWORD" -e "SELECT 1"

Environment Variables:

MariaDB provides these environment variables:

Variable        Description
DATABASE_TYPE   Set to mariadb
MYSQL_URL       Full connection URL
MYSQL_HOST      MariaDB service hostname
MYSQL_PORT      MariaDB port (3306)
MYSQL_DATABASE  Database name
MYSQL_USER      Database user
MYSQL_PASSWORD  Database password

⚠️ Warning: Only enable one SQL database (PostgreSQL OR MariaDB) at a time. MariaDB takes precedence if both are enabled.
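A minimal connection sketch using the variables above, assuming the app uses the mysql2 driver (MYSQL_URL can be passed directly instead if your driver accepts a URI):

import mysql from 'mysql2/promise';

// Build the config from the discrete variables injected by the chart.
const connection = await mysql.createConnection({
  host: process.env.MYSQL_HOST,
  port: Number(process.env.MYSQL_PORT ?? 3306),
  user: process.env.MYSQL_USER,
  password: process.env.MYSQL_PASSWORD,
  database: process.env.MYSQL_DATABASE,
});

const [rows] = await connection.query('SELECT 1 AS ok');
console.log(rows);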


🔄 Migration Issues

Migration Failed at Startup

Symptoms:

  • App crashes immediately with migration error
  • Logs show "Migration failed" or SQL errors
  • Pod in CrashLoopBackOff with migration stack traces

Diagnosis:

# Check app logs for migration errors
kubectl logs -n k8s-ee-pr-{number} -l k8s-ee/project-id={projectId}

# Connect to database directly to check state
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c \
  'SELECT * FROM drizzle.__drizzle_migrations ORDER BY created_at DESC LIMIT 5;'

Common Causes:

Cause                    Explanation
Database not ready       The app tried to migrate before the database was available (see the retry sketch below)
Conflicting migration    A schema change conflicts with the existing database state
Missing drizzle folder   Migrations were not included in the Docker build
Network policy blocking  The app can't reach the PostgreSQL service
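For the "Database not ready" case, the usual fix is to retry the migration at startup instead of failing immediately. A minimal sketch, assuming the app uses drizzle-orm's node-postgres migrator and a DATABASE_URL-style connection string (adjust names to your setup):

import { Pool } from 'pg';
import { drizzle } from 'drizzle-orm/node-postgres';
import { migrate } from 'drizzle-orm/node-postgres/migrator';

// Retry a few times so a slow PostgreSQL startup doesn't put the pod in CrashLoopBackOff.
async function runMigrations(retries = 10, delayMs = 3000) {
  const pool = new Pool({ connectionString: process.env.DATABASE_URL }); // assumed env var
  const db = drizzle(pool);
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      await migrate(db, { migrationsFolder: './drizzle' });
      return;
    } catch (err) {
      console.warn(`Migration attempt ${attempt}/${retries} failed:`, err);
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw new Error('Migration failed after all retries');
}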

Resolution:

# Option 1: For ephemeral environments, close and reopen PR

# Option 2: Check migration status
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c '\dt'

# Option 3: Verify drizzle folder exists in container
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l k8s-ee/project-id={projectId} -o name | head -1) -- ls -la /app/drizzle

Schema Out of Sync

Symptoms:

  • TypeScript errors in IDE for database queries
  • Runtime errors: "column does not exist"
  • Query results missing expected fields

Resolution:

# Compare schema to actual database
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c '\d test_records'

# Generate new migration locally
pnpm db:generate --name=fix_schema

# Commit and push - new migration will run on next deployment
git add drizzle/
git commit -m "fix: add missing column migration"
git push

Seeding Failed

Symptoms:

  • App logs show "Seeding failed" errors
  • No initial data in database after deployment

Diagnosis:

# Check app startup logs
kubectl logs -n k8s-ee-pr-{number} -l k8s-ee/project-id={projectId} | grep -i seed

# Verify table exists before seeding
kubectl exec -n k8s-ee-pr-{number} -it $(kubectl get pods -n k8s-ee-pr-{number} \
  -l cnpg.io/cluster -o name | head -1) -- psql -U postgres -d app -c \
  'SELECT COUNT(*) FROM test_records;'

💡 Note: Seeding is designed to be non-blocking: the app continues even if seeding fails. Check the logs, but a seeding failure shouldn't crash your application.
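A minimal sketch of that non-blocking pattern, assuming a hypothetical seedDatabase() helper in your app:

// Hypothetical helper; your seeding entry point will differ.
import { seedDatabase } from './seed';

async function seedSafely() {
  try {
    await seedDatabase();
    console.log('Seeding complete');
  } catch (err) {
    // Log and continue: a failed seed should not crash the app.
    console.error('Seeding failed (continuing without seed data):', err);
  }
}

await seedSafely();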

