This runbook provides the day-to-day operational guidance for the theschoonover.net deployment on the Synology RackStation (DSM). It covers how to provision and update the Astro static build, manage the supporting Docker services, and ensure the site remains fast, reliable, and compliant with privacy guardrails.
| Environment | Purpose | Hosting Details | Branch | Core Services | Notes |
|---|---|---|---|---|---|
| Local Dev | Component/content work, QA before PR | Developer workstation, Node.js 20.11.x with pnpm 8.15.1 | feature branches | Astro dev server (`pnpm dev`) | Use `.env.local` for secrets; never commit. |
| Main Preview | Auto-refresh stack for merged changes | RackStation Docker stack exposing port 8079 | `main` | `site_main_preview` container following `main_latest`, shared nginx config | LAN/VPN only; validates integration before promotion. |
| Staging | Optional dry-run for major releases | RackStation Docker stack, staging compose profile | `staging` (optional) | `site` static container, nginx reverse proxy | Enable HTTP basic auth if exposed. |
| Production | Public site | RackStation Docker stack served via DSM reverse proxy at theschoonover.net | `main` | `site` static container, nginx reverse proxy, optional Plausible | Watchtower tracks `release_*` tags. |
| Variable | Description | Scope | Source |
|---|---|---|---|
| `SITE_URL` | Canonical site URL used for OG tags and sitemap | Build & runtime | DSM `.env` or GitHub Actions secret |
| `SMTP_HOST`, `SMTP_PORT`, `SMTP_USER`, `SMTP_PASS`, `SMTP_FROM` | Contact form SMTP relay | API container (if enabled) | DSM Secrets Manager or env file |
| `HCAPTCHA_SITEKEY`, `HCAPTCHA_SECRET` | hCaptcha keys for contact form | Frontend + API | hCaptcha dashboard |
| `PLAUSIBLE_DOMAIN`, `PLAUSIBLE_API_HOST` | Self-hosted Plausible analytics | Plausible container | Docker compose `.env` |
| `SSH_DEPLOY_KEY` | Read-only key for RackStation deploys | GitHub Actions secret | Stored as deploy key |
| `REGISTRY_USERNAME`, `REGISTRY_PASSWORD` | Credentials for docker.theschoonover.net | GitHub Actions secret & RackStation Watchtower env | DSM Credentials Manager |
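For reference, a hypothetical `.env` laid out from the table above; the actual variable grouping lives in the compose files and CI secrets, and every value shown here is a placeholder:

```bash
# Hypothetical .env sketch; all values are placeholders, not production config.
SITE_URL=https://theschoonover.net
PLAUSIBLE_DOMAIN=theschoonover.net
PLAUSIBLE_API_HOST=https://plausible.example.internal
SMTP_HOST=smtp.example.com
SMTP_PORT=587
SMTP_USER=contact-form
SMTP_FROM=noreply@theschoonover.net
# SMTP_PASS, HCAPTCHA_SECRET, and REGISTRY_PASSWORD stay in DSM secrets, never in git.
```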
- Prep Git:
  - Merge changes to `main` via reviewed PR.
  - Tag release for promotion: `git tag -a release-YYYY-MM-DD -m "Release notes"`, then `git push --tags`.
- Trigger CI/CD:
  - The PR Validation workflow (`.github/workflows/pr-validation.yml`) runs on every pull request, performing `pnpm install`, Astro type checking, and a production `pnpm build` without touching container registries.
  - The Main Image Publish workflow (`.github/workflows/main-publish.yml`) runs on pushes to `main`, publishes container tags `{{sha}}`, `<sha[:8]>`, and `main_latest` to `docker.theschoonover.net/theschoonover/site`, and feeds the preview stack.
  - The Release Tag Publish workflow (`.github/workflows/release.yml`) fires when a GitHub release is published, confirms the matching `<sha[:8]>` image exists, and retags it as `release_commit<sha[:8]>`, `release_latest`, and the GitHub release tag for production Watchtower to promote.
- RackStation Deploy (rsync mode):
  - The workflow uses `SSH_DEPLOY_KEY` to `rsync` `dist/` to the DSM Docker bind mount (e.g., `/volume1/docker/site/dist`); a sketch of this step follows the list.
  - The DSM reverse proxy serves the updated static files through the nginx container.
- RackStation Deploy (container mode):
  - No manual build required; CI publishes the images directly to the internal registry.
  - For preview validation, Watchtower tracks `main_latest` and refreshes the `site_main_preview` container bound to port `8079`.
  - Production promotion occurs when a GitHub release is published for the desired commit; Watchtower sees the `release_*` tags within five minutes and restarts the `site` service.
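As a sketch of the rsync-mode step, something like the following runs in CI (the key path and `deploy@rackstation` SSH alias are illustrative; the bind-mount path comes from this runbook):

```bash
# Hedged sketch of the rsync-mode deploy; adapt key path and host alias to the
# actual workflow definition.
rsync -az --delete -e "ssh -i ~/.ssh/deploy_key" \
  apps/site/dist/ deploy@rackstation:/volume1/docker/site/dist/
```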
Watchtower runs alongside the production stack to poll the RackStation registry for new container tags and restart services automatically. The compose file already defines the service—follow the steps below to provision credentials and verify the automation.
- Generate a Docker config with registry credentials
  - SSH into the RackStation and change into the `infra/` directory that holds `docker-compose.yml`.
  - Create a credentials folder next to the compose file: `mkdir -p watchtower-config`.
  - Run a targeted login that stores credentials in that folder: `docker login docker.theschoonover.net --username <service-account> --password-stdin --config ./watchtower-config`.
  - Confirm `watchtower-config/config.json` exists and contains an auth block for `docker.theschoonover.net`. Watchtower mounts this file read-only when the stack starts. (A consolidated transcript of this step follows the list.)
- Launch the stack with Watchtower enabled
  - From the `infra/` directory run `docker compose up -d`.
  - Confirm both the `site` and `watchtower` services report `Up` in the DSM Docker UI or via `docker compose ps`.
- Verify registry access
  - Check logs with `docker compose logs -f watchtower`. The watcher should log `Found new site image` after you push a new tag from CI.
  - If you see auth errors, rerun `./scripts/test-registry-login.sh` locally to validate the credentials and ensure the RackStation trusts the registry TLS certificate.
- Test an auto-update
  - Push a throwaway tag (e.g., `docker push docker.theschoonover.net/theschoonover/site:watchtower-smoke`).
  - Update the `site` service temporarily to point at that tag (`docker compose up -d site`). Within the poll interval, Watchtower should pull the canonical `release_latest` tag and restart the container; confirm by checking `docker compose logs site` for the restart timestamp.
  - Once verified, delete the temporary tag from the registry or let Watchtower clean up old images automatically.
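For convenience, here is the credential setup from the first step as a single hedged transcript (the `infra/` path and `<service-account>` name are placeholders; `REGISTRY_PASSWORD` is assumed to be exported in the shell):

```bash
cd /volume1/docker/infra           # hypothetical location of infra/docker-compose.yml
mkdir -p watchtower-config
echo "$REGISTRY_PASSWORD" | docker login docker.theschoonover.net \
  --username <service-account> --password-stdin --config ./watchtower-config
cat watchtower-config/config.json  # expect an "auths" entry for docker.theschoonover.net
```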
Watchtower uses scoped labels on the `site` service, so additional containers can opt in later by applying the same `com.centurylinklabs.watchtower.enable=true` label with a shared scope name.
Why not an environment variable? Watchtower authenticates to private registries via the standard Docker `config.json`. As long as the mounted config includes an entry for `docker.theschoonover.net`, no extra environment flag is required (and none exists) to list the registry.
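To illustrate the mount, here is a rough `docker run` equivalent of the compose-defined service; the compose file in `infra/` remains authoritative:

```bash
# Illustrative only: Watchtower reads registry auth from /config.json, mounted
# read-only from the folder created above. WATCHTOWER_LABEL_ENABLE restricts
# updates to containers carrying the enable label.
docker run -d --name watchtower \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD/watchtower-config/config.json:/config.json:ro" \
  -e WATCHTOWER_LABEL_ENABLE=true \
  containrrr/watchtower
```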
If you prefer to serve the Astro build without Docker, DSM Web Station can expose the exported `dist/` directory as a traditional virtual host. This is useful for quickly validating a manual upload like the one performed on 2025-10-24.
- Enable Web Station and create a site root
  - DSM Package Center → install/enable Web Station and Apache HTTP Server (static hosting only needs the core package).
  - DSM Control Panel → Shared Folder → create (or reuse) a folder such as `/volume1/web/theschoonover-net`.
  - Grant write access to the deploy user (the account tied to `SSH_DEPLOY_KEY` if CI/CD will push the files).
- Upload the Astro build
  - Run `pnpm build` locally or in CI; upload or `rsync` the contents of `apps/site/dist/` into the Web Station folder created above (see the sketch after this list). Preserve the folder structure so `/index.html` and `/assets/` land at the root.
- Create a Web Station virtual host
  - Web Station → Virtual Host → Create → select Name-based.
  - Set Hostname to the production domain (e.g., `theschoonover.net`) or a staging subdomain, choose the shared folder path, and pick the HTTP back-end server type "Static".
  - If you already rely on DSM's reverse proxy, keep it in place and point the backend to the `http://127.0.0.1:<auto-port>` shown in the virtual host summary.
- Wire up TLS and redirects
- DSM Control Panel → Security → Certificate → assign the Let’s Encrypt cert to the new virtual host.
- Application Portal → Reverse Proxy (or Web Station HSTS settings) → force HTTPS and enable HSTS so the static site keeps the same security posture as the containerized nginx stack.
- Verify the deployment
  - Visit the hostname directly (e.g., `https://theschoonover.net`) via the internet and confirm the build hash matches the uploaded bundle.
  - From DSM, open Web Station → Virtual Host → click View to ensure the portal preview loads without directory listing warnings.
  - If assets 404, re-run the upload with `rsync --delete` to clear stale files and confirm folder permissions inherit for the `http` user.
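A hedged upload example for the step above (the `deploy@rackstation` SSH alias is hypothetical; the paths follow this section):

```bash
# Build locally, then mirror the bundle into the Web Station share.
# --delete clears stale assets so the uploaded tree matches dist/ exactly.
pnpm build
rsync -az --delete apps/site/dist/ \
  deploy@rackstation:/volume1/web/theschoonover-net/
```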
To revert to the Docker-based workflow later, simply point the DSM reverse proxy back to the container and stop the Web Station virtual host—no additional cleanup is required.
Before storing or rotating the `REGISTRY_USERNAME` / `REGISTRY_PASSWORD` secrets in GitHub, validate them against the internal registry from a workstation with Docker installed:
- Copy `.env.registry.example` to `.env.registry` (ignored by git) at the repo root and update the password:

  ```bash
  cp .env.registry.example .env.registry
  $EDITOR .env.registry
  ```

  Replace `REGISTRY_PASSWORD` with the credential from DSM's Credential Manager.
- Run the helper script to mirror the GitHub Actions `docker/login-action` step:

  ```bash
  ./scripts/test-registry-login.sh
  ```

- Confirm `Login Succeeded`. If you see TLS or auth failures, verify the RackStation certificate trust chain and that the account is not locked out.
The script simply shells out to `docker login` using `--password-stdin`, so it is safe to run on macOS, Linux, or Windows Subsystem for Linux with Docker Desktop.
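A hypothetical sketch of the script's core behavior, assuming `.env.registry` exports the two variables named above (the script in `scripts/` is authoritative):

```bash
# Sketch of scripts/test-registry-login.sh: load the env file and attempt a
# non-interactive login, mirroring the docker/login-action step in CI.
set -euo pipefail
source .env.registry
printf '%s' "$REGISTRY_PASSWORD" | docker login docker.theschoonover.net \
  --username "$REGISTRY_USERNAME" --password-stdin
```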
- Post-Deploy Verification:
  - Hit `https://theschoonover.net/health` and confirm `200` with the correct version hash.
  - Run the Lighthouse smoke test (CI publishes the report) and a manual keyboard sweep.
  - Check DSM reverse proxy logs for errors; confirm the SSL cert is valid.
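A quick manual probe for the first check (the response shape beyond the status code is assumed, not specified here):

```bash
# Expect "200" on stdout; -f makes curl exit non-zero on HTTP errors.
curl -fsS -o /dev/null -w '%{http_code}\n' https://theschoonover.net/health
# Inspect the payload to compare the version hash against the deployed build.
curl -fsS https://theschoonover.net/health
```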
- DSM Control Panel → Application Portal → Reverse Proxy.
- Create entry `theschoonover.net` → `http://site:80`.
- Enable HSTS, HTTP → HTTPS redirect, and WebSocket support (for future features).
- Attach Let’s Encrypt certificate; configure auto-renew.
- Forwarded headers: enable `X-Forwarded-For` and `X-Forwarded-Proto`.
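Spot-check the redirect and HSTS settings from any external host:

```bash
# Plain HTTP should answer with a redirect to HTTPS.
curl -sI http://theschoonover.net | head -n 3
# The HTTPS response should carry the HSTS header enabled above.
curl -sI https://theschoonover.net | grep -i strict-transport-security
```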
If public traffic lands on a Raspberry Pi running nginx before reaching the RackStation, configure it as an SSL-terminating reverse proxy that forwards requests to DSM:
- Copy `infra/nginx/rpi-edge.conf` to the Pi (e.g., `/etc/nginx/conf.d/theschoonover.net.conf`).
- Replace the placeholder `192.168.1.50` with the RackStation's LAN IP or DNS name.
- Update the `ssl_certificate` paths to match your Let's Encrypt (or other) certificate locations on the Pi.
- Reload nginx: `sudo systemctl reload nginx`.
The config enforces the same security headers as the internal RackStation nginx, preserves client IPs via the `X-Forwarded-*` headers, and proxies `/health.json` so uptime monitors continue to work end-to-end.
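After editing the config on the Pi, validate before reloading and confirm the proxied health endpoint end-to-end:

```bash
# nginx -t catches syntax errors before they take down the edge proxy.
sudo nginx -t
sudo systemctl reload nginx
curl -fsS https://theschoonover.net/health.json
```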
If you prefer Caddy for its automatic certificate management and simpler syntax, mirror the same forwarding behavior with the provided Caddyfile:
- Copy `infra/caddy/rpi-edge.caddyfile` to the Pi (e.g., `/etc/caddy/Caddyfile`).
- Replace `192.168.1.50` with the RackStation's LAN IP or DNS name (or internal DNS record).
- If you want to reuse existing certificates instead of Caddy's built-in ACME, uncomment the `tls` stanza values and point them at your certificate and key paths.
- Reload Caddy: `sudo systemctl reload caddy`.
The Caddyfile maintains the same security headers, preserves client IP details, forwards `/health.json` explicitly for uptime checks, and enables HTTP/2 + TLS for the hop between the Pi and DSM using `tls_server_name`.
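The same validation flow for the Caddy variant:

```bash
# Validate the Caddyfile before reloading, then confirm the health endpoint.
caddy validate --config /etc/caddy/Caddyfile
sudo systemctl reload caddy
curl -fsS https://theschoonover.net/health.json
```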
```bash
# Start or update stack
docker compose pull && docker compose up -d

# View logs for the site container
docker compose logs -f site

# Restart specific service
docker compose restart site
```

- Schedule DSM Hyper Backup to copy `/volume1/docker/site` (static assets), `apps/site/public/downloads`, and the Plausible Postgres volume to an external NAS or cloud bucket.
- Retain 30 daily versions + 12 monthly snapshots.
- Restore desired snapshot via Hyper Backup to staging directory.
- Validate the restored build (`dist/`) locally using `pnpm preview`.
- Swap the restored directory into the production bind mount and reload nginx (`docker compose restart nginx`); a sketch follows this list.
- Announce the restore in the ops channel and document the incident in the `docs/OPS.md` log section.
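A hedged sketch of the swap (the `/volume1/restore/dist` staging path is hypothetical; the bind mount and nginx restart come from this runbook):

```bash
# Run pnpm preview against the restored bundle locally before swapping.
# Then, on the RackStation, mirror the validated bundle into the bind mount:
rsync -a --delete /volume1/restore/dist/ /volume1/docker/site/dist/
docker compose restart nginx
```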
- Stop the Plausible stack: `docker compose stop plausible events-db clickhouse`.
- Restore Postgres and ClickHouse volumes from backup.
- Start the stack: `docker compose up -d plausible events-db clickhouse`.
- Run the Plausible integrity check: `docker compose exec plausible ./bin/plausible db check`.
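The same sequence as a single transcript (the volume restore itself happens via Hyper Backup between the stop and start):

```bash
docker compose stop plausible events-db clickhouse
# ...restore the Postgres and ClickHouse volumes from Hyper Backup here...
docker compose up -d plausible events-db clickhouse
docker compose exec plausible ./bin/plausible db check
```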
- Maintain the previous build artifact (`dist/prev-<timestamp>`) on the RackStation.
- To roll back, symlink the nginx root to the previous artifact and restart the container:

```bash
ln -sfn /volume1/docker/site/dist_prev /volume1/docker/site/dist_current
docker compose restart nginx
```
- Record rollback details (trigger, start time, completion) in incident log.
- Open follow-up issue to address root cause before redeploying latest build.
- Severity Levels:
- Sev1: Full outage or data leak.
- Sev2: Major feature broken (contact form, downloads) or security misconfiguration.
- Sev3: Minor regression (styling bug, partial analytics outage).
- Contacts:
- Primary On-Call (Infra / Agent E): infra@theschoonover.net, Signal +1-555-0100.
- Secondary (QA / Agent F): qa@theschoonover.net, Signal +1-555-0101.
- Site Owner (John Schoonover): john@theschoonover.net.
- Runbook:
- Acknowledge alert in less than 15 minutes (Uptime Kuma or manual report).
- Assess impact; log incident in DSM Notes or shared incident tracker.
- Mitigate using rollback or hotfix; capture timeline and metrics.
- Postmortem within 48 hours with action items tracked in GitHub.
- Uptime Kuma: Poll `/` and `/health` every 1 minute, notify via Signal.
- Plausible Dashboards: Weekly review of performance and engagement metrics documented in `docs/analytics.md`.
- SEO Health: Monthly crawl summary logged here with Search Console findings.
- Log Review: Weekly scan DSM nginx and application logs for anomalies.
- PR reviewed and CI green (build, lint, tests, accessibility, Lighthouse).
- README and docs updated for new workflows.
- Release notes drafted when user-facing change occurs.
- Post-deploy validation captured in ops journal.
Maintain a chronological log of significant operational events below (latest at top).
| Date | Event | Owner | Notes |
|---|---|---|---|
| 2025-10-25 | Watchtower smoke test | John Schoonover | Validated Watchtower auto-pull/restart against docker.theschoonover.net after pushing a fresh site tag; CI/CD path confirmed end-to-end. |
| 2025-10-24 | Manual static deploy | John Schoonover | Built apps/site with pnpm build and uploaded dist/ bundle to DSM Web Station share for production validation. |
| YYYY-MM-DD | Placeholder | Name | Details |