This document is the runbook for local operators and maintainers. It covers local development, containerized execution, persistence handling, deployment, and recovery.
Use:
- Node.js 24.x
- npm 11.x or newer
- Docker with Compose plugin
These versions match CI and container builds.
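Before going further, it can help to confirm the toolchain is actually installed. The loop below is a convenience sketch, not part of the repo's scripts:

```shell
# Report the locally installed versions of the tools the doc requires
# (Node 24.x, npm 11.x or newer, Docker with the Compose plugin).
checked=0
for tool in node npm docker; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: $("$tool" --version 2>/dev/null | head -n 1)"
  else
    echo "$tool: MISSING"
  fi
  checked=$((checked + 1))
done
echo "checked $checked tools"
```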
Key files:

- `package.json`: scripts and dependency declarations
- `vite.config.js`: frontend dev proxy configuration
- `Dockerfile`: production image build
- `compose.yaml`: local compose run
- `compose.hostinger.yaml`: production compose run
- `.drone.yml`: CI/CD pipeline
- `scripts/drone_deploy.sh`: remote deployment script
Install dependencies:

```
npm ci
```

Download seed OCR files referenced by the curated corpus:

```
npm run download:sources
```

This step writes OCR text into `public/raw/ocr/`.
Start the backend:

```
npm run server
```

Start the frontend dev server in another terminal:

```
npm run dev
```

Expected local endpoints:

- frontend dev server: http://localhost:5173
- backend API server: http://localhost:8080
The Vite dev server proxies `/api` and `/uploads` to http://localhost:8080.
Build:

```
npm run build
```

Run:

```
npm run server
```

Expected endpoint: http://localhost:8080
Local compose run:

```
docker compose up --build -d
```

Local compose characteristics:

- project name: `25-eva-hadox-dev`
- service name: `archive`
- container name: `25-eva-hadox-archive-dev`
- exposed port: `4173`
- mounted persistence: `./runtime-data:/app/runtime-data`

Expected endpoint: http://localhost:4173

Stop:

```
docker compose down
```

Rebuild after source changes:

```
docker compose up --build -d
```
Environment variables:

- `PORT`: default `8080`; used by Express.
- `DATA_DIR`: default `<repo>/runtime-data` outside Docker and `/app/runtime-data` inside the Docker image; controls where runtime JSON and uploads are stored.
- `COMPOSE_PROJECT_NAME`: the local `.env` currently sets `25-eva-hadox-dev`.
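The server's exact resolution logic lives in the code; as a sketch, the documented defaults correspond to the usual fallback pattern (variable names from this doc, fallback style assumed):

```shell
# Apply the documented defaults when the variables are unset.
# (Illustrative only; the Express server resolves these in JavaScript.)
PORT="${PORT:-8080}"
DATA_DIR="${DATA_DIR:-$PWD/runtime-data}"
echo "PORT=$PORT"
echo "DATA_DIR=$DATA_DIR"
```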
`scripts/drone_deploy.sh` accepts:

- `DEPLOY_PATH`: default `$HOME/eva.hadox.org`
- `COMPOSE_FILE`: default `compose.hostinger.yaml`
- `COMPOSE_PROJECT_NAME`: default `eva-hadox-org`
The mutable state is:

- `runtime-data/terms.json`
- `runtime-data/documents.json`
- `runtime-data/uploads/`

Back up all of `runtime-data/`, not just the JSON files.
Backup example:

```
tar -czf eva-runtime-data-$(date +%Y%m%d-%H%M%S).tar.gz runtime-data
```

Restore procedure:

- Stop the application.
- Restore the archived `runtime-data/` directory.
- Start the application again.

Example:

```
docker compose down
rm -rf runtime-data
tar -xzf eva-runtime-data-20260413-120000.tar.gz
docker compose up -d
```

Adjust the exact restore commands to match the archive structure you created.
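Before trusting a backup in production, it is worth rehearsing the full cycle in a scratch directory. The sketch below uses placeholder file contents; only the directory layout comes from this doc:

```shell
# Rehearse the backup/restore cycle in a throwaway directory.
work=$(mktemp -d)
cd "$work"
mkdir -p runtime-data/uploads          # layout from the doc
echo '[]' > runtime-data/terms.json    # placeholder contents
echo '[]' > runtime-data/documents.json

# Backup: archive the whole runtime-data/ tree, not just the JSON files.
tar -czf backup.tar.gz runtime-data

# Restore: remove the live tree and unpack the archive in its place.
rm -rf runtime-data
tar -xzf backup.tar.gz

ls runtime-data    # documents.json  terms.json  uploads
```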
Current validation commands:

```
npm run lint
npm run build
```

There is currently no automated test suite. Lint and build are the main safety checks.
Drone pipeline behavior:

- clone the repository on `push` and `pull_request` for `main` and `deploy/eva-hadox-org`;
- run `npm ci`, `npm run lint`, and `npm run build`;
- on push to `deploy/eva-hadox-org`, deploy via SSH to the VPS;
- sync repository contents with `rsync`, excluding runtime and local artifacts;
- execute `scripts/drone_deploy.sh` on the remote machine.
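As a rough illustration, a pipeline with this behavior could be declared along these lines. The step names, image tags, and secret wiring below are assumptions, not the project's actual `.drone.yml`:

```yaml
kind: pipeline
type: docker
name: ci

steps:
  - name: install-lint-build     # assumed step layout
    image: node:24
    commands:
      - npm ci
      - npm run lint
      - npm run build

  - name: deploy                 # runs only on pushes to the deployment branch
    image: alpine                # hypothetical; the real step needs ssh/rsync tooling
    environment:
      EVA_DEPLOY_SSH_KEY:
        from_secret: EVA_DEPLOY_SSH_KEY
    commands:
      - ./scripts/drone_deploy.sh   # executed on the remote host in the real pipeline
    when:
      branch:
        - deploy/eva-hadox-org
      event:
        - push
```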
Branch behavior:

- `main` is the open collaboration branch;
- `deploy/eva-hadox-org` is the production deployment branch;
- deployment should happen only after a reviewed promotion from `main`.
`EVA_DEPLOY_SSH_KEY`

This secret must contain the private SSH key that allows the Drone runner to authenticate as the `deploy` user on the target VPS.
The current intended production deployment uses:

- source repository in Gitea;
- Drone for CI/CD;
- target VPS at `191.101.233.39`;
- deploy user home path `/home/deploy/eva.hadox.org`;
- production compose file `compose.hostinger.yaml`;
- internal bind `127.0.0.1:4174:8080`;
- deployment branch `deploy/eva-hadox-org`.
Operational implications:
- the application is not directly exposed on all interfaces by the production compose file;
- a reverse proxy such as Nginx, Caddy, or another fronting service is expected to terminate public traffic and forward to
127.0.0.1:4174.
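What that fronting service looks like is outside this repo. As one hypothetical example, a minimal Nginx server block forwarding to the compose bind could look like this (the `server_name` and the absence of TLS are assumptions):

```nginx
# Hypothetical reverse proxy for the production bind (127.0.0.1:4174).
server {
    listen 80;
    server_name eva.hadox.org;   # assumed public hostname

    location / {
        proxy_pass http://127.0.0.1:4174;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```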
If CI is unavailable, a maintainer can deploy manually on the target host.
Typical sequence:

```
cd /home/deploy/eva.hadox.org
chmod +x scripts/drone_deploy.sh
./scripts/drone_deploy.sh
```

What the script does:

- ensures `runtime-data/uploads` exists;
- removes local `dist` and `node_modules` to avoid stale build artifacts;
- validates the compose file;
- rebuilds the `archive` service image with `--pull`;
- restarts the service.
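Restated as shell, those steps amount to roughly the following. This is a sketch of the behavior described here, not the actual contents of `scripts/drone_deploy.sh`:

```shell
#!/usr/bin/env sh
# Sketch of the deploy steps (flags and the `archive` service name from the doc).
set -eu
COMPOSE_FILE="${COMPOSE_FILE:-compose.hostinger.yaml}"

mkdir -p runtime-data/uploads    # ensure the uploads directory exists
rm -rf dist node_modules         # drop stale build artifacts

# The compose steps only make sense on the target host.
if command -v docker >/dev/null 2>&1 && [ -f "$COMPOSE_FILE" ]; then
  docker compose -f "$COMPOSE_FILE" config --quiet         # validate the file
  docker compose -f "$COMPOSE_FILE" build --pull archive   # rebuild with a fresh base image
  docker compose -f "$COMPOSE_FILE" up -d archive          # restart the service
else
  echo "docker or $COMPOSE_FILE not available; compose steps skipped"
fi
```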
Check:
- whether `runtime-data/` exists;
- whether `runtime-data/documents.json` was created;
- whether `src/data/records.json` exists and is valid JSON.
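The last check can be done from the shell without extra dependencies. In the sketch below, `python3` availability is an assumption and the temp file stands in for `src/data/records.json`:

```shell
# Validate a JSON file from the shell; swap the temp file for the real path.
f=$(mktemp)
echo '{"records": []}' > "$f"    # stand-in for src/data/records.json
if python3 -m json.tool "$f" > /dev/null 2>&1; then
  result="valid JSON"
else
  result="invalid JSON"
fi
echo "$f: $result"
rm -f "$f"
```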
Check:
- whether `runtime-data/uploads/` exists;
- whether the bind mount is present in Docker;
- whether file permissions allow the app container to write into the mounted directory.
Check:
- whether the backend is running on `localhost:8080`;
- the Vite proxy configuration in `vite.config.js`;
- whether browser requests use `/api` rather than hardcoded ports.
Check:
- container status with `docker compose ps`;
- the host bind `127.0.0.1:4174`;
- the reverse proxy upstream configuration, including whether it points at the expected port;
- firewall rules.
Check:
- network reachability to remote archive sources;
- correctness of `remoteUrl` values in `src/data/records.json`;
- whether the remote endpoint serves plain text as expected.
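The last point can be spot-checked with curl. The URL below is a placeholder; substitute a real `remoteUrl` value from `src/data/records.json`:

```shell
# Inspect the Content-Type a remote source returns.
# Placeholder URL; use a remoteUrl from src/data/records.json instead.
url="https://example.org/sample.txt"
ctype=$(curl -sI "$url" 2>/dev/null | tr -d '\r' \
  | awk -F': ' 'tolower($1) == "content-type" {print $2; exit}')
echo "content-type: ${ctype:-unknown}"
```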
- There is no authentication, so the application should not be exposed broadly without network-level controls.
- Local JSON persistence has no concurrency control; concurrent writes can lose data.
- Runtime schema changes require manual compatibility handling.
- There are no delete flows, so moderation and cleanup are manual.
- There is no background processing, so larger datasets will eventually affect responsiveness.
- add health checks and basic smoke tests;
- document reverse proxy configuration explicitly;
- automate runtime-data backups;
- add restore drills;
- add observability for HTTP errors and deploy failures.