GitHub Dashboard collects GitHub organization data into PostgreSQL and exposes configuration, sync controls, and analytics through a Next.js dashboard.
- Most application code is intentionally produced by AI agents.
- We prioritize tools and frameworks that AI agents can handle with confidence.
- Comprehensive automated tests are treated as essential so that AI-authored code remains trustworthy, and those tests are generated by AI agents as well.
- AI agents contribute to both engineering and design efforts across the project.
- Node 22: Use Node.js 22.x. Do not use Node 24.
  - nvm example: `nvm install 22 && nvm use 22`
  - macOS (Homebrew): `brew update && brew install node@22`
    - Unlink the previous version: `brew unlink node`
    - Link 22: `brew link --overwrite --force node@22`
    - Verify: `node -v` should print v22.x.y
- pnpm: Use pnpm 10+.
  - macOS (Homebrew): `brew install pnpm`
  - Linux (Corepack with Node 22): `corepack enable && corepack prepare pnpm@10 --activate`
  - Windows (Corepack with Node 22): `corepack enable && corepack prepare pnpm@10 --activate`
- Next.js: Use 15 (latest patch) for Node 22 compatibility; do not use 16. Installed via `pnpm install`.
- Docker: Install Docker (Docker Desktop or Docker Engine).
- Biome CLI 2.x (Rust binary) available on your `PATH` — download a release build and place `biome` somewhere executable, or compile it yourself via Cargo following the Biome documentation.
- PostgreSQL 14+ running locally or reachable via connection string
- A GitHub OAuth App configured for your environments (see docs/github-oauth-app.md)
- (Optional) A GitHub personal access token with `read:user` and repository metadata scopes (`GITHUB_TOKEN`) for legacy data collection utilities
- Use a shared pnpm store across repos: `pnpm config set store-dir ~/.pnpm-store --global`
- Install dependencies: `pnpm install`
- (Important) Approve build scripts (required for pnpm v9+): `pnpm approve-builds`
- Install Playwright browsers (one-time per machine): `pnpm exec playwright install --with-deps`
  - Linux (e.g., CI runners): `--with-deps` also installs required system packages so browsers run out of the box.
  - macOS: the flag is effectively a no-op; it only downloads the browser binaries, so leaving it on is harmless.
- Provide environment variables (`pnpm run dev` reads from `.env.local` or the current shell). Copy `.env.example` to `.env.local` and replace the placeholders:

  ```bash
  export GITHUB_TOKEN=<ghp_token>
  export GITHUB_ORG=<github_org>
  export GITHUB_OAUTH_CLIENT_ID=<oauth_client_id>
  export GITHUB_OAUTH_CLIENT_SECRET=<oauth_client_secret>
  export GITHUB_ALLOWED_ORG=<allowed_org_slug>
  export DASHBOARD_ADMIN_IDS=owner_login,ops-team
  export APP_BASE_URL=http://localhost:3000  # production: https://your-domain
  export SESSION_SECRET=$(openssl rand -hex 32)
  export DATABASE_URL=postgres://<user>:<password>@localhost:5432/<database>
  export SYNC_INTERVAL_MINUTES=60
  export TODO_PROJECT_NAME="to-do list"  # optional; see below for details
  ```
- Start the dev server: `pnpm run dev`
- Visit the app (all dashboard routes require GitHub sign-in and organization membership):
  - http://localhost:3000 — landing page with quick links
  - http://localhost:3000/dashboard — data collection controls & analytics
  - http://localhost:3000/github-test — GraphQL connectivity test page
GitHub authentication is mandatory. Authorized members are issued a signed
session cookie; non-members are redirected to /auth/denied with instructions
on granting access under Settings → Applications → Authorized OAuth Apps.
Full OAuth setup instructions live in docs/github-oauth-app.md.
Long-lived sign-in uses access/idle/refresh/max session TTLs, and sensitive actions (for example, organization settings changes and cleanup endpoints) trigger a reauthentication prompt. These defaults can be adjusted in Settings → Organization.
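The TTL interplay can be sketched as a small pure helper (illustrative only: the `SessionTtls` shape, the field names, and the default values below are assumptions, not the app's actual session code):

```typescript
// Illustrative sketch of idle/max session TTL checks. The real app's
// fields and defaults may differ; values here are assumptions.
interface SessionTtls {
  idleSeconds: number; // max allowed gap between requests
  maxSeconds: number;  // absolute lifetime since sign-in
}

function isSessionActive(
  nowMs: number,
  issuedAtMs: number,
  lastSeenMs: number,
  ttls: SessionTtls,
): boolean {
  const idleOk = nowMs - lastSeenMs <= ttls.idleSeconds * 1000;
  const maxOk = nowMs - issuedAtMs <= ttls.maxSeconds * 1000;
  return idleOk && maxOk;
}
```

A session that passes the idle check can still expire once the absolute maximum lifetime is exceeded, which is why both bounds are evaluated.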
Administrators are identified through DASHBOARD_ADMIN_IDS, a comma-separated
list of GitHub logins or node IDs. Admin users can modify organization-wide
settings (org name, sync cadence, excluded repositories/members), while all
authenticated users can adjust their personal timezone and week-start
preferences.
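For illustration, a comma-separated list like `DASHBOARD_ADMIN_IDS` could be parsed and checked along these lines (the helper names `parseAdminIds` and `isAdmin` are hypothetical, not the app's actual functions):

```typescript
// Hypothetical helper: parse DASHBOARD_ADMIN_IDS into a lookup set.
// GitHub logins are case-insensitive, so entries are normalized to
// lowercase; node IDs are matched the same way as a fallback.
function parseAdminIds(raw: string | undefined): Set<string> {
  return new Set(
    (raw ?? "")
      .split(",")
      .map((entry) => entry.trim())
      .filter((entry) => entry.length > 0)
      .map((entry) => entry.toLowerCase()),
  );
}

function isAdmin(admins: Set<string>, login: string, nodeId?: string): boolean {
  return (
    admins.has(login.toLowerCase()) ||
    (nodeId !== undefined && admins.has(nodeId.toLowerCase()))
  );
}
```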
The first API call or dashboard render triggers schema creation (tables for
users, repositories, issues, pull requests, reviews, comments, and sync
metadata). Ensure DATABASE_URL points to a database the app can manage.
To reset the data store manually:

```bash
curl -X POST http://localhost:3000/api/sync/reset -d '{"preserveLogs":true}' \
  -H "Content-Type: application/json"
```

Use the interactive helper to apply and validate dashboard-specific indexes on
an existing dataset (ensure `CREATE EXTENSION IF NOT EXISTS pg_trgm;` has been
run on the target database so trigram GIN indexes can build successfully):

```bash
node scripts/db/apply-indexes.mjs
```

Flags:

- `--concurrently` — build each index with `CREATE INDEX CONCURRENTLY` to avoid long-lived locks (slower but safe for live traffic)
- `--yes` — auto-confirm prompts while still respecting default answers (optional indexes and verification queries remain skipped unless explicitly enabled)
- `--include-optional` — include optional indexes (JSONB-wide GIN indexes used for experimentation) alongside the default set
Each step prints the DDL statement, runs `ANALYZE` on the affected table, and
offers to execute representative `EXPLAIN (ANALYZE, BUFFERS)` queries. After
verifying the impact, capture the same statements in a migration or schema
update so fresh environments do not need the interactive script.
- Manual backfill — choose a start date on the dashboard or call `POST /api/sync/backfill { startDate }` to fetch data up to the present.
- Incremental sync — toggle auto-sync on the dashboard or call `POST /api/sync/auto { enabled: true }`; it runs immediately and then every `SYNC_INTERVAL_MINUTES` minutes using the latest successful sync timestamp.
- Status & analytics — the dashboard consumes `GET /api/sync/status` and `GET /api/data/stats` to present sync logs, data freshness, counts, and top contributors/repositories.
- Stuck sync cleanup — administrators can use the Sync tab's "멈춘 동기화 정리" (clean up stuck syncs) button or call `POST /api/sync/admin/cleanup` to mark lingering `running` sync runs/logs as failed so the real-time panel clears. As a manual fallback, run `UPDATE sync_runs SET status = 'failed', completed_at = NOW(), updated_at = NOW() WHERE status = 'running';` (and similar for `sync_log`) via `psql` or a SQL client.
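The cleanup endpoint's core idea, flagging runs stuck in `running` past some age, might look roughly like this (the 30-minute threshold and the `SyncRun` shape are assumptions made for the sketch; the real endpoint decides its own criteria):

```typescript
// Sketch: pick out sync runs stuck in "running" longer than a cutoff.
interface SyncRun {
  id: number;
  status: "running" | "success" | "failed";
  startedAt: Date;
}

function findStuckRuns(runs: SyncRun[], now: Date, maxAgeMinutes = 30): SyncRun[] {
  const cutoff = now.getTime() - maxAgeMinutes * 60_000;
  return runs.filter((r) => r.status === "running" && r.startedAt.getTime() < cutoff);
}
```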
Manual backfill and automatic sync share the same pipeline.
- GitHub data collection — `runCollection` fetches repositories, issues, discussions, pull requests, reviews, and comments through the GraphQL API and upserts them into PostgreSQL.
  - Every resource writes a `running → success/failed` entry to `sync_log`.
  - The latest `updated_at` timestamp is stored in `sync_state` so the next run can reuse it as the `since` boundary.
- Sync metadata updates — before the run starts it creates a `sync_run` record; on completion it refreshes the `sync_config` `last_sync_*` fields and broadcasts Server-Sent Events (`run-started`, `run-completed`, `run-failed`, etc.) to the dashboard.
- Post-processing steps — after `runCollection` succeeds the sync service records a series of logged steps (`logSyncStep`) so each operation emits progress/failure events:
  - Refresh unanswered-attention reaction caches (`refreshAttentionReactions`)
  - Apply issue status automation (`ensureIssueStatusAutomation`)
  - Refresh the activity snapshot (`refreshActivityItemsSnapshot`)
  - Refresh activity caches (`refreshActivityCaches`)
  - Re-classify unanswered mentions (`runUnansweredMentionClassification`)
Automatic sync additionally schedules the next run, while manual backfill repeats the same steps for each day slice in the requested range.
Manual controls: Issue status automation, the activity snapshot rebuild, and the activity cache refresh all run automatically after each sync, and administrators can re-run them from the Sync tab ("진행 상태 설정" / set progress status, "Activity 스냅샷 재생성" / regenerate the Activity snapshot, "Activity 캐시 새로고침" / refresh the Activity caches) if a previous attempt failed or stale data needs to be refreshed immediately.
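The logged-step pattern described above (emit running, then success or failed) can be sketched generically; `runLoggedStep` and the emit callback below are illustrative stand-ins, not the actual `logSyncStep` signature:

```typescript
// Sketch of a logSyncStep-style wrapper: emit "running", await the step,
// then emit "success" or "failed". Event names and shapes are assumptions.
type StepStatus = "running" | "success" | "failed";

async function runLoggedStep<T>(
  name: string,
  step: () => Promise<T>,
  emit: (name: string, status: StepStatus, error?: string) => void,
): Promise<T> {
  emit(name, "running");
  try {
    const result = await step();
    emit(name, "success");
    return result;
  } catch (err) {
    emit(name, "failed", err instanceof Error ? err.message : String(err));
    throw err; // let the pipeline decide whether to abort or continue
  }
}
```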
- Server-Sent Events — `GET /api/sync/stream` keeps an HTTP connection open (`Content-Type: text/event-stream`) and pushes run lifecycle updates (`run-started`, `log-started`, `log-updated`, `run-completed`, `run-failed`) plus periodic heartbeats. The dashboard subscribes with `new EventSource("/api/sync/stream")` to surface "backfill started → resources progressing → completed" across all tabs via a shared status panel.
- Events include run metadata (type, strategy, since/until window), per-resource log status, completion summaries, and failure messages. The panel falls back to `/api/sync/status` for initial hydration and whenever the SSE connection re-opens.
- No extra libraries are required — the server uses the built-in Next.js App Router streaming response API, and the browser relies on native `EventSource` with automatic reconnection. Expect a single SSE connection per browser tab (~50 concurrent clients are well within the Node runtime budget).
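On the client side, the lifecycle events could be folded into panel state with a small reducer. The `PanelState` shape is an assumption; in the browser this reducer would sit behind `new EventSource("/api/sync/stream")` listeners:

```typescript
// Sketch: fold SSE lifecycle events into a simple status-panel state.
// Event names come from the docs above; the state shape is assumed.
interface PanelState {
  phase: "idle" | "running" | "completed" | "failed";
  logs: string[];
}

function reducePanel(state: PanelState, event: string, data?: string): PanelState {
  switch (event) {
    case "run-started":
      return { phase: "running", logs: [] };
    case "log-started":
    case "log-updated":
      return { ...state, logs: data ? [...state.logs, data] : state.logs };
    case "run-completed":
      return { ...state, phase: "completed" };
    case "run-failed":
      return { ...state, phase: "failed" };
    default:
      return state; // heartbeats and unknown events are ignored
  }
}
```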
The Activity view's advanced filters now distinguish between applied users and people who are only conditionally relevant to the selected attention type.

- When you combine an attention (주의) type with one or more members (구성원), only the roles that matter for that attention are sent to the API (e.g., stalled Backlog issues → `maintainerIds`). Other people fields keep the same users as gray chips but stay out of the query.
- Gray chips represent a "selected but conditional" state. They persist when you reload or restore a saved filter, but they do not affect the query until you remove them or reselect them to promote them back into blue chips.
- Clearing the attention filter removes every gray chip and returns the people filters to the regular applied state (blue chips).
- Even when you pick multiple members, each person contributes only the roles they satisfy, combined with OR logic, so the team stays focused on items they truly own.
- While an attention type and members are both selected, the six advanced people-filter (사람 필터) inputs become read-only with a helper message; the dashboard manages their chips automatically.
- Role mapping by attention:
  - Stalled Backlog issues (정체된 Backlog 이슈) → `maintainerIds` blue, all other people-filter fields empty.
  - Stalled In Progress issues (정체된 In Progress 이슈) → `assigneeIds` blue, `authorIds` gray, the remaining four people-filter fields empty.
  - PRs with no reviewer assigned (리뷰어 미지정 PR) → `maintainerIds`, `authorIds` blue, other people-filter fields empty.
  - PRs with stalled reviews (리뷰 정체 PR) → `maintainerIds`, `reviewerIds` blue, other people-filter fields empty.
  - Merge-delayed PRs (머지 지연 PR) → `assigneeIds` (if any) else `maintainerIds` blue, other people-filter fields empty.
  - Unanswered review requests (응답 없는 리뷰 요청) → `reviewerIds` blue, all other people-filter fields empty.
  - Unanswered mentions (응답 없는 멘션) → `mentionedUserIds` blue, all other people-filter fields empty.
- Conflicting attention types (e.g., stalled Backlog issues + unanswered mentions) degrade the overlapping people-filter roles to gray chips so the UI reflects that the selection is optional for that combination.
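The static part of this mapping can be sketched as a lookup table. The attention-type keys below are invented identifiers for illustration, and the conditional merge-delayed case (`assigneeIds` if any, else `maintainerIds`) is intentionally omitted because it depends on item data:

```typescript
// Sketch of the attention-type → applied-role mapping described above.
// Role field names match the docs; the keys are hypothetical.
type PeopleRole =
  | "maintainerIds"
  | "assigneeIds"
  | "authorIds"
  | "reviewerIds"
  | "mentionedUserIds";

const attentionRoles: Record<string, PeopleRole[]> = {
  stalledBacklogIssues: ["maintainerIds"],
  stalledInProgressIssues: ["assigneeIds"],
  unassignedReviewerPrs: ["maintainerIds", "authorIds"],
  stalledReviewPrs: ["maintainerIds", "reviewerIds"],
  unansweredReviewRequests: ["reviewerIds"],
  unansweredMentions: ["mentionedUserIds"],
};

// Roles not in the applied set stay as gray (conditional) chips.
function appliedRoles(attention: string): PeopleRole[] {
  return attentionRoles[attention] ?? [];
}
```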
If you want issue status, priority, start date, or other metadata to mirror a
specific GitHub Projects (beta) board, set TODO_PROJECT_NAME to the project’s
name (case-insensitive match). When provided, the sync pipeline will:
- import project status history so dashboard filters (for example `Todo`, `In Progress`) reflect the project board
- lock issue statuses that are managed by the project and surface project field values (priority, start date, etc.) in the UI
Leaving TODO_PROJECT_NAME unset (or blank) is also valid. In that case the
dashboard relies solely on statuses stored in the local database—manual updates
made from the dashboard continue to work, while project-driven fields and locks
are simply disabled. This is useful if you track issue progress entirely inside
the dashboard or maintain multiple GitHub projects and only want dashboard-side
state.
- The dashboard treats weekends as non-working time across all business-day and business-hour calculations.
- Organization-wide metrics (e.g., Activity/Follow-ups' stalled Backlog/In Progress issues, PRs with no reviewer, stalled-review PRs, and merge-delayed PRs, plus Analytics/People's average resolution/work time) rely on the organization holiday calendar codes configured in `sync_config`. Those dates are combined with weekends to form the working calendar.
- User-specific wait metrics (Activity/Follow-ups' unanswered review requests and unanswered mentions, and Analytics/People's review response time) first try to load the reviewer's personal holiday preferences and time-off entries. When a person supplies their own calendar or personal days, only those dates (plus weekends) are excluded from response-time calculations.
- If a person has not configured any personal calendar codes, the system falls back to the organization holiday set so that users without preferences still benefit from holiday-aware metrics.
- Personal time entries expand multi-day ranges into individual dates, ensuring that longer vacations are excluded consistently without requiring day-by-day entries.
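A holiday-aware business-day count like the one these metrics rely on can be sketched as follows (illustrative only; the dashboard's actual calendar code may differ):

```typescript
// Sketch: count business days between two UTC dates, excluding weekends
// and a holiday set of ISO yyyy-mm-dd strings. End date is exclusive.
function businessDaysBetween(start: Date, end: Date, holidays: Set<string>): number {
  let count = 0;
  const cursor = new Date(start);
  while (cursor < end) {
    const day = cursor.getUTCDay(); // 0 = Sunday, 6 = Saturday
    const iso = cursor.toISOString().slice(0, 10);
    if (day !== 0 && day !== 6 && !holidays.has(iso)) count += 1;
    cursor.setUTCDate(cursor.getUTCDate() + 1);
  }
  return count;
}
```

Per-person calendars slot in by swapping the holiday set: a user's personal holiday codes plus expanded time-off dates, falling back to the organization set when none are configured.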
The dashboard includes a lightweight backup scheduler so operators can snapshot the PostgreSQL database without leaving the UI.
Set the following environment variables alongside DATABASE_URL:
| Variable | Description | Default |
|---|---|---|
| `DB_BACKUP_DIRECTORY` | Absolute path where `.dump` files are written. Must be writable by the dashboard process. | `backups` in the project root |
| `DB_BACKUP_RETENTION` | Maximum number of successful backups to keep. Older dumps (and their metadata) are deleted automatically. | 3 |
The scheduler reads the user-configured hour and timezone from `sync_config`.
Administrators can adjust the schedule from Dashboard → Sync (동기화) → Backup (백업) or via
`PATCH /api/sync/config { "backupHour": number }`.
- A record is inserted into `db_backups` with status `running`.
- `pg_dump --format=custom` writes a file named `db-backup-YYYYMMDD-HHMMSS.dump` under `DB_BACKUP_DIRECTORY`.
- On success the record is updated to `success`, the size is stored, and a metadata sidecar file (`.meta.json`) is written next to the dump, for example:

  ```json
  {
    "status": "success",
    "trigger": "manual",
    "startedAt": "2025-10-25T22:57:59.000Z",
    "completedAt": "2025-10-25T22:58:02.000Z",
    "sizeBytes": 675028992,
    "createdBy": "admin-user"
  }
  ```

  The metadata makes it easy to verify a dump even if you move it to another machine: the UI reads the sidecar and falls back to filesystem timestamps if it is missing.
- When retention is exceeded both the dump and sidecar are removed.
If `pg_dump` fails the record becomes `failed`, the dump is left in place for
inspection, and no metadata file is written.
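Retention pruning, keeping only the newest `DB_BACKUP_RETENTION` dumps, reduces to a sort-and-slice. This is a sketch; `BackupRecord` and `backupsToPrune` are assumed names, not the app's actual code:

```typescript
// Sketch: given successful backups, select the older ones beyond the
// retention limit for deletion (dump + sidecar).
interface BackupRecord {
  file: string;
  completedAt: Date;
}

function backupsToPrune(backups: BackupRecord[], retention: number): BackupRecord[] {
  const sorted = [...backups].sort(
    (a, b) => b.completedAt.getTime() - a.completedAt.getTime(), // newest first
  );
  return sorted.slice(retention); // everything past the newest `retention`
}
```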
Use 복구 (Restore) in the backup panel or call
`POST /api/backup/<restoreKey>/restore`. The handler:

- Drops and recreates the `public` schema.
- Runs `pg_restore --clean --if-exists` with the selected dump.
- Marks the record as restored and refreshes the scheduler.
Because the schema is rebuilt during restore, ensure you do not have unsaved
changes in the target database. Copy both the .dump and .meta.json files if
you need to move backups between environments.
Backups and sync runs share a cooperative locking system:
- Backups acquire a `withJobLock("backup")` lock for the duration of `pg_dump`. The job waits if a sync run is still in progress. As soon as the sync run finishes the queued backup starts automatically.
- Sync runs acquire `withJobLock("sync")` (inside the sync pipeline). When a backup is in progress, new sync runs wait for the backup to finish.
- Restore operations use a separate lock, `withJobLock("restore")`. While a restore is running, both sync runs and new backups wait until the restore completes.
This guarantees only one of the long-running jobs (backup, restore, sync) touches
the database at a time, preventing pg_dump from racing with writes. See
What each sync run does for the full pipeline—GitHub
data collection and the subsequent post-processing steps (automation, snapshot,
caches) run sequentially. Administrators can manually re-run automation and cache
refresh from the Sync tab if needed (the snapshot remains automatic-only).
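The queue-and-wait behavior can be sketched in-process as a promise chain, though the real `withJobLock` may coordinate through the database rather than in memory; this `JobLock` class is an illustrative stand-in:

```typescript
// Minimal sketch of cooperative locking: jobs queue on a shared promise
// chain, so only one long-running job (backup/restore/sync) runs at a time.
class JobLock {
  private tail: Promise<void> = Promise.resolve();

  run<T>(job: () => Promise<T>): Promise<T> {
    const result = this.tail.then(job);
    // Keep the chain alive even if the job rejects.
    this.tail = result.then(() => undefined, () => undefined);
    return result;
  }
}
```

A queued backup submitted while a sync holds the lock simply awaits the chain and starts the moment the sync settles, matching the "waits, then starts automatically" behavior described above.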
- `pnpm run lint` — Biome linting (Rust binary via CI and local pnpm package)
- `pnpm run format` — Biome formatter (writes changes)
- `biome ci --error-on-warnings .` — direct CLI run that combines Biome's formatter and linter checks without writing files; it subsumes `pnpm run lint` (lint-only wrapper) while differing from `pnpm run format`, which applies formatting edits instead of reporting them
- `pnpm run typecheck` — `tsc --noEmit`
- `pnpm run test` — Vitest unit and component tests
- `pnpm run test:db` — PostgreSQL integration suite scoped by `vitest.db.config.ts` to `*.db.test.ts` specs; each spec imports `tests/helpers/postgres-container` to launch a disposable PostgreSQL 16 Testcontainer, injects its connection URI into `DATABASE_URL`, runs `ensureSchema()` so tables exist, and stops the container once the suite finishes. Ensure Docker (or Colima on macOS) is running first, and keep each spec responsible for cleaning its tables (for example with `TRUNCATE`) to stay isolated.
- `pnpm run test:watch` — watch mode
- `pnpm run test:e2e` — Playwright browser tests (requires the Playwright browser install step above); uses dedicated test harness routes under `/test-harness/*` such as:
  - SettingsView — http://localhost:3000/test-harness/settings
  - SyncControls — http://localhost:3000/test-harness/sync
  - Session bootstrap — http://localhost:3000/test-harness/auth/session
  - Analytics filters — http://localhost:3000/test-harness/analytics
  - People insights — http://localhost:3000/test-harness/people
  - Dashboard tabs — http://localhost:3000/test-harness/dashboard-tabs
- `pnpm run ci` — sequentially runs `biome ci --error-on-warnings .`, `pnpm run typecheck`, `pnpm run test`, and `pnpm run test:db`
The GitHub Actions workflow (.github/workflows/ci.yml) starts a Postgres 16
service, generates an ephemeral SESSION_SECRET, and expects the following
repository secrets to be configured:
- `OAUTH_CLIENT_ID`
- `OAUTH_CLIENT_SECRET`
- `OAUTH_ALLOWED_ORG`
These values are used during end-to-end tests to exercise the GitHub OAuth flow. Update them per environment as needed.
Builds use the standalone Next.js output for small production images. The base
Node.js image in `Dockerfile` is pinned by digest to ensure reproducible builds.
```bash
cp .env.example .env   # if you need a starting point
vim .env               # set GitHub OAuth creds, DATABASE_URL, etc.
./infra/nginx/certs/generate.sh
docker compose up --build
```

Docker Compose reads environment values from `.env` in the project root. This
file is distinct from `.env.local`, which is used only by the local Next.js dev
server.
When deploying to a linux/amd64 host (for example, from an Apple Silicon
workstation), build the image with `docker buildx`, verify it behind the HTTPS
proxy, and ship it as a tarball.
- Build for the target platform and tag the image:

  ```bash
  docker buildx build --platform linux/amd64 -t github-dashboard:0.2.0 --load .
  ```

  Replace `github-dashboard:0.2.0` with the tag you plan to deploy. The command automatically uses your default builder (create one with `docker buildx create --use` if missing). `--load` imports the built image into the local Docker daemon so the tar export in the next step works.
- Optionally smoke-test locally over HTTPS:

  ```bash
  ./infra/nginx/certs/generate.sh   # creates local.crt/local.key for localhost
  docker compose up --build
  ```

  Visit https://localhost (accept the self-signed certificate if prompted), then stop the stack with `docker compose down`. Provide `GITHUB_TOKEN` and other secrets via `.env` before running.
- Export the image to a tarball and transfer it to the server (copy your production TLS certificate and key alongside the tarball, or re-run the generation script there with the appropriate host names):

  ```bash
  docker save github-dashboard:0.2.0 -o github-dashboard-0.2.0.tar
  scp github-dashboard-0.2.0.tar user@server:/path/to/github-dashboard/
  ```

  The bundled script issues certificates for `localhost`; replace `infra/nginx/certs/local.crt` and `local.key` with files signed for your real domain before serving traffic publicly.
On the server, load the tarball, install the HTTPS certificate, and restart the containers (adjust the commands to your setup):
docker compose down docker load -i /path/to/github-dashboard/github-dashboard-0.2.0.tar docker compose up -d --force-recreate
Ensure
/path/to/github-dashboard/infra/nginx/certs/local.crtand/path/to/github-dashboard/infra/nginx/certs/local.keycontain the certificate and key signed for your server before restarting the stack. The--force-recreateflag tears down and rebuilds all containers even when the configuration or images have not changed, guaranteeing that the freshly loaded image is used.If you manage the container manually, use
docker stop <container>followed bydocker run ... github-dashboard:0.2.0instead of the Compose commands.
The nginx proxy listens only on HTTPS (https://localhost) and redirects any
HTTP attempts to the secure endpoint.
- Node app: internal on port 3000 (reachable via the proxy only)
- nginx proxy: exposes port 443 (HTTPS) and forwards traffic to the app container. Certificates live in `infra/nginx/certs/`.
Docker Compose keeps the application and nginx proxy definitions in
`docker-compose.yml`, letting you run the full stack with a single command:
```yaml
services:
  app:
    build: .
    env_file:
      - .env
    environment:
      NODE_ENV: production
      PORT: 3000
      HOSTNAME: 0.0.0.0
      GITHUB_TOKEN: ${GITHUB_TOKEN:-}
      GITHUB_ORG: ${GITHUB_ORG:-}
      DATABASE_URL: ${DATABASE_URL}
    restart: unless-stopped

  proxy:
    image: nginx:1.28.0
    depends_on:
      - app
    ports:
      - "443:443"
    volumes:
      - ./infra/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./infra/nginx/certs:/etc/nginx/certs:ro
    restart: unless-stopped
```

- Copy `.env`, TLS assets (`infra/nginx/certs/*`), and either the source tree or a pre-built image tarball to the server.
- Run `docker compose up -d --build` the first time (or `--force-recreate` after loading a new tarball) to start both services. Compose injects the environment values from `.env` and mounts the nginx configuration automatically.
If you prefer to manage containers manually, mirror the compose setup by starting the app and proxy containers yourself:
- Load the image and create a shared network:

  ```bash
  docker load -i github-dashboard-0.2.0.tar
  docker network create github-dashboard-net || true
  ```

- Launch the application container with the required environment variables:

  ```bash
  docker stop github-dashboard-app 2>/dev/null || true
  docker rm github-dashboard-app 2>/dev/null || true
  docker run -d \
    --name github-dashboard-app \
    --env-file .env \
    --network github-dashboard-net \
    github-dashboard:0.2.0
  ```

- Start the nginx proxy container and expose HTTPS:

  ```bash
  docker stop github-dashboard-proxy 2>/dev/null || true
  docker rm github-dashboard-proxy 2>/dev/null || true
  docker run -d \
    --name github-dashboard-proxy \
    --network github-dashboard-net \
    -p 443:443 \
    -v $(pwd)/infra/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro \
    -v $(pwd)/infra/nginx/certs:/etc/nginx/certs:ro \
    nginx:1.28.0
  ```
Maintain the same `.env` file (or equivalent secrets store) for both containers
and restart them with `docker restart <name>` when shipping updates.
src/app/ → Next.js App Router routes, layouts, and API handlers
src/components/ → Shared UI components (shadcn/ui + custom)
src/lib/ → Utilities (db, auth, GitHub client, sync scheduler)
src/lib/auth/ → GitHub OAuth, session, and membership helpers
docs/ → Setup guides (e.g., GitHub OAuth registration)
tests/ → Vitest specs, Playwright E2E suites, and helpers
public/ → Static assets served by Next.js
infra/ → Docker/nginx assets for HTTPS proxying
- `scripts/backfill-social-signals.ts` — one-off helper to rebuild the activity snapshot and social-signal caches. Run it when you want a full refresh outside the usual sync pipeline; add `--signals-only` to rebuild only the social-signal tables.
- `scripts/backfill-node-ownership.ts` — fixes "card shows repo A but link opens repo B" situations by re-fetching the canonical repository/URL for each affected issue or discussion and upserting it back into PostgreSQL. The default invocation (`pnpm run backfill:ownership`) processes up to 500 candidates per run; prepend `--dry-run` to log what would change without writing, pass `--limit <n>` or `--chunk-size <n>` to tune throughput, and use `--id <nodeID>` (repeatable) to inspect or repair specific GitHub nodes.
- `scripts/db/backup.mjs` — wraps `pg_dump` so you can produce database backups with a single command. Supports `--dir`, `--label`, `--format`, and `--pg-dump` flags for customizing the output location and filename.
- `scripts/run-db-tests.mjs` — verifies that PostgreSQL (or Testcontainers) is available, then runs `vitest` with `vitest.db.config.ts`. If no database is reachable it exits early, letting CI/local workflows skip DB-backed tests.
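The `--limit`/`--chunk-size` batching used by the backfill scripts reduces to capping the candidate list and splitting it into chunks. The helper names below (`chunk`, `planBatches`) and the default chunk size are hypothetical; only the 500-candidate default comes from the docs above:

```typescript
// Sketch of limit + chunk-size batching for backfill candidates.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

function planBatches<T>(candidates: T[], limit = 500, chunkSize = 50): T[][] {
  // Cap the run at `limit` candidates, then process in `chunkSize` groups.
  return chunk(candidates.slice(0, limit), chunkSize);
}
```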
Environment variables are parsed through `src/lib/env.ts`:
| Variable | Required | Description |
|---|---|---|
| `GITHUB_TOKEN` | ✅ | GitHub token with `read:user` + repository metadata scope |
| `GITHUB_ORG` | ✅ | Organization login to target for data collection |
| `GITHUB_OAUTH_CLIENT_ID` | ✅ | GitHub OAuth App client identifier |
| `GITHUB_OAUTH_CLIENT_SECRET` | ✅ | GitHub OAuth App client secret |
| `GITHUB_ALLOWED_ORG` | ✅ | GitHub organization slug allowed to sign in; non-admin users stay blocked until allowed teams or members are configured in Settings |
| `GITHUB_ALLOWED_BOT_LOGINS` | ⛔ | Comma-separated GitHub logins that may sign in even if they are not organization members (for example octoaide) |
| `DASHBOARD_ADMIN_IDS` | ⛔ | Comma-separated GitHub logins or node IDs with admin privileges |
| `APP_BASE_URL` | ✅ | Absolute origin used to build OAuth callback URLs |
| `SESSION_SECRET` | ✅ | Secret key for signing session cookies |
| `DATABASE_URL` | ✅ | PostgreSQL connection string |
| `SYNC_INTERVAL_MINUTES` | ⛔ (default 60) | Interval for automatic incremental sync |
| `TODO_PROJECT_NAME` | ⛔ | Optional GitHub Projects board name to mirror issue metadata |
Define them in `.env.local` for local development or provide them via your
hosting platform. Docker Compose reads from `.env` in the project root.
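As a minimal sketch of the kind of validation `src/lib/env.ts` performs, required variables throw when missing while optional ones get defaults (the `AppEnv` shape and `parseEnv` helper are illustrative, not the real module's API):

```typescript
// Illustrative env validation: require some variables, default others.
interface AppEnv {
  githubToken: string;
  databaseUrl: string;
  syncIntervalMinutes: number;
  todoProjectName?: string;
}

function parseEnv(source: Record<string, string | undefined>): AppEnv {
  const required = (key: string): string => {
    const value = source[key];
    if (!value) throw new Error(`Missing required environment variable: ${key}`);
    return value;
  };
  return {
    githubToken: required("GITHUB_TOKEN"),
    databaseUrl: required("DATABASE_URL"),
    syncIntervalMinutes: Number(source.SYNC_INTERVAL_MINUTES ?? "60"),
    todoProjectName: source.TODO_PROJECT_NAME || undefined, // blank counts as unset
  };
}
```

Failing fast at startup with a named missing variable is generally preferable to letting an undefined value surface later as an opaque runtime error.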