GitHub Dashboard

GitHub Dashboard collects GitHub organization data into PostgreSQL and exposes configuration, sync controls, and analytics through a Next.js dashboard.

AI-Driven Development Principles

  • Most application code is intentionally produced by AI agents.
  • We prioritize tools and frameworks that AI agents can handle with confidence.
  • Comprehensive automated tests are treated as essential so that AI-authored code remains trustworthy, and those tests are generated by AI agents as well.
  • AI agents contribute to both engineering and design efforts across the project.

Prerequisites

  • Node 22: Use Node.js 22.x. Do not use Node 24.
    • nvm example: nvm install 22 && nvm use 22
    • macOS (Homebrew):
      • brew update && brew install node@22
      • Unlink the previous version: brew unlink node
      • Link 22: brew link --overwrite --force node@22
      • Verify: node -v should print v22.x.y
  • pnpm: Use pnpm 10+.
    • macOS (Homebrew):
      • brew install pnpm
    • Linux (Corepack with Node 22):
      • corepack enable
      • corepack prepare pnpm@10 --activate
    • Windows (Corepack with Node 22):
      • corepack enable
      • corepack prepare pnpm@10 --activate
  • Next.js: Use 15 (latest patch) for Node 22 compatibility; do not use 16. Installed via pnpm install.
  • Docker: Install Docker (Docker Desktop or Docker Engine).
  • Biome CLI 2.x (Rust binary) available on your PATH – download a release build and place biome somewhere executable, or compile it yourself via Cargo following the Biome documentation.
  • PostgreSQL 14+ running locally or reachable via connection string
  • A GitHub OAuth App configured for your environments (see docs/github-oauth-app.md)
  • (Optional) A GitHub personal access token with read:user and repository metadata scopes (GITHUB_TOKEN) for legacy data collection utilities

Local Development

  1. Use a shared pnpm store across repos:

    pnpm config set store-dir ~/.pnpm-store --global
  2. Install dependencies:

    pnpm install
  3. (Important) Approve build scripts (required for pnpm v9+):

    pnpm approve-builds
  4. Install Playwright browsers (one-time per machine):

    pnpm exec playwright install --with-deps
    • Linux (e.g., CI runners): --with-deps also installs required system packages so browsers run out of the box.
    • macOS: the flag is effectively a no-op; it only downloads the browser binaries, so leaving it on is harmless.
  5. Provide environment variables (pnpm run dev reads from .env.local or the current shell). Copy .env.example to .env.local and replace the placeholders:

    export GITHUB_TOKEN=<ghp_token>
    export GITHUB_ORG=<github_org>
    export GITHUB_OAUTH_CLIENT_ID=<oauth_client_id>
    export GITHUB_OAUTH_CLIENT_SECRET=<oauth_client_secret>
    export GITHUB_ALLOWED_ORG=<allowed_org_slug>
    export DASHBOARD_ADMIN_IDS=owner_login,ops-team
    export APP_BASE_URL=http://localhost:3000   # production: https://your-domain
    export SESSION_SECRET=$(openssl rand -hex 32)
    export DATABASE_URL=postgres://<user>:<password>@localhost:5432/<database>
    export SYNC_INTERVAL_MINUTES=60
    export TODO_PROJECT_NAME="to-do list"   # optional; see below for details
  6. Start the dev server:

    pnpm run dev
  7. Visit the app (all dashboard routes require GitHub sign-in and organization membership):

    • http://localhost:3000 — landing page with quick links
    • http://localhost:3000/dashboard — data collection controls & analytics
    • http://localhost:3000/github-test — GraphQL connectivity test page

GitHub authentication is mandatory. Authorized members are issued a signed session cookie; non-members are redirected to /auth/denied with instructions on granting access under Settings → Applications → Authorized OAuth Apps. Full OAuth setup instructions live in docs/github-oauth-app.md.

Long-lived sign-in uses access/idle/refresh/max session TTLs, and sensitive actions (for example, organization settings changes and cleanup endpoints) trigger a reauthentication prompt. These defaults can be adjusted in Settings → Organization.

Administrators are identified through DASHBOARD_ADMIN_IDS, a comma-separated list of GitHub logins or node IDs. Admin users can modify organization-wide settings (org name, sync cadence, excluded repositories/members), while all authenticated users can adjust their personal timezone and week-start preferences.

PostgreSQL schema bootstrap

The first API call or dashboard render triggers schema creation (tables for users, repositories, issues, pull requests, reviews, comments, and sync metadata). Ensure DATABASE_URL points to a database the app can manage.

To reset the data store manually:

curl -X POST http://localhost:3000/api/sync/reset -d '{"preserveLogs":true}' \
  -H "Content-Type: application/json"

Index maintenance script

Use the interactive helper to apply and validate dashboard-specific indexes on an existing dataset (ensure CREATE EXTENSION IF NOT EXISTS pg_trgm; has been run on the target database so trigram GIN indexes can build successfully):

node scripts/db/apply-indexes.mjs

Flags:

  • --concurrently — build each index with CREATE INDEX CONCURRENTLY to avoid long-lived locks (slower but safe for live traffic)
  • --yes — auto-confirm prompts while still respecting default answers (optional indexes and verification queries remain skipped unless explicitly enabled)
  • --include-optional — include optional indexes (JSONB-wide GIN indexes used for experimentation) alongside the default set

Each step prints the DDL statement, runs ANALYZE on the affected table, and offers to execute representative EXPLAIN (ANALYZE, BUFFERS) queries. After verifying the impact, capture the same statements in a migration or schema update so fresh environments do not need the interactive script.

Data collection flows

  • Manual backfill — choose a start date on the dashboard or call POST /api/sync/backfill { startDate } to fetch data up to the present.
  • Incremental sync — toggle auto-sync on the dashboard or call POST /api/sync/auto { enabled: true }; it runs immediately and then every SYNC_INTERVAL_MINUTES minutes using the latest successful sync timestamp (see the scripted example after this list).
  • Status & analytics — the dashboard consumes GET /api/sync/status and GET /api/data/stats to present sync logs, data freshness, counts, and top contributors/repositories.
  • Stuck sync cleanup — administrators can use the Sync tab’s “멈춘 동기화 정리” (clean up stuck syncs) button (or call POST /api/sync/admin/cleanup) to mark lingering running sync runs/logs as failed so the real-time panel clears. As a manual fallback, run UPDATE sync_runs SET status = 'failed', completed_at = NOW(), updated_at = NOW() WHERE status = 'running'; (and similar for sync_log) via psql or a SQL client.
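
The backfill and auto-sync endpoints above can also be driven from a script. Below is a minimal sketch using Node 22's built-in fetch; the session-cookie handling is an assumption, since every dashboard API requires an authenticated GitHub session.

// sync-trigger.ts: sketch of driving the sync API from a script.
// DASHBOARD_SESSION_COOKIE is a hypothetical variable holding the signed
// session cookie of an authenticated user; adapt it to your setup.
const BASE_URL = process.env.APP_BASE_URL ?? "http://localhost:3000";
const SESSION_COOKIE = process.env.DASHBOARD_SESSION_COOKIE ?? "";

async function post(path: string, body: unknown): Promise<unknown> {
  const response = await fetch(`${BASE_URL}${path}`, {
    method: "POST",
    headers: { "Content-Type": "application/json", Cookie: SESSION_COOKIE },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`${path} failed with HTTP ${response.status}`);
  }
  return response.json();
}

// Backfill from a fixed start date, then turn on incremental auto-sync.
await post("/api/sync/backfill", { startDate: "2024-01-01" });
await post("/api/sync/auto", { enabled: true });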

What each sync run does

Manual backfill and automatic sync share the same pipeline.

  1. GitHub data collection — runCollection fetches repositories, issues, discussions, pull requests, reviews, and comments through the GraphQL API and upserts them into PostgreSQL.

    • Every resource writes a running → success/failed entry to sync_log.
    • The latest updated_at timestamp is stored in sync_state so the next run can reuse it as the since boundary.
  2. Sync metadata updates — before the run starts it creates a sync_run record; on completion it refreshes the sync_config last_sync_* fields and broadcasts Server-Sent Events (run-started, run-completed, run-failed, etc.) to the dashboard.

  3. Post-processing steps — after runCollection succeeds, the sync service records a series of logged steps (logSyncStep) so each operation emits progress/failure events:

    • Refresh unanswered-attention reaction caches (refreshAttentionReactions)
    • Apply issue status automation (ensureIssueStatusAutomation)
    • Refresh the activity snapshot (refreshActivityItemsSnapshot)
    • Refresh activity caches (refreshActivityCaches)
    • Re-classify unanswered mentions (runUnansweredMentionClassification)

Automatic sync additionally schedules the next run, while manual backfill repeats the same steps for each day slice in the requested range.

Manual controls: Issue status automation, the activity snapshot rebuild, and the activity cache refresh all run automatically after each sync, and administrators can re-run them from the Sync tab (“진행 상태 설정” (set progress status), “Activity 스냅샷 재생성” (rebuild the Activity snapshot), “Activity 캐시 새로고침” (refresh Activity caches)) if a previous attempt failed or stale data needs to be refreshed immediately.
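
For illustration only, a rough sketch of the logged-step pattern described above. This is not the project's sync service; the SyncStep shape and the logSyncStep signature shown here are assumptions.

// Sketch of sequential, logged post-processing steps: each step records a
// start, executes, then records success or failure so the dashboard stream
// can show progress. The real implementation writes sync_log entries and
// emits Server-Sent Events instead of logging to the console.
type SyncStep = { name: string; run: () => Promise<void> };

async function logSyncStep(runId: string, step: SyncStep): Promise<void> {
  console.log(`[${runId}] ${step.name} started`);
  try {
    await step.run();
    console.log(`[${runId}] ${step.name} succeeded`);
  } catch (error) {
    console.error(`[${runId}] ${step.name} failed`, error);
    throw error;
  }
}

async function runPostProcessing(runId: string, steps: SyncStep[]): Promise<void> {
  // Steps run in the order listed above: attention reactions, status
  // automation, activity snapshot, activity caches, mention classification.
  for (const step of steps) {
    await logSyncStep(runId, step);
  }
}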

Real-time sync stream

  • Server-Sent Events — GET /api/sync/stream keeps an HTTP connection open (Content-Type: text/event-stream) and pushes run lifecycle updates (run-started, log-started, log-updated, run-completed, run-failed) plus periodic heartbeats. The dashboard subscribes with new EventSource("/api/sync/stream") to surface “backfill started → resources progressing → completed” across all tabs via a shared status panel; a subscription sketch follows this list.
  • Events include run metadata (type, strategy, since/until window), per-resource log status, completion summaries, and failure messages. The panel falls back to /api/sync/status for initial hydration and whenever the SSE connection re-opens.
  • No extra libraries are required—the server uses the built-in Next.js App Router streaming response API, and the browser relies on native EventSource with automatic reconnection. Expect a single SSE connection per browser tab (~50 concurrent clients are well within the Node runtime budget).
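
For reference, a minimal browser-side subscription sketch; the event names come from the list above, and what you do with each payload is up to the caller.

// sync-stream.ts: subscribe to the SSE endpoint with the native EventSource.
const stream = new EventSource("/api/sync/stream");

for (const name of ["run-started", "log-started", "log-updated", "run-completed", "run-failed"]) {
  stream.addEventListener(name, (event) => {
    // Payloads arrive as JSON strings on the event's data field.
    const payload = JSON.parse((event as MessageEvent).data);
    console.log(`sync event: ${name}`, payload);
  });
}

stream.onerror = () => {
  // EventSource reconnects automatically; the dashboard re-hydrates from
  // /api/sync/status whenever the connection re-opens.
  console.warn("sync stream interrupted, browser will retry");
};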

Activity filters and optional people chips

The Activity view’s advanced filters now distinguish between applied users and people who are only conditionally relevant to the selected attention type.

  • When you combine an attention (주의) type with one or more members (구성원), only the roles that matter for that attention are sent to the API (e.g., 정체된 Backlog 이슈 (stalled Backlog issues) → maintainerIds). Other people fields keep the same users as gray chips but stay out of the query.
  • Gray chips represent a “selected but conditional” state. They persist when you reload or restore a saved filter, but they do not affect the query until you remove them or reselect them to promote them back into blue chips.
  • Clearing the attention (주의) filter removes every gray chip and returns the people filters to the regular applied state (blue chips).
  • Even when you pick multiple members (구성원), each person contributes only the roles they satisfy, combined with OR logic, so the team stays focused on items they truly own.
  • While an attention type and members are both selected, the six advanced people-filter (사람 필터) inputs become read-only with a helper message; the dashboard manages their chips automatically.
  • Role mapping by attention:
    • 정체된 Backlog 이슈 (stalled Backlog issues) → maintainerIds blue, all other people-filter fields empty.
    • 정체된 In Progress 이슈 (stalled In Progress issues) → assigneeIds blue, authorIds gray, the remaining four people-filter fields empty.
    • 리뷰어 미지정 PR (PRs with no reviewer assigned) → maintainerIds, authorIds blue, other people-filter fields empty.
    • 리뷰 정체 PR (PRs with stalled reviews) → maintainerIds, reviewerIds blue, other people-filter fields empty.
    • 머지 지연 PR (merge-delayed PRs) → assigneeIds (if any) else maintainerIds blue, other people-filter fields empty.
    • 응답 없는 리뷰 요청 (unanswered review requests) → reviewerIds blue, all other people-filter fields empty.
    • 응답 없는 멘션 (unanswered mentions) → mentionedUserIds blue, all other people-filter fields empty.
    • Conflicting attentions (e.g., 정체된 Backlog 이슈 + 응답 없는 멘션) degrade the overlapping people-filter roles to gray chips so the UI reflects that the selection is optional for that combination.

Optional GitHub to-do project integration

If you want issue status, priority, start date, or other metadata to mirror a specific GitHub Projects (beta) board, set TODO_PROJECT_NAME to the project’s name (case-insensitive match). When provided, the sync pipeline will:

  • import project status history so dashboard filters (for example Todo, In Progress) reflect the project board
  • lock issue statuses that are managed by the project and surface project field values (priority, start date, etc.) in the UI

Leaving TODO_PROJECT_NAME unset (or blank) is also valid. In that case the dashboard relies solely on statuses stored in the local database—manual updates made from the dashboard continue to work, while project-driven fields and locks are simply disabled. This is useful if you track issue progress entirely inside the dashboard or maintain multiple GitHub projects and only want dashboard-side state.

Business time & holiday handling

  • The dashboard treats weekends as non-working time across all business-day and business-hour calculations (see the sketch after this list).
  • Organization-wide metrics (e.g., the “정체된 Backlog/In Progress 이슈”, “리뷰어 미지정 PR”, “리뷰 정체 PR”, and “머지 지연 PR” lists on Activity/Follow-ups, and the average resolution/work times on Analytics/People) rely on the organization holiday calendar codes configured in sync_config. Those dates are combined with weekends to form the working calendar.
  • User-specific wait metrics (the “응답 없는 리뷰 요청” and “응답 없는 멘션” lists on Activity/Follow-ups, and review response times on Analytics/People) first try to load the reviewer’s personal holiday preferences and time-off entries. When a person supplies their own calendar or personal days, only those dates (plus weekends) are excluded from response-time calculations.
  • If a person has not configured any personal calendar codes, the system falls back to the organization holiday set so that users without preferences still benefit from holiday-aware metrics.
  • Personal time entries expand multi-day ranges into individual dates, ensuring that longer vacations are excluded consistently without requiring day-by-day entries.
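
As a small illustration of the working-calendar rule (weekends plus holiday dates are excluded), here is a simplified sketch; the dashboard's actual metric code may differ, and the date handling below is UTC-only.

// Count working days between two dates, excluding weekends and an explicit
// set of holiday dates (ISO yyyy-mm-dd strings). Illustration only.
function countBusinessDays(start: Date, end: Date, holidays: Set<string>): number {
  let count = 0;
  const cursor = new Date(start);
  while (cursor.getTime() <= end.getTime()) {
    const day = cursor.getUTCDay(); // 0 = Sunday, 6 = Saturday
    const iso = cursor.toISOString().slice(0, 10);
    if (day !== 0 && day !== 6 && !holidays.has(iso)) {
      count += 1;
    }
    cursor.setUTCDate(cursor.getUTCDate() + 1);
  }
  return count;
}

// Example: a reviewer with a personal holiday on 2025-01-01.
const waited = countBusinessDays(
  new Date("2024-12-30T00:00:00Z"),
  new Date("2025-01-03T00:00:00Z"),
  new Set(["2025-01-01"]),
);
console.log(waited); // 4 (Jan 1 is skipped as a holiday)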

Database backups

The dashboard includes a lightweight backup scheduler so operators can snapshot the PostgreSQL database without leaving the UI.

Configuration

Set the following environment variables alongside DATABASE_URL:

  • DB_BACKUP_DIRECTORY: Absolute path where .dump files are written. Must be writable by the dashboard process. Default: backups in the project root.
  • DB_BACKUP_RETENTION: Maximum number of successful backups to keep. Older dumps (and their metadata) are deleted automatically. Default: 3.

The scheduler reads the user-configured hour and timezone from sync_config. Administrators can adjust the schedule from Dashboard → 동기화 → 백업 (Sync → Backup) or via PATCH /api/sync/config { "backupHour": number }.
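
For reference, a minimal sketch of that HTTP call; the backupHour field is the one documented above, and the cookie handling is an assumption since the endpoint requires an authenticated admin session.

// Set the daily backup hour (interpreted in the timezone stored in sync_config).
const response = await fetch("http://localhost:3000/api/sync/config", {
  method: "PATCH",
  headers: {
    "Content-Type": "application/json",
    // Hypothetical: supply the signed session cookie of an admin user.
    Cookie: process.env.DASHBOARD_SESSION_COOKIE ?? "",
  },
  body: JSON.stringify({ backupHour: 3 }),
});
if (!response.ok) {
  throw new Error(`config update failed with HTTP ${response.status}`);
}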

What happens during a backup

  1. A record is inserted into db_backups with status running.

  2. pg_dump --format=custom writes a file named db-backup-YYYYMMDD-HHMMSS.dump under DB_BACKUP_DIRECTORY.

  3. On success the record is updated to success, the size is stored, and a metadata sidecar file (.meta.json) is written next to the dump, for example:

    {
      "status": "success",
      "trigger": "manual",
      "startedAt": "2025-10-25T22:57:59.000Z",
      "completedAt": "2025-10-25T22:58:02.000Z",
      "sizeBytes": 675028992,
      "createdBy": "admin-user"
    }

    The metadata makes it easy to verify a dump even if you move it to another machine—the UI reads the sidecar and falls back to filesystem timestamps if it is missing.

  4. When retention is exceeded both the dump and sidecar are removed.

If pg_dump fails the record becomes failed, the dump is left in place for inspection, and no metadata file is written.

Restoring a backup

Use 복구 (Restore) in the backup panel or call POST /api/backup/<restoreKey>/restore. The handler:

  1. Drops and recreates the public schema.
  2. Runs pg_restore --clean --if-exists with the selected dump.
  3. Marks the record as restored and refreshes the scheduler.

Because the schema is rebuilt during restore, ensure you do not have unsaved changes in the target database. Copy both the .dump and .meta.json files if you need to move backups between environments.

Concurrency with sync runs

Backups and sync runs share a cooperative locking system:

  • Backups acquire a withJobLock("backup") lock for the duration of pg_dump. The job waits if a sync run is still in progress. As soon as the sync run finishes the queued backup starts automatically.
  • Sync runs acquire withJobLock("sync") (inside the sync pipeline). When a backup is in progress, new sync runs wait for the backup to finish.
  • Restore operations use a separate lock withJobLock("restore"). While a restore is running, both sync runs and new backups will wait until the restore completes.

This guarantees only one of the long-running jobs (backup, restore, sync) touches the database at a time, preventing pg_dump from racing with writes. See What each sync run does for the full pipeline—GitHub data collection and the subsequent post-processing steps (automation, snapshot, caches) run sequentially. Administrators can manually re-run automation and cache refresh from the Sync tab if needed (the snapshot remains automatic-only).
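
As a rough illustration of the queuing behaviour, here is an in-memory sketch of the idea; the project's withJobLock is part of the sync/backup services and coordinates real jobs, so treat this only as a mental model.

// Cooperative job lock sketch: one long-running job (sync, backup, restore)
// at a time. Later callers wait for earlier jobs to finish, which is the
// queuing described above; the real lock is shared across the service.
let tail: Promise<void> = Promise.resolve();

function withJobLock<T>(job: string, work: () => Promise<T>): Promise<T> {
  const result = tail.then(() => {
    console.log(`${job} acquired the lock`);
    return work();
  });
  // Keep the chain alive even if a job fails so later jobs still run.
  tail = result.then(
    () => undefined,
    () => undefined,
  );
  return result;
}

// A queued backup starts automatically once the in-flight sync finishes.
void withJobLock("sync", async () => {
  /* run the sync pipeline */
});
void withJobLock("backup", async () => {
  /* run pg_dump */
});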

Quality Tooling

  • pnpm run lint — Biome linting (Rust binary via CI and local pnpm package)
  • pnpm run format — Biome formatter (writes changes)
  • biome ci --error-on-warnings . — direct CLI run that combines Biome's formatter and linter checks without writing files; it subsumes pnpm run lint (lint-only wrapper) while differing from pnpm run format, which applies formatting edits instead of reporting them
  • pnpm run typecheck — tsc --noEmit
  • pnpm run test — Vitest unit and component tests
  • pnpm run test:db — PostgreSQL integration suite scoped by vitest.db.config.ts to *.db.test.ts specs; each spec imports tests/helpers/postgres-container to launch a disposable PostgreSQL 16 Testcontainer, injects its connection URI into DATABASE_URL, runs ensureSchema() so tables exist, and stops the container once the suite finishes. Ensure Docker (or Colima on macOS) is running first, and keep each spec responsible for cleaning its tables (for example with TRUNCATE) to stay isolated. A sketch of such a spec follows this list.
  • pnpm run test:watch — watch mode
  • pnpm run test:e2e — Playwright browser tests (requires the Playwright browser install step above); uses dedicated test harness routes under /test-harness/* such as:
    • SettingsView — http://localhost:3000/test-harness/settings
    • SyncControls — http://localhost:3000/test-harness/sync
    • Session bootstrap — http://localhost:3000/test-harness/auth/session
    • Analytics filters — http://localhost:3000/test-harness/analytics
    • People insights — http://localhost:3000/test-harness/people
    • Dashboard tabs — http://localhost:3000/test-harness/dashboard-tabs
  • pnpm run ci — sequentially runs biome ci --error-on-warnings ., pnpm run typecheck, pnpm run test, and pnpm run test:db
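
For orientation, a sketch of what a *.db.test.ts spec in the shape described above might look like; the helper import path, the pg client, and the users table columns are assumptions.

// users.db.test.ts: sketch of a DB-backed spec. The helper import follows the
// description of pnpm run test:db above; exact paths and schema are guesses.
import "../helpers/postgres-container";
import { afterAll, beforeAll, beforeEach, expect, test } from "vitest";
import { Pool } from "pg";

let pool: Pool;

beforeAll(() => {
  // The helper has already started a PostgreSQL 16 Testcontainer, pointed
  // DATABASE_URL at it, and run ensureSchema() so tables exist.
  pool = new Pool({ connectionString: process.env.DATABASE_URL });
});

beforeEach(async () => {
  // Each spec cleans its own tables to stay isolated.
  await pool.query("TRUNCATE TABLE users CASCADE");
});

afterAll(async () => {
  await pool.end();
});

test("inserts and reads back a user", async () => {
  await pool.query("INSERT INTO users (id, login) VALUES ($1, $2)", ["u1", "octocat"]);
  const { rows } = await pool.query("SELECT login FROM users WHERE id = $1", ["u1"]);
  expect(rows[0]?.login).toBe("octocat");
});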

Continuous Integration

The GitHub Actions workflow (.github/workflows/ci.yml) starts a Postgres 16 service, generates an ephemeral SESSION_SECRET, and expects the following repository secrets to be configured:

  • OAUTH_CLIENT_ID
  • OAUTH_CLIENT_SECRET
  • OAUTH_ALLOWED_ORG

These values are used during end-to-end tests to exercise the GitHub OAuth flow. Update them per environment as needed.

Docker

Builds use the standalone Next.js output for small production images. The base Node.js image in Dockerfile is pinned by digest to ensure reproducible builds.

cp .env.example .env          # if you need a starting point
vim .env                      # set GitHub OAuth creds, DATABASE_URL, etc.
./infra/nginx/certs/generate.sh
docker compose up --build

Docker Compose reads environment values from .env in the project root. This file is distinct from .env.local, which is used only by the local Next.js dev server.

Build & export a production image

When deploying to a linux/amd64 host (for example, from an Apple Silicon workstation), build the image with docker buildx, verify it behind the HTTPS proxy, and ship it as a tarball.

  1. Build for the target platform and tag the image:

    docker buildx build --platform linux/amd64 -t github-dashboard:0.2.0 --load .

    Replace github-dashboard:0.2.0 with the tag you plan to deploy. The command automatically uses your default builder (create one with docker buildx create --use if missing). --load imports the built image into the local Docker daemon so the tar export in the next step works.

  2. Optionally smoke-test locally over HTTPS:

    ./infra/nginx/certs/generate.sh   # creates local.crt/local.key for localhost
    docker compose up --build

    Visit https://localhost (accept the self-signed certificate if prompted), then stop the stack with docker compose down. Provide GITHUB_TOKEN and other secrets via .env before running.

  3. Export the image to a tarball and transfer it to the server (copy your production TLS certificate and key alongside the tarball, or re-run the generation script there with the appropriate host names):

    docker save github-dashboard:0.2.0 -o github-dashboard-0.2.0.tar
    scp github-dashboard-0.2.0.tar user@server:/path/to/github-dashboard/

    The bundled script issues certificates for localhost; replace infra/nginx/certs/local.crt and local.key with files signed for your real domain before serving traffic publicly.

  4. On the server, load the tarball, install the HTTPS certificate, and restart the containers (adjust the commands to your setup):

    docker compose down
    docker load -i /path/to/github-dashboard/github-dashboard-0.2.0.tar
    docker compose up -d --force-recreate

    Ensure /path/to/github-dashboard/infra/nginx/certs/local.crt and /path/to/github-dashboard/infra/nginx/certs/local.key contain the certificate and key signed for your server before restarting the stack. The --force-recreate flag tears down and rebuilds all containers even when the configuration or images have not changed, guaranteeing that the freshly loaded image is used.

    If you manage the container manually, use docker stop <container> followed by docker run ... github-dashboard:0.2.0 instead of the Compose commands.

The nginx proxy listens only on HTTPS (https://localhost) and redirects any HTTP attempts to the secure endpoint.

  • Node app: internal on port 3000 (reachable via the proxy only)
  • nginx proxy: exposes port 443 (HTTPS) and forwards traffic to the app container. Certificates live in infra/nginx/certs/.

Deployment modes

Using Docker Compose

Docker Compose keeps the application and nginx proxy definitions in docker-compose.yml, letting you run the full stack with a single command:

services:
  app:
    build: .
    env_file:
      - .env
    environment:
      NODE_ENV: production
      PORT: 3000
      HOSTNAME: 0.0.0.0
      GITHUB_TOKEN: ${GITHUB_TOKEN:-}
      GITHUB_ORG: ${GITHUB_ORG:-}
      DATABASE_URL: ${DATABASE_URL}
    restart: unless-stopped

  proxy:
    image: nginx:1.28.0
    depends_on:
      - app
    ports:
      - "443:443"
    volumes:
      - ./infra/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./infra/nginx/certs:/etc/nginx/certs:ro
    restart: unless-stopped

  1. Copy .env, TLS assets (infra/nginx/certs/*), and either the source tree or a pre-built image tarball to the server.
  2. Run docker compose up -d --build the first time (or --force-recreate after loading a new tarball) to start both services. Compose injects the environment values from .env and mounts the nginx configuration automatically.

Using docker run for each container

If you prefer to manage containers manually, mirror the compose setup by starting the app and proxy containers yourself:

  1. Load the image and create a shared network:

    docker load -i github-dashboard-0.2.0.tar
    docker network create github-dashboard-net || true
  2. Launch the application container with the required environment variables:

    docker stop github-dashboard-app 2>/dev/null || true
    docker rm github-dashboard-app 2>/dev/null || true
    docker run -d \
      --name github-dashboard-app \
      --env-file .env \
      --network github-dashboard-net \
      github-dashboard:0.2.0
  3. Start the nginx proxy container and expose HTTPS:

    docker stop github-dashboard-proxy 2>/dev/null || true
    docker rm github-dashboard-proxy 2>/dev/null || true
    docker run -d \
      --name github-dashboard-proxy \
      --network github-dashboard-net \
      -p 443:443 \
      -v $(pwd)/infra/nginx/default.conf:/etc/nginx/conf.d/default.conf:ro \
      -v $(pwd)/infra/nginx/certs:/etc/nginx/certs:ro \
      nginx:1.28.0

Maintain the same .env file (or equivalent secrets store) for both containers and restart them with docker restart <name> when shipping updates.

Project Structure

src/app/             → Next.js App Router routes, layouts, and API handlers
src/components/      → Shared UI components (shadcn/ui + custom)
src/lib/             → Utilities (db, auth, GitHub client, sync scheduler)
src/lib/auth/        → GitHub OAuth, session, and membership helpers
docs/                → Setup guides (e.g., GitHub OAuth registration)
tests/               → Vitest specs, Playwright E2E suites, and helpers
public/              → Static assets served by Next.js
infra/               → Docker/nginx assets for HTTPS proxying

Utility scripts

  • scripts/backfill-social-signals.ts — one-off helper to rebuild the activity snapshot and social-signal caches. Run it when you want a full refresh outside the usual sync pipeline; add --signals-only to rebuild only the social-signal tables.
  • scripts/backfill-node-ownership.ts — fixes “card shows repo A but link opens repo B” situations by re-fetching the canonical repository/URL for each affected issue or discussion and upserting it back into PostgreSQL. The default invocation (pnpm run backfill:ownership) processes up to 500 candidates per run; prepend --dry-run to log what would change without writing, pass --limit <n> or --chunk-size <n> to tune throughput, and use --id <nodeID> (repeatable) to inspect or repair specific GitHub nodes.
  • scripts/db/backup.mjs — wraps pg_dump so you can produce database backups with a single command. Supports --dir, --label, --format, and --pg-dump flags for customizing the output location and filename.
  • scripts/run-db-tests.mjs — verifies that PostgreSQL (or Testcontainers) is available, then runs vitest with vitest.db.config.ts. If no database is reachable it exits early, letting CI/local workflows skip DB-backed tests.

Environment

Environment variables are parsed through src/lib/env.ts:

  • GITHUB_TOKEN: GitHub token with read:user and repository metadata scopes
  • GITHUB_ORG: Organization login to target for data collection
  • GITHUB_OAUTH_CLIENT_ID: GitHub OAuth App client identifier
  • GITHUB_OAUTH_CLIENT_SECRET: GitHub OAuth App client secret
  • GITHUB_ALLOWED_ORG: GitHub organization slug allowed to sign in; non-admin users stay blocked until allowed teams or members are configured in Settings
  • GITHUB_ALLOWED_BOT_LOGINS: Comma-separated GitHub logins that may sign in even if they are not organization members (for example octoaide)
  • DASHBOARD_ADMIN_IDS: Comma-separated GitHub logins or node IDs with admin privileges
  • APP_BASE_URL: Absolute origin used to build OAuth callback URLs
  • SESSION_SECRET: Secret key for signing session cookies
  • DATABASE_URL: PostgreSQL connection string
  • SYNC_INTERVAL_MINUTES: Interval for automatic incremental sync (optional; default 60)
  • TODO_PROJECT_NAME: Optional GitHub Projects board name to mirror issue metadata

Define them in .env.local for local development or provide them via your hosting platform. Docker Compose reads from .env in the project root.
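
For orientation, a sketch of what the parsing layer might look like. The real src/lib/env.ts may be organized differently; zod is only an assumed validation library here, and the required/optional split is a best guess except where the notes above state it (for example, GITHUB_TOKEN is optional for the legacy utilities).

// env.ts-style parsing sketch (assumed zod-based; variable names match the
// list above, required/optional status is a best guess).
import { z } from "zod";

const envSchema = z.object({
  GITHUB_TOKEN: z.string().optional(), // only the legacy collection utilities need it
  GITHUB_ORG: z.string().min(1),
  GITHUB_OAUTH_CLIENT_ID: z.string().min(1),
  GITHUB_OAUTH_CLIENT_SECRET: z.string().min(1),
  GITHUB_ALLOWED_ORG: z.string().min(1),
  GITHUB_ALLOWED_BOT_LOGINS: z.string().optional(),
  DASHBOARD_ADMIN_IDS: z.string().min(1),
  APP_BASE_URL: z.string().url(),
  SESSION_SECRET: z.string().min(1),
  DATABASE_URL: z.string().min(1),
  SYNC_INTERVAL_MINUTES: z.coerce.number().int().positive().default(60),
  TODO_PROJECT_NAME: z.string().optional(),
});

export const env = envSchema.parse(process.env);
// DASHBOARD_ADMIN_IDS is comma-separated; split it once here for convenience.
export const dashboardAdminIds = env.DASHBOARD_ADMIN_IDS.split(",").map((id) => id.trim());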
