
Supabase Log Poller

A lightweight Python "sidecar" agent that polls Supabase project logs (Postgres, Auth, Edge Functions, etc.) via the Management API and writes them to a local JSON file. This file is intended to be tailed by Grafana Alloy (or Promtail) to ship logs to Grafana Loki.

Architecture

  1. Poller (Python): Runs every minute, scheduled by a systemd timer (or cron, if you prefer). Fetches new logs using the last seen timestamp (cursor). See LOG_SOURCES.md for a list of all supported log types.
  2. State: Keeps track of the last fetched timestamp in supabase_log_state.json to prevent duplicates.
  3. Output: Appends structured JSON logs to supabase_all.json.
  4. Shipper (Alloy): Reads supabase_all.json and pushes to Grafana Cloud.
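
The cursor logic in steps 1–3 can be sketched roughly as follows. This is an illustrative sketch, not the repository's actual code: the function names, the `last_timestamp` state key, and the per-entry `timestamp` field are all assumptions.

```python
import json
import os


def load_cursor(state_path):
    """Read the last-seen timestamp from the state file; 0 means 'fetch everything'."""
    if os.path.exists(state_path):
        with open(state_path) as f:
            return json.load(f).get("last_timestamp", 0)
    return 0


def save_cursor(state_path, ts):
    """Persist the cursor so the next run skips already-fetched logs."""
    with open(state_path, "w") as f:
        json.dump({"last_timestamp": ts}, f)


def append_new(entries, cursor, log_path):
    """Append entries newer than the cursor as JSON lines; return the advanced cursor."""
    fresh = [e for e in entries if e["timestamp"] > cursor]
    with open(log_path, "a") as f:
        for e in fresh:
            f.write(json.dumps(e) + "\n")
    return max((e["timestamp"] for e in fresh), default=cursor)
```

Because only entries strictly newer than the cursor are appended, re-running the poller against overlapping API responses does not produce duplicates.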

Prerequisites

  • Python 3.10+
  • A Supabase Project
  • A Personal Access Token (PAT) from Supabase (not a project API key).
  • Grafana Alloy installed and configured on the host.

Installation

  1. Clone the repository:

    git clone https://github.com/your-username/supabase-log-poller.git
    cd supabase-log-poller
  2. Set up Python Environment:

    python3 -m venv venv
    source venv/bin/activate
    pip install -r requirements.txt
  3. Configuration: Copy the example environment file and edit it:

    cp .env.example .env
    nano .env
    • SUPABASE_PROJECT_REF: The segment after /project/ in your dashboard URL (e.g., for https://supabase.com/dashboard/project/abcdefgh the ref is abcdefgh).
    • SUPABASE_ACCESS_TOKEN: Generate at Supabase Dashboard > Account > Tokens.
    • LOG_FILE_PATH: Where logs will be written (ensure the user has write permissions, e.g., /tmp/supabase_all.json or /var/log/supabase_all.json if permissions allow).
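
A populated .env might look like the following (all values are placeholders; the STATE_FILE_PATH variable is the one referenced in Troubleshooting below):

```dotenv
SUPABASE_PROJECT_REF=abcdefgh
SUPABASE_ACCESS_TOKEN=sbp_xxxxxxxxxxxxxxxxxxxx
LOG_FILE_PATH=/tmp/supabase_all.json
STATE_FILE_PATH=/tmp/supabase_log_state.json
```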

Usage (Systemd User Service)

We use a systemd timer to run the poller every minute as a user service.

  1. Prepare the Files: The systemd configuration files are located in the systemd/ directory of this project.

  2. Install to User Systemd Directory:

    mkdir -p ~/.config/systemd/user/
    cp systemd/supabase-poller.* ~/.config/systemd/user/
  3. Enable and Start the Timer:

    systemctl --user daemon-reload
    systemctl --user enable --now supabase-poller.timer
  4. Verify:

    • Check timer status: systemctl --user list-timers --all
    • Check service logs: journalctl --user -u supabase-poller.service
    • Check output logs: tail -f supabase_all.json (or wherever LOG_FILE_PATH points)
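
For reference, the unit/timer pair shipped in systemd/ will look roughly like this. The paths and script name below are assumptions; adjust WorkingDirectory, EnvironmentFile, and ExecStart to match your clone location and entry point.

```ini
# ~/.config/systemd/user/supabase-poller.service
[Unit]
Description=Supabase log poller

[Service]
Type=oneshot
WorkingDirectory=%h/supabase-log-poller
EnvironmentFile=%h/supabase-log-poller/.env
ExecStart=%h/supabase-log-poller/venv/bin/python poller.py
```

```ini
# ~/.config/systemd/user/supabase-poller.timer
[Unit]
Description=Run the Supabase log poller every minute

[Timer]
OnCalendar=*:*:00
Persistent=true

[Install]
WantedBy=timers.target
```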

Grafana Alloy Configuration

A sample configuration is provided in sample-config.alloy. You can use it as a starting point for your own configuration. It includes blocks for VPS logs, GitHub Runner logs, and Supabase logs.

// Supabase Logs
loki.source.file "supabase_logs" {
  targets    = [
    {
        __path__ = sys.env("LOG_FILE_PATH"),
        job      = "supabase",
    },
  ]
  forward_to = [loki.process.logs_integrations.receiver]
}

// Generic interface for log integrations
loki.process "logs_integrations" {
  // Supabase Logic
  stage.match {
    selector = "{job=\"supabase\"}"
    
    stage.json {
      expressions = {
        ts          = "timestamp",
        msg         = "message",
        log_source  = "source",
        meta        = "metadata",
      }
    }

    stage.timestamp {
      source = "ts"
      format = "RFC3339"
    }

    stage.labels {
      values = {
        service = "log_source",
      }
    }

    stage.output {
      source = "msg"
    }
  }

  forward_to = [loki.write.grafana_cloud_loki.receiver]
}
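
For the stage.json block above to extract anything, each line of the poller's output needs top-level timestamp, message, source, and metadata keys. An illustrative line (values and metadata shape are examples, not guaranteed output):

```json
{"timestamp": "2024-05-01T12:00:00Z", "message": "connection received: host=203.0.113.7", "source": "postgres_logs", "metadata": {}}
```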

Viewing Logs in Grafana

Once the poller is running and generating supabase_all.json, you need to run Grafana Alloy to ship the logs and then use LogQL to view them.

1. Run Grafana Alloy

Ensure your environment variables for Loki are set, then run Alloy pointing to the sample-config.alloy file:

# Set required variables (or add them to your .env)
export LOG_FILE_PATH="$(pwd)/supabase_all.json"
export LOKI_URL="your_loki_endpoint"
export LOKI_USERNAME="your_loki_user_id"
export LOKI_PASSWORD="your_grafana_cloud_api_key"

# Run Alloy
alloy run sample-config.alloy

2. Query Logs in Grafana

  1. Log in to your Grafana instance.
  2. Navigate to Explore (compass icon).
  3. Select your Loki data source.
  4. Use the following LogQL queries:
  • All Supabase logs:
    {job="supabase"}
    
  • Specific log source (e.g., Postgres) — note the label is service, as set by stage.labels in the sample Alloy config:
    {service="postgres_logs"}
    
  • PostgREST API logs:
    {service="postgrest_logs"}
    
  • Edge network / Proxy logs:
    {service="edge_logs"}
    
  • Auth logs:
    {service="auth_logs"}
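
These selectors compose with standard LogQL line filters, for example:

```logql
# Supabase logs containing the word "error"
{job="supabase"} |= "error"

# Case-insensitive regex match on "fatal"
{job="supabase"} |~ "(?i)fatal"
```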
    

Note: Since this poller backfills logs, ensure your time range in Grafana (top right) is set to include the time when the events actually occurred (e.g., "Last 1 hour" or "Last 6 hours").

Troubleshooting

  • "Error: SUPABASE_PROJECT_REF... must be set": Ensure your .env file is populated and EnvironmentFile in supabase-poller.service points to the correct absolute path.
  • Permission Denied: Check if the user running the systemd service has write access to LOG_FILE_PATH and STATE_FILE_PATH.
  • Duplicates: Delete supabase_log_state.json to reset the cursor (this will re-fetch the last few minutes of logs).
