
🏀 Backoffice

Backoffice gives your AI assistant (Claude.ai, ChatGPT, or any other) its own Linux machine that keeps working even when you're not in a conversation.

  • Use any CLI: no MCPs needed.
  • Run cron jobs: schedule tasks between conversations.
  • Persist data: files, memory, and state survive across sessions.
  • Store credentials securely: API keys live on the machine, not in the chat.

Why

AI assistants like Claude and ChatGPT can use MCPs to access external services, but the library of available MCPs is limited. If a service doesn't have an MCP, you're stuck searching for third-party providers.

This is what Backoffice aims to solve. It gives Claude, ChatGPT, or any other AI assistant app a remote Linux machine, so the assistant has a command line it can use with minimal restrictions.

How I use it

I use Backoffice for things like working with Strava, managing Google Tasks, summarizing video content, coding on projects when I'm on the go, and making restaurant reservations.

These are all things the Claude app can't do natively, would struggle with (because there's no persistent state), or could only do if I leave my laptop on (through Cowork).

I could use a personal assistant like OpenClaw or my own Greg instead of Backoffice, but I much prefer the Claude app to Telegram or Slack for AI work. Claude also improves quickly, and when I built Greg most of the work felt like replicating what Claude could already do. Backoffice takes a different approach: instead of rebuilding what Claude can already do, I add only what it can’t do.

Quick Start

First:

Deploy on Railway

Then:

  1. Add https://your-app.up.railway.app/mcp as an MCP in your favorite AI assistant.
  2. Your assistant will prompt you for a password; this is a random string that can be found in the Railway service logs.
  3. Start a new conversation with your assistant. It will now have access to the remote machine through the Backoffice MCP.

The one-click install sets up this repo as a Railway app, mounts a volume at /data to persist data (see "Persisting Data" below), and configures a health check on GET /version so that Railway monitors the health of the service through that endpoint.

Manual Setup

🤖 Option 1: Ask your coding agent

```shell
claude "Read this: https://kvendrik.com/backoffice/AGENT.md"
```

🙋‍♂️🙋‍♀️ Option 2: DIY

1. Clone

```shell
git clone git@github.com:kvendrik/backoffice.git
```

2. Deploy to Railway

Or any other remote machine host, such as Fly.io. On Railway this works out of the box: the server reads RAILWAY_PUBLIC_DOMAIN automatically. On other hosts, set PUBLIC_BASE_URL to your public origin.

```shell
brew install railway
railway login
railway up
```
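For a non-Railway host, the extra step is telling the server its public origin. A minimal sketch, assuming a placeholder domain (substitute your own):

```shell
# PUBLIC_BASE_URL tells Backoffice its public origin on hosts that don't
# set RAILWAY_PUBLIC_DOMAIN. The URL below is a placeholder.
export PUBLIC_BASE_URL="https://backoffice.example.com"
echo "$PUBLIC_BASE_URL"  # → https://backoffice.example.com
```

Set this in your host's environment configuration so it survives restarts.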

3. Connect

  1. Add https://your-app.up.railway.app/mcp as an MCP.
  2. You'll be prompted for a passphrase. Backoffice logs it on startup, so you can find it in the startup logs.

4. Use it

Start a new conversation with your assistant. It will now have access to the remote machine through the Backoffice MCP.

Authentication

Backoffice comes with full OAuth. Apps like Claude.ai handle the entire flow automatically; no client ID or secret needs to be configured manually.

The OAuth consent screen requires a passphrase before issuing tokens. A passphrase is auto-generated on startup and printed to stdout. Set AUTH_PASSPHRASE to use your own.
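If you'd rather pin the passphrase than read it from the logs, generate a strong one yourself and set it as AUTH_PASSPHRASE. A sketch using openssl (how Backoffice generates its own default is not specified here):

```shell
# Generate a random 32-character hex string to use as AUTH_PASSPHRASE.
openssl rand -hex 16
```

Set the resulting value as the AUTH_PASSPHRASE environment variable on your host so it never appears in logs.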

OAuth state is in-memory only. Tokens are lost on restart or redeploy, so you will need to re-authenticate after each deploy.

Environment variables

| Variable | Default | Description |
| --- | --- | --- |
| `AUTH_PASSPHRASE` | (random, logged on startup) | Passphrase required on the OAuth consent screen. Set this to a strong secret so it never appears in logs. |
| `ALLOWED_REDIRECT_URI_DOMAINS` | `claude.ai` | Comma-separated list of domains that OAuth clients are allowed to register redirect URIs for. Registrations with a `redirect_uri` on a domain not in this list are rejected. Set to `claude.ai,localhost` to also allow local clients. |
| `USE_MCP_TOKEN_AUTH` | `false` | Set to `1` to replace OAuth with a single static bearer token. Simpler, but no per-client visibility in logs. The token is read from `MCP_TOKEN` or auto-generated and written to `.mcp-token`. |
| `MCP_TOKEN` | (auto-generated) | Static bearer token. Only used when `USE_MCP_TOKEN_AUTH=1`. |
| `PUBLIC_BASE_URL` | (derived from `RAILWAY_PUBLIC_DOMAIN`) | Public origin of the server (e.g. `https://your-app.up.railway.app`). Required on non-Railway hosts. |
| `PORT` | `3000` | Port the server listens on. |

Tools

| Tool | Purpose |
| --- | --- |
| `shell` | Run any bash command on the machine. Working directory and environment persist across calls. Output is capped at 1 MB per stream by default (configurable via `max_output_bytes`). Credentials set via `env_set` are automatically injected. |
| `patch_file` | Apply a structured line-based patch to a file. Useful for targeted edits to specific lines in large files without rewriting the whole thing. |
| `env_set` | Persist an environment variable. Stored on disk and automatically injected into every `shell` call. Use for credentials and API keys; values are not returned to the conversation. |
| `env_delete` | Remove a persisted environment variable. |
| `memory_read` | Read the persistent memory file (`/data/MEMORY.md`). Called at the start of every conversation to recall context from previous sessions. |
| `memory_write` | Write to the persistent memory file. The AI proactively saves anything useful across conversations: installed CLIs, useful paths, environment quirks, user preferences, and how to use specific tools/APIs/services. |
| `memory_append` | Append content to the memory file. The simplest way to add new information; no format overhead, no context-mismatch risk. |
| `memory_patch` | Apply a targeted patch to the memory file using the same `*** Begin Patch` format as `patch_file`. Use for surgical replacements of known stale content. |
| `get_instructions` | Return the full system instructions for the MCP server. The AI can call this if it needs guidance on conventions or tool usage. |
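Under the hood, these tools are invoked over MCP's standard JSON-RPC transport; your assistant handles this for you. As an illustration, a `shell` call on the wire might look roughly like this (the `command` argument name is an assumption about Backoffice's tool schema):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "shell",
    "arguments": { "command": "ls /data" }
  }
}
```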

Security

The server runs as a non-root user (appuser). This means the OS itself enforces what the process can and can't touch.

What's protected:

  • System directories (/usr, /bin, /etc, and so on): root-owned, unwritable
  • App source (/app): root-owned, unwritable

What's writable:

  • /data: persistent volume, owned by appuser
  • /tmp: ephemeral scratch space

Persisting Data

By default Railway spins up a fresh container on every deploy. To persist data add a Volume in your Railway service settings and mount it at /data. The AI is instructed to use /data for memory and credentials (via env_set), so this path matters. See Railway's Volumes docs for details.

Packages installed via bun install -g land in /data/bun, and packages installed via brew install land in /data/homebrew. Both paths are on the persistent volume, so installed tools survive redeploys automatically.
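If a persisted tool isn't found after a redeploy, check that the volume paths are on PATH. A minimal sketch, assuming a conventional bin/ layout under the documented directories (the exact subdirectories are a guess):

```shell
# Put the persistent install locations on PATH so tools installed to the
# /data volume are found. The bin/ subdirectory layout is an assumption.
export PATH="/data/bun/bin:/data/homebrew/bin:$PATH"
echo "$PATH" | cut -d: -f1,2  # → /data/bun/bin:/data/homebrew/bin
```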

Skills

Backoffice ships with a set of AgentSkills. It can also add new skills in /data/skills when you ask it to.

| Skill | What it does |
| --- | --- |
| `llm` | Delegate long-running agentic tasks to run in the background |
| `auto-research` | Autonomous iterative research loops |
| `cron` | Schedule recurring tasks |
| `plan-mode` | Iterate on a planning doc before shipping code via PR |
| `fabric` | Summarize, analyze, or transform content using Fabric prompt patterns |
| `git` / `github` | Commits, branches, PRs via the GitHub CLI |
| `telegram` | Send notifications to Telegram |
| `share` | Generate tokenized download links for files |
| `self-modify` | Change Backoffice's own source, tools, or instructions |
| `self-improve` | Analyze failure logs and auto-fix recurring issues |
| `optimize-memory` | Audit and clean up the memory file |

Fun examples

A few examples of what you can do by combining skills:

  • cron + github + llm: poll GitHub issues on a schedule, process comments through the LLM, and post replies automatically
  • auto-research + fabric + telegram: deep-dive a topic, distill it with Fabric, and ping you the summary on Telegram
  • cron + self-improve + github: a nightly job that analyzes logs, improves Backoffice, and puts up PRs while you sleep

Logs

Backoffice keeps logs of all tool calls (including caller OAuth details) and their results in /data/log.jsonl. Analyzing this file can help you figure out how to improve your setup:

```shell
claude "Here are the logs from the Backoffice Railway server; they show how the AI assistant has been using Backoffice. Tell me what you notice.\n\n---\n\n$(railway ssh -- cat /data/log.jsonl)"
```
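You can also summarize the log locally with standard Unix tools. A sketch that counts calls per tool; the sample lines and the "tool" field name are assumptions about the log schema:

```shell
# Hypothetical sample of what /data/log.jsonl lines might look like
# (the real schema may differ).
cat > /tmp/sample-log.jsonl <<'EOF'
{"tool":"shell","ok":true}
{"tool":"shell","ok":true}
{"tool":"memory_read","ok":true}
EOF

# Count calls per tool without needing jq installed:
# shows "shell" called twice and "memory_read" once.
grep -o '"tool":"[^"]*"' /tmp/sample-log.jsonl | sort | uniq -c | sort -rn
```

Swap the sample file for `railway ssh -- cat /data/log.jsonl` to run this against the real log.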