A self-hosted LLM proxy service combining LiteLLM and Antigravity Manager to enable integration with a wider variety of LLM providers. Designed for enterprises that need centralized control over accounts and API keys. Includes monitoring capabilities and is deployed on Docker Swarm, managed via Portainer.
Docker Swarm Stack (llmproxy.yaml) deployed via Portainer:
- `litellm`: Core proxy server handling API requests
- `antigravity-manager`: Proxy for Anthropic Claude models
- `db`: PostgreSQL database for LiteLLM usage logs and model configurations
- `db-cleanup`: Scheduled job to prune old spend logs (prevents disk exhaustion)
Networks: an internal overlay network for inter-service communication, and an external public network with Traefik for HTTPS routing.
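Once the stack is up, you can confirm that all services and overlay networks are running (the stack name `llmproxy` matches the deployment commands used below; adjust if you deploy under a different name):

```bash
# List the services in the stack and their replica counts
docker stack services llmproxy

# List the overlay networks available to the stack
docker network ls --filter driver=overlay
```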
The stack includes automatic cleanup mechanisms to prevent disk exhaustion:
The db-cleanup service runs weekly (Sunday 3:00 AM) to prune old spend logs:
- Deletes logs older than 90 days (configurable via `DB_CLEANUP_RETENTION_DAYS`)
- Runs `VACUUM ANALYZE` to reclaim disk space
- Uses swarm-cronjob for scheduling
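The pruning logic is roughly equivalent to the SQL below, shown as a sketch only: the `LiteLLM_SpendLogs` table and `startTime` column are assumptions based on LiteLLM's default schema, and the actual db-cleanup script may differ.

```bash
# Rough sketch of the weekly cleanup, run against the LiteLLM database
# (table/column names are assumptions; the real script may differ)
RETENTION_DAYS="${DB_CLEANUP_RETENTION_DAYS:-90}"
psql -U "$DB_USER" -d "$DB_NAME" <<SQL
DELETE FROM "LiteLLM_SpendLogs"
WHERE "startTime" < NOW() - INTERVAL '${RETENTION_DAYS} days';
VACUUM ANALYZE;
SQL
```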
To run cleanup manually:
```bash
docker service scale llmproxy_db-cleanup=1
```

Netdata is configured to limit disk usage to 10GB (monitoring/configs/netdata.conf), providing approximately 2-4 weeks of metrics retention.
```bash
# Install ptctools
uv tool install ptctools --from git+https://github.com/tamntlib/ptctools.git
```

Add the following records to your DNS:
- portainer.example.com
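You can check that the record has propagated before continuing (the hostname below is the placeholder from this example):

```bash
# Confirm the DNS record resolves to the server's IP
dig +short portainer.example.com
```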
```bash
scp portainer/portainer.yaml root@<ip>:/root/portainer.yaml
```

Install Docker on the server following the official guide: https://docs.docker.com/engine/install/ubuntu/#install-using-the-repository
```bash
docker swarm init
LETSENCRYPT_EMAIL=<email> PORTAINER_HOST=<host> docker stack deploy -c /root/portainer.yaml portainer
```

Add the following records to your DNS:
- llm.example.com (for LiteLLM)
- antigravity.example.com (for Antigravity Manager)
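With Portainer deployed and DNS in place, a quick sanity check (the hostname is the placeholder used above):

```bash
# Verify the Portainer tasks are running
docker stack ps portainer

# Verify Traefik is serving the Portainer UI over HTTPS
curl -I https://portainer.example.com
```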
Copy .env.example to .env and fill in the values:
```bash
cp .env.example .env
```

Required environment variables:
- `DB_USER`, `DB_PASSWORD`, `DB_NAME`: PostgreSQL credentials
- `LITELLM_MASTER_KEY`, `LITELLM_HOST`: LiteLLM configuration
- `ANTIGRAVITY_MANAGER_HOST`, `ANTIGRAVITY_MANAGER_API_KEY`, `ANTIGRAVITY_MANAGER_WEB_PASSWORD`: Antigravity Manager configuration
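For reference, an illustrative `.env` with placeholder values (every value below is invented; substitute your own credentials and the hostnames you configured in DNS):

```bash
# PostgreSQL credentials (placeholders)
DB_USER=litellm
DB_PASSWORD=change-me
DB_NAME=litellm

# LiteLLM configuration (placeholders)
LITELLM_MASTER_KEY=sk-change-me
LITELLM_HOST=llm.example.com

# Antigravity Manager configuration (placeholders)
ANTIGRAVITY_MANAGER_HOST=antigravity.example.com
ANTIGRAVITY_MANAGER_API_KEY=change-me
ANTIGRAVITY_MANAGER_WEB_PASSWORD=change-me
```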
```bash
export PORTAINER_URL=https://portainer.example.com
export PORTAINER_ACCESS_TOKEN=<token>

# Set config
ptctools docker config set -n llmproxy_litellm-config-yaml -f 'configs/litellm.yaml'

# Deploy stacks
ptctools docker stack deploy -n llmproxy-data -f 'llmproxy-data.yaml' --ownership team
ptctools docker stack deploy -n llmproxy -f 'llmproxy.yaml' --ownership team
```

The LiteLLM configuration sync and API key scripts are run from litellm_scripts:

```bash
cd litellm_scripts

# Full sync of credentials, models, aliases, and fallbacks
python3 config.py --only credentials,models,aliases,fallbacks --force --prune

# Sync specific components
python3 config.py --only models --force
python3 config.py --only aliases,fallbacks

# Create API key
python3 create_api_key.py
```

Requires environment variables in litellm_scripts/.env: `LITELLM_API_KEY`, `LITELLM_BASE_URL`
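Once a key has been created, it can be used against the proxy's OpenAI-compatible endpoint. A hedged example, assuming the proxy is reachable at the `llm.example.com` placeholder and that a model named `gpt-4o` exists in your config (both are assumptions):

```bash
# Call the proxy's OpenAI-compatible chat completions endpoint with the new key
curl https://llm.example.com/v1/chat/completions \
  -H "Authorization: Bearer sk-your-generated-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```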
| File | Description |
|---|---|
| `llmproxy.yaml` | Docker Stack definition with all services and Traefik labels |
| `configs/litellm.yaml` | LiteLLM internal config (batch writes, connection pools, logging) |
| `litellm_scripts/config.json` | Base config defining providers, model groups, aliases, and fallbacks |
| `litellm_scripts/config.local.json` | Local overrides including API keys (gitignored, deep-merged with config.json) |
| `.env` | Environment variables (DB credentials, hostnames, API keys) |
Create litellm_scripts/config.local.json to add API keys and local overrides:
```json
{
  "providers": {
    "my-provider": {
      "api_key": "sk-your-api-key-here"
    },
    "another-provider": {
      "api_key": "sk-another-key"
    }
  }
}
```

This file is deep-merged with config.json, so you only need to specify overrides (like API keys).
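To preview what the merged configuration looks like, `jq`'s recursive merge operator gives a close approximation (a sketch only: the Python scripts may handle arrays or nulls differently):

```bash
# Recursively merge the base config with the local overrides and print the result
jq -s '.[0] * .[1]' litellm_scripts/config.json litellm_scripts/config.local.json
```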
Individual models can override the provider-level access_groups by specifying access_groups in their model config:
```json
{
  "providers": {
    "my-provider": {
      "access_groups": ["General"],
      "models": {
        "model-a": null,
        "model-b": {
          "access_groups": ["Premium"]
        }
      }
    }
  }
}
```

- `model-a` inherits the provider-level `access_groups`: `["General"]`
- `model-b` uses its own `access_groups`: `["Premium"]`
```bash
# Volume backup/restore (uses Duplicati)
ptctools docker volume backup -v vol1,vol2 -o s3://mybucket
ptctools docker volume restore -i s3://mybucket/vol1           # volume name derived from URI path
ptctools docker volume restore -v vol1 -i s3://mybucket/vol1   # explicit volume name

# Database backup/restore (uses minio/mc for S3); the db volume can also be backed up/restored as above
ptctools docker db backup -c container_id -v db_data \
  --db-user postgres --db-name mydb -o backup.sql.gz
ptctools docker db backup -c container_id -v db_data \
  --db-user postgres --db-name mydb -o s3://mybucket/backups/db.sql.gz
ptctools docker db restore -c container_id -v db_data \
  --db-user postgres --db-name mydb -i backup.sql.gz
ptctools docker db restore -c container_id -v db_data \
  --db-user postgres --db-name mydb -i s3://mybucket/backups/db.sql.gz
```

Netdata monitoring stack with auto-discovery for system, container, and database metrics.
Add the following record to your DNS:
- netdata.example.com
Copy monitoring/.env.example to monitoring/.env and fill in the values:
```bash
cp monitoring/.env.example monitoring/.env
```

Required environment variables:
- `NETDATA_HOST`: Hostname for the Netdata dashboard
- `NETDATA_BASIC_AUTH`: Basic auth credentials (generate with `htpasswd -nb admin yourpassword | sed -e 's/\$/\$\$/g'`)
```bash
cd monitoring

# Upload configs
ptctools docker config set -n monitoring_netdata-conf -f 'configs/netdata.conf'
ptctools docker config set -n monitoring_config-generator-script -f 'scripts/netdata-config-generator.sh'

# Deploy monitoring stack
ptctools docker stack deploy -n monitoring -f 'netdata.yaml' --ownership team
```

Services can self-register for PostgreSQL monitoring by adding Docker labels:
```yaml
deploy:
  labels:
    - netdata.postgres.name=my_database
    - netdata.postgres.dsn=postgresql://user:pass@host:5432/dbname
networks:
  - monitoring
```

The service must also join the monitoring network. See CLAUDE.md for full details.
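To confirm the labels were applied to a deployed service, you can inspect it (the service name `llmproxy_db` here is only an example):

```bash
# Show the service-level labels that Netdata's config generator will discover
docker service inspect --format '{{ json .Spec.Labels }}' llmproxy_db
```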