Auto-optimizations for postgres to maximize speed and utilization of host hardware.
autopg should be a direct replacement for using the postgres docker image in your architecture - be that Docker, Kubernetes, etc.
For example, in your `docker-compose.yml` file, add the following:
```yaml
services:
  postgres:
    image: ghcr.io/piercefreeman/autopg:pg16-latest
    ports:
      - 5432:5432
```

We use reasonable system defaults when you launch without any env variables: we sniff the docker container for its host machine's hardware and use that to generate the `postgresql.conf` file. But if you want to customize your specific deployment, we support two methods:
- Env overrides for autopg inputs
| Env Variable | Default | Values | Description |
|---|---|---|---|
| `AUTOPG_DB_TYPE` | `WEB` | `WEB`, `OLTP`, `DW`, `DESKTOP`, `MIXED` | What kind of application will be using the db |
| `AUTOPG_TOTAL_MEMORY_MB` | None | Integer | Total memory of the host in MB |
| `AUTOPG_CPU_COUNT` | None | Integer | Number of CPUs on the host |
| `AUTOPG_NUM_CONNECTIONS` | `100` | Integer | Max number of concurrent connections to the db |
| `AUTOPG_PRIMARY_DISK_TYPE` | None | `SSD`, `HDD`, `SAN` | Type of the primary disk |
| `AUTOPG_ENABLE_PG_STAT_STATEMENTS` | `true` | `true`, `false` | Enable pg_stat_statements extension for query statistics |
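
For example, pinning the hardware inputs in a compose file instead of relying on sniffing might look like this (the specific values here are illustrative, not recommendations):

```yaml
services:
  postgres:
    image: ghcr.io/piercefreeman/autopg:pg16-latest
    environment:
      AUTOPG_DB_TYPE: OLTP
      AUTOPG_TOTAL_MEMORY_MB: "8192"   # 8 GB host
      AUTOPG_CPU_COUNT: "4"
      AUTOPG_NUM_CONNECTIONS: "200"
      AUTOPG_PRIMARY_DISK_TYPE: SSD
```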
- Existing `postgresql.conf` file

Mount a `postgresql.conf` file into the container at `/etc/postgresql/postgresql.conf`. Any values explicitly provided in the `postgresql.conf` file will override the values generated by autopg. We'll also merge in any values from this file that autopg does not support directly, so this is a great way to add additional custom settings.
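
A minimal compose sketch of that mount (the local path `./postgresql.conf` is an assumption; use wherever your file lives):

```yaml
services:
  postgres:
    image: ghcr.io/piercefreeman/autopg:pg16-latest
    volumes:
      - ./postgresql.conf:/etc/postgresql/postgresql.conf
```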
We build images following `{postgres_version}-{autopg_version}` tags. Use this table to find your desired version:
| Postgres Version | Autopg Version | Tag |
|---|---|---|
| 18 | latest | autopg:18-latest |
| 17 | latest | autopg:17-latest |
| 16 | latest | autopg:16-latest |
| 15 | latest | autopg:15-latest |
| 14 | latest | autopg:14-latest |
Sequential scans can absolutely kill the performance of your webapp, since they require the database engine to iterate through all of your table's data instead of fetching the relevant rows through a much faster index.
We automatically configure your database with pg_stat_statements, which transparently captures the queries you run against the database. It puts the results in a regular Postgres view, so you can aggregate the stats like you do with any other Postgres data. While you're free to log in as an admin user and query these stats yourself, we bundle a simple webapp to visualize these logs. For security this is disabled by default; if you want to enable it (which should only be done in firewalled deployments), you can run:
```yaml
autopg:
  environment:
    AUTOPG_ENABLE_WEBAPP: true
    AUTOPG_WEBAPP_HOST: 0.0.0.0
    AUTOPG_WEBAPP_PORT: 8000
```

This provides an interface that currently:
- Reports sequential scans and the queries to reproduce
- Reports on the current size of the indexes
- Shows the average and aggregate time spent on different queries
- Shows currently running queries
- Recommends an `EXPLAIN ANALYZE` command you can run to inspect the query plan
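
If you'd rather pull the raw numbers yourself, a typical query against the `pg_stat_statements` view looks like this (the timing column names assume PostgreSQL 13+, where they are `total_exec_time` / `mean_exec_time`):

```sql
-- Top 10 statements by cumulative execution time
SELECT query,
       calls,
       total_exec_time AS total_ms,
       mean_exec_time  AS mean_ms
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```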
From there you can create indexes yourself for the most problematic queries. I suggest running the `EXPLAIN ANALYZE` to see how the engine is executing the request, then creating an index to target the slow query, then finally confirming the improvement with another analyze.
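
That loop might look like the following, using a hypothetical `orders` table with a `customer_id` column:

```sql
-- 1. Inspect the plan; a "Seq Scan" node on a large table signals the problem
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;

-- 2. Create an index targeting the slow predicate
--    (CONCURRENTLY avoids blocking writes while the index builds)
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders (customer_id);

-- 3. Re-run to confirm the planner now chooses an index scan
EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = 42;
```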
The algorithm is a direct Python conversion of pgtune. If you notice any discrepancies between the output of the two tools, please report them in Issues (or better yet, add a test case).
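
As a flavor of what these heuristics look like, pgtune's well-known rule of thumb for `shared_buffers` is roughly a quarter of total RAM for most workload types. The sketch below is illustrative only, not autopg's actual code:

```python
def shared_buffers_mb(total_memory_mb: int) -> int:
    """Illustrative pgtune-style heuristic: shared_buffers ~= RAM / 4.

    A simplified sketch for exposition; autopg's real logic handles
    workload types, caps, and unit rounding.
    """
    return total_memory_mb // 4


if __name__ == "__main__":
    # An 8 GB host would get roughly a 2 GB shared_buffers setting.
    print(shared_buffers_mb(8192))
```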
To get started with local development:

```bash
uv sync
uv run autopg
```

To test the docker build pipeline locally, run:
```bash
docker build --build-arg POSTGRES_VERSION=16 -t autopg .
docker run -e POSTGRES_USER=test_user -e POSTGRES_PASSWORD=test_password autopg
```

If you'd like to test out our analysis webapp, you can boot up the benchmarks. More info is available in that README.
```bash
cd benchmarks
docker compose up
```

We have unit tests for the logic and integration tests for the docker image.
```bash
# Unit tests
uv run pytest -vvv

# Integration tests
uv run pytest -vvv -m "integration"
```

- Right now we write the optimization logic in Python, so our postgres container relies on having a Python interpreter installed. This adds a bit of space overhead and is potentially a security risk. We'd rather bundle a compiled binary that serves the same purpose.

