Description
Hi! We are using pg_repack version 1.5.2:
pg_repack | 1.5.2 | public | Reorganize tables in PostgreSQL databases with minimal locks
We ran into an issue where repack failed during the swap step:
DEBUG: ---- swap ----
ERROR: query failed: ERROR: canceling statement due to lock timeout
DETAIL: query was: SELECT repack.repack_swap($1)
DEBUG: Disconnecting worker 0.
DEBUG: Disconnecting worker 1.
DEBUG: Disconnecting worker 2.
DEBUG: Disconnecting worker 3.
I checked the Postgres log and found this:
2025-10-12 04:05:27 UTC [postgres] [20154]: [1] LOG: process 20154 still waiting for AccessExclusiveLock on relation 216037684 of database 216037410 after 1000.030 ms
2025-10-12 04:05:27 UTC [postgres] [20154]: [2] DETAIL: Process holding the lock: 24977. Wait queue: 20154.
2025-10-12 04:05:27 UTC [postgres] [20154]: [3] STATEMENT: SELECT repack.repack_swap($1)
The process holding the lock, pid 24977, is an autovacuum to prevent wraparound:
-[ RECORD 1 ]----+-----------------------------------------------------------------------
pid | 24977
query | autovacuum: VACUUM pg_toast.pg_toast_216037680 (to prevent wraparound)
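For completeness, the blocker can be identified with a query along these lines (a sketch using the stock pg_blocking_pids() function and pg_stat_activity; 20154 is the waiting repack backend from the log above):

-- Find who is blocking the waiting repack_swap backend and what it is running.
SELECT blocker.pid,
       blocker.backend_type,
       blocker.query
FROM unnest(pg_blocking_pids(20154)) AS b(pid)
JOIN pg_stat_activity AS blocker ON blocker.pid = b.pid;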
So, here's how I see the situation:
- Repack tried to execute the repack_swap function, but without a short lock_timeout and a retry loop around it. Why is that?
- While repack's AccessExclusiveLock request sat in the queue, every other query on the table queued behind it and was blocked (the queue can be inspected as shown after the log excerpt below).
- After 45 seconds, the repack_swap operation failed due to a lock timeout.
2025-10-12 04:06:11 UTC [postgres] [20154]: [4] ERROR: canceling statement due to lock timeout
2025-10-12 04:06:11 UTC [postgres] [20154]: [5] STATEMENT: SELECT repack.repack_swap($1)
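While repack_swap was waiting, the lock queue on the relation could be seen with something like this (a sketch against pg_locks; 216037684 is the relation oid from the log above):

-- The autovacuum worker holds its lock (granted = true), repack's
-- AccessExclusiveLock request is not granted, and every later query on the
-- table queues behind that request.
SELECT pid, mode, granted
FROM pg_locks
WHERE locktype = 'relation'
  AND relation = 216037684
ORDER BY granted DESC;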
The 45-second timeout in that error is the lock_timeout configured on our side:
db=# show lock_timeout
lock_timeout
--------------
45s
(1 row)
So, if this had been an ordinary autovacuum rather than one to prevent wraparound, Postgres would have cancelled it automatically once repack_swap requested the conflicting lock, but that auto-cancellation does not apply to an autovacuum to prevent wraparound.

At that stage of the execution I expected repack to call repack_swap in a loop with a lock_timeout of 1 second (or less), but instead it just sat waiting for a long time and blocked the entire table. I also think that, at this stage, repack could terminate the autovacuum process without any problems, because the source table is about to be dropped anyway. Maybe we need to add this to the repack_swap step?
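To make the suggestion concrete, here is a rough sketch of the pattern I had in mind around the swap step (illustrative only, not pg_repack's actual code; the table name is a placeholder for the oid that repack passes as $1, and 24977 is the pid from the log above):

-- Attempt the swap with a short lock_timeout instead of queueing for 45 s;
-- the caller would retry on SQLSTATE 55P03 (lock_not_available).
BEGIN;
SET LOCAL lock_timeout = '1s';
SELECT repack.repack_swap('public.some_table'::regclass);  -- placeholder table name
COMMIT;

-- If the blocker is the anti-wraparound autovacuum on the old table, it could
-- be terminated first, since that table is dropped right after the swap anyway;
-- pg_terminate_backend() is a stock PostgreSQL function.
SELECT pg_terminate_backend(24977);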