Issue Description
When using .volume files, these produce services with RemainAfterExit=yes. This means the volume is created only once: when it is subsequently deleted (which I did because I had updated the volume options in the .volume file), it is not recreated by the quadlet-generated service files, but instead created implicitly by podman run with default options.
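For reference, the generated unit can be inspected by running the quadlet generator in dry-run mode; the [Service] section of foo-volume.service should look roughly like this (abridged, exact ExecStart flags may vary between versions):
❯ sudo /usr/lib/systemd/system-generators/podman-system-generator --dryrun
...
[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/podman volume create --ignore systemd-foo
...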
Steps to reproduce the issue
❯ sudo tee /etc/containers/systemd/foo.container > /dev/null << EOF
[Container]
Image=docker.io/hello-world:latest
Volume=foo.volume:/foo
EOF
❯ sudo tee /etc/containers/systemd/foo.volume > /dev/null << EOF
[Volume]
User=1000
EOF
❯ sudo systemctl daemon-reload
❯ sudo systemctl start foo
❯ sudo ls -ld /var/lib/containers/storage/volumes/systemd-foo/_data
drwxr-xr-x 2 matthijs root 4096 Jan 5 18:27 /var/lib/containers/storage/volumes/systemd-foo/_data
❯ sudo podman volume rm systemd-foo
systemd-foo
❯ sudo systemctl start foo
❯ sudo ls -ld /var/lib/containers/storage/volumes/systemd-foo/_data
drwxr-xr-x 2 root root 4096 Jan 5 18:27 /var/lib/containers/storage/volumes/systemd-foo/_data
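The difference should also be visible in podman volume inspect: after the first start the volume carries the uid option from the .volume file, while after removing it and starting foo again the recreated volume has no options set:
❯ sudo podman volume inspect systemd-foo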
Describe the results you received
The volume is not recreated by foo-volume.service with the right options (uid=1000). Instead, the container is started and podman implicitly creates the volume with the default options (uid=0).
This is also visible in the journal: the first time, foo-volume.service is started, but the second time it is not:
❯ sudo journalctl -u foo.service -u foo-volume.service -e
Jan 05 18:27:14 zozo systemd[1]: Starting foo-volume.service...
Jan 05 18:27:14 zozo podman[55776]: 2026-01-05 18:27:14.331612086 +0100 CET m=+0.023455833 volume create systemd-foo
Jan 05 18:27:14 zozo foo-volume[55776]: systemd-foo
Jan 05 18:27:14 zozo systemd[1]: Finished foo-volume.service.
Jan 05 18:27:14 zozo systemd[1]: Starting foo.service...
Jan 05 18:27:14 zozo podman[55792]: 2026-01-05 18:27:14.370545545 +0100 CET m=+0.021431909 container create 59254861fda8aa05f5eaa939459f7ff4ae8d410b5fd9804702cd07fb8807de1f (image=docker.io/library/h>
Jan 05 18:27:14 zozo podman[55792]: 2026-01-05 18:27:14.357800554 +0100 CET m=+0.008686935 image pull 1b44b5a3e06a9aae883e7bf25e45c100be0bb81a0e01b32de604f3ac44711634 docker.io/hello-world:latest
Jan 05 18:27:14 zozo podman[55792]: 2026-01-05 18:27:14.46780586 +0100 CET m=+0.118692244 container init 59254861fda8aa05f5eaa939459f7ff4ae8d410b5fd9804702cd07fb8807de1f (image=docker.io/library/hell>
Jan 05 18:27:14 zozo podman[55792]: 2026-01-05 18:27:14.473270748 +0100 CET m=+0.124157116 container start 59254861fda8aa05f5eaa939459f7ff4ae8d410b5fd9804702cd07fb8807de1f (image=docker.io/library/he>
Jan 05 18:27:14 zozo systemd-foo[55914]: Hello from Docker!
Jan 05 18:27:14 zozo systemd[1]: Started foo.service.
Jan 05 18:27:14 zozo foo[55792]: 59254861fda8aa05f5eaa939459f7ff4ae8d410b5fd9804702cd07fb8807de1f
Jan 05 18:27:14 zozo podman[55945]: 2026-01-05 18:27:14.494566818 +0100 CET m=+0.013907817 container died 59254861fda8aa05f5eaa939459f7ff4ae8d410b5fd9804702cd07fb8807de1f (image=docker.io/library/hel>
Jan 05 18:27:14 zozo podman[55945]: 2026-01-05 18:27:14.892265994 +0100 CET m=+0.411607130 container remove 59254861fda8aa05f5eaa939459f7ff4ae8d410b5fd9804702cd07fb8807de1f (image=docker.io/library/h>
Jan 05 18:27:14 zozo systemd[1]: foo.service: Deactivated successfully.
Jan 05 18:27:28 zozo systemd[1]: Starting foo.service...
Jan 05 18:27:29 zozo podman[56286]: 2026-01-05 18:27:29.00155174 +0100 CET m=+0.026409568 volume create systemd-foo
Jan 05 18:27:29 zozo podman[56286]: 2026-01-05 18:27:29.010531364 +0100 CET m=+0.035389202 container create bed42283a7129114229d67cc03eb7a454328de82362c0c7db83ac3f1ead13552 (image=docker.io/library/h>
Jan 05 18:27:29 zozo podman[56286]: 2026-01-05 18:27:28.984573286 +0100 CET m=+0.009431165 image pull 1b44b5a3e06a9aae883e7bf25e45c100be0bb81a0e01b32de604f3ac44711634 docker.io/hello-world:latest
Jan 05 18:27:29 zozo podman[56286]: 2026-01-05 18:27:29.111775103 +0100 CET m=+0.136632946 container init bed42283a7129114229d67cc03eb7a454328de82362c0c7db83ac3f1ead13552 (image=docker.io/library/hel>
Jan 05 18:27:29 zozo podman[56286]: 2026-01-05 18:27:29.115609442 +0100 CET m=+0.140467272 container start bed42283a7129114229d67cc03eb7a454328de82362c0c7db83ac3f1ead13552 (image=docker.io/library/he>
Jan 05 18:27:29 zozo systemd[1]: Started foo.service.
Jan 05 18:27:29 zozo systemd-foo[56407]:
Jan 05 18:27:29 zozo systemd-foo[56407]: Hello from Docker!
Jan 05 18:27:29 zozo foo[56286]: bed42283a7129114229d67cc03eb7a454328de82362c0c7db83ac3f1ead13552
Jan 05 18:27:29 zozo podman[56437]: 2026-01-05 18:27:29.132423223 +0100 CET m=+0.013521511 container died bed42283a7129114229d67cc03eb7a454328de82362c0c7db83ac3f1ead13552 (image=docker.io/library/hel>
Jan 05 18:27:29 zozo podman[56437]: 2026-01-05 18:27:29.526874745 +0100 CET m=+0.407973013 container remove bed42283a7129114229d67cc03eb7a454328de82362c0c7db83ac3f1ead13552 (image=docker.io/library/h>
Jan 05 18:27:29 zozo systemd[1]: foo.service: Deactivated successfully.
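The underlying reason is that foo-volume.service is a oneshot unit with RemainAfterExit=yes: after the first run it stays in the active (exited) state, so the second systemctl start foo does not trigger it again. This can be verified at this point with systemctl show, which should report ActiveState=active, SubState=exited and RemainAfterExit=yes:
❯ sudo systemctl show foo-volume.service -p ActiveState -p SubState -p RemainAfterExit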
Describe the results you expected
I would expect the volume to be recreated with the original options (or, as in my original case, with updated options if they were changed in the meantime).
Suggested fix
The obvious fix for this issue would be to not set RemainAfterExit=yes for volume units. If I set that explicitly in the volume file, things behave as expected:
❯ sudo tee /etc/containers/systemd/foo.volume > /dev/null << EOF
[Volume]
User=1000
[Service]
RemainAfterExit=no
EOF
❯ sudo tee /etc/containers/systemd/foo.container > /dev/null << EOF
[Container]
Image=docker.io/hello-world:latest
Volume=foo.volume:/foo
EOF
❯ sudo systemctl daemon-reload
❯ sudo systemctl start foo
❯ sudo ls -ld /var/lib/containers/storage/volumes/systemd-foo/_data
drwxr-xr-x 2 matthijs root 4096 Jan 5 18:40 /var/lib/containers/storage/volumes/systemd-foo/_data
❯ sudo podman volume rm systemd-foo
systemd-foo
❯ sudo systemctl start foo
❯ sudo ls -ld /var/lib/containers/storage/volumes/systemd-foo/_data
drwxr-xr-x 2 matthijs root 4096 Jan 5 18:40 /var/lib/containers/storage/volumes/systemd-foo/_data
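To confirm that the override is picked up by the generator, the effective value can be checked with systemctl show, which should now report RemainAfterExit=no:
❯ sudo systemctl show foo-volume.service -p RemainAfterExit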
AFAICS that would involve changing true to false in this line:
podman/pkg/systemd/quadlet/quadlet.go, line 1197 (commit ed5f4d6):
defaultOneshotServiceGroup(service, true)
There might be unintended consequences; I have only just started using podman, so I cannot fully judge the impact of this change :-)
podman info output
host:
arch: amd64
buildahVersion: 1.42.1
cgroupControllers:
- memory
- pids
cgroupManager: systemd
cgroupVersion: v2
conmon:
package: conmon_2.1.12-4_amd64
path: /usr/bin/conmon
version: 'conmon version 2.1.12, commit: unknown'
cpuUtilization:
idlePercent: 97.72
systemPercent: 0.49
userPercent: 1.8
cpus: 22
databaseBackend: sqlite
distribution:
codename: plucky
distribution: ubuntu
version: "25.04"
eventLogger: journald
freeLocks: 2048
hostname: zozo
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 6.14.0-36-generic
linkmode: dynamic
logDriver: journald
memFree: 2017853440
memTotal: 32511954944
networkBackend: netavark
networkBackendInfo:
backend: netavark
dns:
package: aardvark-dns_1.12.2-2_amd64
path: /usr/lib/podman/aardvark-dns
version: aardvark-dns 1.12.2
package: netavark_1.12.1-9_amd64
path: /usr/lib/podman/netavark
version: netavark 1.12.1
ociRuntime:
name: runc
package: runc_1.3.3-0ubuntu1~25.04.3_amd64
path: /usr/bin/runc
version: |-
runc version 1.3.3-0ubuntu1~25.04.3
spec: 1.2.1
go: go1.24.2
libseccomp: 2.5.5
os: linux
pasta:
executable: /usr/bin/pasta
package: passt_0.0~git20250217.a1e48a0-1_amd64
version: ""
remoteSocket:
exists: true
path: /run/user/1000/podman/podman.sock
rootlessNetworkCmd: pasta
security:
apparmorEnabled: false
capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
rootless: true
seccompEnabled: true
seccompProfilePath: /usr/share/containers/seccomp.json
selinuxEnabled: false
serviceIsRemote: false
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns_1.2.1-1build2_amd64
version: |-
slirp4netns version 1.2.1
commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
libslirp: 4.8.0
SLIRP_CONFIG_VERSION_MAX: 4
libseccomp: 2.5.5
swapFree: 27957260288
swapTotal: 34359734272
uptime: 531h 30m 56.00s (Approximately 22.12 days)
variant: ""
plugins:
authorization: null
log:
- k8s-file
- none
- passthrough
- journald
network:
- bridge
- macvlan
- ipvlan
volume:
- local
registries: {}
store:
configFile: /home/matthijs/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /home/matthijs/.local/share/containers/storage
graphRootAllocated: 1055742447616
graphRootUsed: 973944573952
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "true"
Supports d_type: "true"
Supports shifting: "false"
Supports volatile: "true"
Using metacopy: "false"
imageCopyTmpDir: /var/tmp
imageStore:
number: 0
runRoot: /run/user/1000/containers
transientStore: false
volumePath: /home/matthijs/.local/share/containers/storage/volumes
version:
APIVersion: 5.7.0
BuildOrigin: Debian
Built: 1764725792
BuiltTime: Wed Dec 3 02:36:32 2025
GitCommit: ""
GoVersion: go1.24.9
Os: linux
OsArch: linux/amd64
Version: 5.7.0
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
No (but only one point release behind, and looking at the code, this has not changed in git since then).