
Proxmox

VLAN

First, set up the VLANs on the rest of the network.

Leave the switch port connected to the Proxmox node 'untagged' and ensure there is a primary VLAN.

guest

To assign VMs/containers to a VLAN, mark the bridge you are using (vmbr0) as "VLAN-aware" in the network settings of the node.

Then in either the hardware settings of the VM or the network settings of the container, add the VLAN tag.
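The same can be done from the shell; a minimal sketch, assuming VM 100 and container 101 on vmbr0 with VLAN tag 10 (IDs and options are placeholders):

qm set 100 --net0 virtio,bridge=vmbr0,tag=10
pct set 101 --net0 name=eth0,bridge=vmbr0,ip=dhcp,tag=10

Note that qm set/pct set replace the whole net0 entry, so carry over the existing options (MAC address etc.) when adding the tag.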

Since the port on the switch is still 'untagged', this will remove access to your guests.

host

Once you switch the port on your switch to 'tagged', you'll regain access to the guests, but you'll lose access to the management UI if you are accessing Proxmox from a VLAN, since the host is still on the default VLAN 1.

To retain access, add a Linux VLAN named vmbr0.<vlan id> and assign it an IP before switching the port to 'tagged'. Also remove the static IP/gateway from the non-VLAN interface.

I ran into the issue that on reboot I lost access to the Proxmox host; switching the port to untagged and back to tagged restored access.

It might have something to do with the VLANs configured on the interface (https://forum.proxmox.com/threads/no-vlan-connection-after-reboot.138548/). Below I limit the bridge VLANs to 2-100; it seems to work, but I have only done a single reboot since, so time will tell whether this solved it.

/etc/network/interfaces

auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-100

# IP for the host on VLAN 10
auto vmbr0.10
iface vmbr0.10 inet static
        address 192.168.1.8/24
        gateway 192.168.1.1

source /etc/network/interfaces.d/*
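Assuming ifupdown2 is installed (the default on current Proxmox releases), the new configuration can be applied without a reboot:

ifreload -a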

fstrim

When data is deleted on an LXC container's LVM-Thin disk, the space is not released back to the host. To actually release it, run pct fstrim <container ID> on the host.
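For example, to trim a single container (ID 101 is just a placeholder):

pct fstrim 101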

I set up a systemd unit on the host to run pct fstrim every week. Most of it is a copy of /lib/systemd/system/fstrim.*

/etc/systemd/system/pct-fstrim.timer

[Unit]
Description=Discard unused LXC container blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container
ConditionPathExists=!/etc/initrd-release

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
RandomizedDelaySec=6000

[Install]
WantedBy=timers.target

/etc/systemd/system/pct-fstrim.service

[Unit]
Description=Discard unused blocks from LXC containers
Documentation=man:fstrim(8)
ConditionVirtualization=!container

[Service]
Type=oneshot
ExecStart=/bin/bash -c "/usr/sbin/pct list | /usr/bin/awk '/^[0-9]/ {print $1}' | /usr/bin/xargs --max-args 1 /usr/sbin/pct fstrim"
PrivateDevices=no
PrivateNetwork=no
PrivateUsers=no
ProtectKernelTunables=yes
ProtectKernelModules=yes
ProtectControlGroups=yes
MemoryDenyWriteExecute=yes
SystemCallFilter=@default @file-system @basic-io @system-service @mount

Enable and start the timer.
systemctl enable --now pct-fstrim.timer
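To verify the timer is scheduled:

systemctl list-timers pct-fstrim.timer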

LXC loses IP

It seems Proxmox does not clean up container interfaces when you add and then remove them in the GUI. I ran into an eth1 defined in /etc/network/interfaces inside the container while that interface no longer existed in the Proxmox GUI. This caused errors in the networking.service unit on boot, preventing the network interface from coming up.

Removing the entry manually solved the issue.
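For illustration, such a leftover stanza in the container's /etc/network/interfaces typically looks something like this (eth1 being the interface that no longer exists in the GUI):

auto eth1
iface eth1 inet dhcp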

Proxmox Backup Server

PBS allows for more fine-grained control over backups and can also be used by other systems via the proxmox-backup-client.
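As a sketch of such a client backup, assuming a hypothetical repository user, PBS address and datastore name:

proxmox-backup-client backup root.pxar:/ --repository backup@pbs@192.168.10.5:nas-backup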

Setup with NAS

Although "they" say this is not ideal, I want the backup server to use my NAS storage instead of giving it its own hardware. I run PBS in its own LXC container on Proxmox, with the NFS share mounted inside the container.
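A minimal sketch of that setup, assuming a privileged container with ID 105 and a hypothetical NAS export; NFS mounts inside LXC require the mount feature to be enabled on the host:

pct set 105 --features mount=nfs

Then inside the container, an /etc/fstab entry along these lines:

192.168.1.5:/volume1/backups /mnt/datastore nfs defaults 0 0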

Due to the ownership/permission changes PBS applies when initializing a datastore, an error will be shown if you create a datastore directly on the NFS mount. What I did instead (a rough command sketch follows the list):

  • unmount the NFS share
  • create the datastore at the mount location
  • move the resulting .chunks/ and .lock to another directory
  • mount the NFS share again
  • move the files back
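A rough sketch of those steps, assuming the NFS share is normally mounted at /mnt/datastore and the datastore is named nas-backup (both placeholders), run inside the PBS container:

umount /mnt/datastore
proxmox-backup-manager datastore create nas-backup /mnt/datastore
mkdir /tmp/datastore-init
mv /mnt/datastore/.chunks /mnt/datastore/.lock /tmp/datastore-init/
mount /mnt/datastore    # assumes an fstab entry for the NFS share
mv /tmp/datastore-init/.chunks /tmp/datastore-init/.lock /mnt/datastore/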

This will complain about not being able to retain file permissions due to the NFS user mapping (squash) settings. Regardless of the messages, this gives you a working datastore.
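You can confirm the datastore is registered with proxmox-backup-manager:

proxmox-backup-manager datastore list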
