ZFSLocalPoolCreation
This is fun: we are about to unleash ZFS capabilities on our nas machine. Fasten your seatbelts and get ready.
- Suppose that the NAS has /dev/sda as root, and that 3 disks will be used to create a raidz ZFS vdev, equivalent to a RAID-5 configuration.
- Install 3 unformatted disks. You do not even need to create a GPT partition table; the ZFS tools will handle that too.
# -o ashift=12 : align writes to 4K sectors (ashift is log2 of the sector size)
# -f           : force use of the vdevs
# zfspool      : name of the pool
# raidz        : raidz1 strategy, similar to RAID 5 of mdadm: you can lose 1 disk
# The disks are given by their stable /dev/disk/by-id names, which encode the
# serial numbers of /dev/sdb, /dev/sdc and /dev/sdd.
# (Comments cannot follow a trailing backslash, so they live up here.)
zpool create -o ashift=12 -f zfspool raidz \
    /dev/disk/by-id/ata-VBOX_HARDDISK_VB33e4cb68-e66806e3 \
    /dev/disk/by-id/ata-VBOX_HARDDISK_VB736b0a1e-d314dba7 \
    /dev/disk/by-id/ata-VBOX_HARDDISK_VB3dfd818f-9b413e39
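The pool references disks by their stable /dev/disk/by-id names rather than /dev/sdX, which can change between boots. A quick way to see the mapping on your own machine (the fallback message is just there so the snippet degrades gracefully where the directory does not exist, e.g. in a container):

```shell
# Show how the stable by-id links map to kernel device names (sdb, sdc, ...).
ls -l /dev/disk/by-id/ 2>/dev/null || echo "no /dev/disk/by-id on this system"
```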
I created 3 (virtual) disks of 10GB each; the resulting pool offers 20GB of usable space, since one disk's worth (10GB) goes to parity.
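The usable-space figure follows from raidz1's geometry: with N disks, data lives on N-1 of them and one disk's worth of capacity goes to parity. A quick sanity check of the arithmetic, using the numbers from this 3 x 10GB example:

```shell
# raidz1 with N disks stores data on N-1 of them; one disk's worth is parity.
DISKS=3
SIZE_GB=10
USABLE_GB=$(( (DISKS - 1) * SIZE_GB ))
echo "usable: ${USABLE_GB}GB, parity: ${SIZE_GB}GB"
# prints: usable: 20GB, parity: 10GB
```

(zfs list below reports slightly less, 19.0G, because of metadata overhead and the difference between decimal and binary gigabytes.)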
Now we have a pool. Test it with zpool status:
user@nas:~$ zpool status
  pool: zfspool
 state: ONLINE
config:

	NAME                                       STATE     READ WRITE CKSUM
	zfspool                                    ONLINE       0     0     0
	  raidz1-0                                 ONLINE       0     0     0
	    ata-VBOX_HARDDISK_VB33e4cb68-e66806e3  ONLINE       0     0     0
	    ata-VBOX_HARDDISK_VB736b0a1e-d314dba7  ONLINE       0     0     0
	    ata-VBOX_HARDDISK_VB3dfd818f-9b413e39  ONLINE       0     0     0

errors: No known data errors
- It is advisable to set the xattr=sa property at the pool level, so that it is inherited by the filesystem(s) we will create on it. With xattr=sa, extended attributes are stored in the inodes as system attributes instead of in hidden directories, so they do not clutter differences and get in the way.
user@nas:~/src/zfs-backup$ sudo zfs set xattr=sa zfspool
user@nas:~/src/zfs-backup$ zfs get xattr
NAME PROPERTY VALUE SOURCE
zfspool xattr sa local
zfspool/Documents xattr sa inherited from zfspool
zfspool/Documents@2024.07.26-00.48.47 xattr sa inherited from zfspool
By default the pool is mounted at /zfspool:
user@nas:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool 512K 19.0G 128K /zfspool
user@nas:~$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 392M 4.2M 388M 2% /run
/dev/mapper/ubuntu--vg-ubuntu--lv 6.1G 3.0G 2.8G 52% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/sda2 1.7G 95M 1.5G 6% /boot
tmpfs 392M 12K 392M 1% /run/user/1000
zfspool 20G 128K 20G 1% /zfspool
I like to mount it under /mnt/raid, so let's create that mountpoint.
user@nas:~$ sudo mkdir -p /mnt/raid
user@nas:~$ sudo zfs set mountpoint=/mnt/raid zfspool
user@nas:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool 607K 19.0G 128K /mnt/raid
Let's then create a filesystem to hold our documents. I actually created several, to separate music, photos and documents; here we will use just one. If you want more, simply repeat the command for each filesystem you wish to create.
user@nas:~$ sudo zfs create zfspool/Documents
user@nas:~$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zfspool 815K 19.0G 128K /mnt/raid
zfspool/Documents 128K 19.0G 128K /mnt/raid/Documents
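If you do want one dataset per category, the step above is easy to repeat. A dry-run sketch that only prints the commands it would run (the pool name is the one used here; Music and Photos are example dataset names; drop the echo to execute for real):

```shell
POOL=zfspool
for ds in Documents Music Photos; do
    # echo makes this a dry run; remove it to actually create the datasets
    echo sudo zfs create "${POOL}/${ds}"
done
```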
Let's enable LZ4 compression on our filesystem. Its performance cost is negligible, and it can save us some space with compressible files.
user@nas:~$ sudo zfs set compression=lz4 zfspool/Documents
user@nas:~$ zfs get compression zfspool/Documents
NAME PROPERTY VALUE SOURCE
zfspool/Documents compression lz4 local
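Once real data lands on the dataset, you can check how much LZ4 actually saved: compressratio is a read-only property that ZFS keeps up to date. A sketch, using the dataset name from this walkthrough (it needs an existing pool, so run it on the NAS itself):

```shell
# Reports e.g. 1.00x for incompressible data, higher for text and the like.
zfs get compressratio zfspool/Documents
```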
Now we have our filesystem mounted on /mnt/raid/Documents. We need to make it accessible to our unprivileged user (here named user):
user@nas:/mnt$ ls -la
total 9
drwxr-xr-x 3 root root 4096 Jul 20 23:32 .
drwxr-xr-x 23 root root 4096 Jul 20 23:37 ..
drwxr-xr-x 3 root root 3 Jul 20 23:37 raid
user@nas:/mnt$ sudo chown user:user /mnt/raid
user@nas:/mnt$ ls -la
total 9
drwxr-xr-x 3 root root 4096 Jul 20 23:32 .
drwxr-xr-x 23 root root 4096 Jul 20 23:37 ..
drwxr-xr-x 3 user user 3 Jul 20 23:37 raid
user@nas:/mnt$ cd raid
user@nas:/mnt/raid$ ls -la
total 5
drwxr-xr-x 3 user user 3 Jul 20 23:37 .
drwxr-xr-x 3 root root 4096 Jul 20 23:32 ..
drwxr-xr-x 2 root root 2 Jul 20 23:37 Documents
user@nas:/mnt/raid$ sudo chown user:user Documents
user@nas:/mnt/raid$ ls -la
total 5
drwxr-xr-x 3 user user 3 Jul 20 23:37 .
drwxr-xr-x 3 root root 4096 Jul 20 23:32 ..
drwxr-xr-x 2 user user 2 Jul 20 23:37 Documents
user@nas:/mnt/raid$ cd Documents
user@nas:/mnt/raid/Documents$ ls -la
total 1
drwxr-xr-x 2 user user 2 Jul 20 23:37 .
drwxr-xr-x 3 user user 3 Jul 20 23:37 ..
user@nas:/mnt/raid/Documents$ touch foo
user@nas:/mnt/raid/Documents$ ls -la
total 2
drwxr-xr-x 2 user user 3 Jul 20 23:46 .
drwxr-xr-x 3 user user 3 Jul 20 23:37 ..
-rw-rw-r-- 1 user user 0 Jul 20 23:46 foo
user@nas:/mnt/raid/Documents$ rm foo
user@nas:/mnt/raid/Documents$ ls -la
total 1
drwxr-xr-x 2 user user 2 Jul 20 23:46 .
drwxr-xr-x 3 user user 3 Jul 20 23:37 ..
user@nas:/mnt/raid/Documents$
Looks like we have a writable Documents filesystem on our raidz-backed ZFS pool, with 20GB of available disk space.
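One ongoing task is worth setting up before moving on: periodic scrubs, which read every block in the pool and repair any silent corruption from parity. A sketch, assuming the pool name from this walkthrough; the monthly schedule is only an example, and on Debian/Ubuntu the zfsutils package may already ship a scrub cron job, so check /etc/cron.d first:

```shell
sudo zpool scrub zfspool    # start a scrub of the whole pool now
zpool status zfspool        # shows scrub progress and, later, its results
# Example /etc/cron.d entry for a monthly scrub at 03:00 on the 1st:
# 0 3 1 * * root /usr/sbin/zpool scrub zfspool
```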
Now head on to the backup server to create the backup pool.
ZFS Backup, (c) 2024 Luca Finzi Contini - Use it at your own risk but enjoy doing so :)