
Onsite Stack Ansible Playbook

I have just published an Ansible playbook to deploy a stack with Docker, PhotoPrism, Samba, Sanoid, and ZFS. This stack was deployed and used in January 2025.

https://github.com/jonathanmtran/ais-onsite-stack

Background

For a retreat that took place in January 2025, I needed a server for the following functions:

  • File share to
    • store files (audio/video/slide decks/etc)
    • be an ingest target for photos and videos
  • Run a web-based photo viewer to facilitate
    • selecting photos for printing
    • curating photos for a (background) slideshow

For the file share requirement, the obvious choice was Samba since most of the clients were running Windows (there was also an Arch Linux client).

For a web-based photo viewer, we went with PhotoPrism. I wanted something that would simply import photos from a folder and display them. Immich may have worked as well, but here we are.

The data would reside on ZFS datasets. Sanoid would be configured to snapshot the datasets on a regular basis, because things happen and we do not want to lose all of our data.
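
On the Sanoid side, the policy boils down to a small config file. A minimal sketch, assuming a pool named tank, placeholder retention numbers, and the Debian package's bundled systemd timer (none of these are the playbook's actual values):

cat <<'EOF' > /etc/sanoid/sanoid.conf
[tank]
        use_template = production
        recursive = yes

[template_production]
        frequently = 0
        hourly = 24
        daily = 14
        monthly = 3
        yearly = 0
        autosnap = yes
        autoprune = yes
EOF
systemctl enable --now sanoid.timer   # assumes the Debian sanoid package and its bundled timer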

Development Process

Initially, the playbook was developed against an LXC container, but since we were planning to use ZFS, we switched the target host to a VM and kept iterating.

Deployment

NucBox G3 Plus in both lush green and titanium grey

Towards the end of the year, GMKtec’s NucBox G3 Plus went on sale for $130, so I picked that up and ordered a 32GB SODIMM, a Samsung 990 PRO 1TB NVMe drive, and a Dogfish 1TB M.2 SATA SSD.

Once all of the bits were in my hands, I swapped in the RAM and the NVMe drive and added the M.2 SATA SSD. Debian was then installed to the M.2 SATA SSD.

Now that the foundation was in place, we:

  • manually enabled the Debian Backports apt repository
  • installed ZFS (a rough sketch of these first two steps follows this list)
  • created the ZFS pool and datasets
  • updated the (Ansible) inventory
  • added our SSH key to the root user
  • (and finally) executed the playbook against the NucBox G3 Plus
  • (and really finally) ran smbpasswd -a to create the user in Samba
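
For reference, the first two steps above boil down to something like this; the Debian codename (bookworm) is an assumption:

echo "deb http://deb.debian.org/debian bookworm-backports main contrib" \
    > /etc/apt/sources.list.d/backports.list
apt update
apt install -y linux-headers-amd64
apt install -y -t bookworm-backports zfs-dkms zfsutils-linux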

The number of steps above was four too many. Since we were not (yet) able to script the creation of a ZFS pool in Ansible, that had to be done manually before actually executing the playbook. Additionally, smbpasswd prompts for a password interactively. Again, it might have been possible to script that as well, but we were running out of time to deploy.
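
If we do script it next time, smbpasswd can read the password from stdin with -s, along the lines of this sketch; the username and the $SMB_PASSWORD variable are placeholders, not values from the playbook:

printf '%s\n%s\n' "$SMB_PASSWORD" "$SMB_PASSWORD" | smbpasswd -s -a retreat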

Conclusion

In the field, the stack worked pretty well. The only hiccup we experienced was that (because reasons) the ZFS pool did not mount on boot, and when it did mount, it did not mount under /mnt/tank. Thankfully we were able to address that and move on.
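
For anyone chasing the same symptom, the usual suspects look roughly like this; the pool name and the stock OpenZFS systemd unit names on Debian are assumptions:

zfs get -r mountpoint tank                      # confirm where the datasets think they should mount
zfs set mountpoint=/mnt/tank tank               # reset the pool-level mountpoint if it drifted
zpool set cachefile=/etc/zfs/zpool.cache tank   # make sure the pool is in the import cache
systemctl enable zfs-import-cache.service zfs-mount.service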

One future improvement to the playbook would be to script the creation of the ZFS pool and the datasets. Luckily we noted our zpool and zfs invocations in README.md so creating them on the physical host was pretty trivial.
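
For illustration, pool and dataset creation amounts to something like the following; the device path, pool properties, and dataset names are placeholders, not the invocations recorded in README.md:

zpool create -o ashift=12 -O compression=lz4 -O mountpoint=/mnt/tank tank /dev/nvme0n1
zfs create tank/share        # Samba file share and ingest target
zfs create tank/photoprism   # PhotoPrism library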

It was nice to have a file share containing this year’s files. It was also a place for us to dump files that we needed last-minute. Additionally, we had a copy of files from previous years, which was nice to have when we needed to get something from the past.

Having an ingest target was very helpful: we were able to dump over 10 GB of photos and videos and have a viewer we could hand off to someone for curation while we did other things, like minor adjustments to photos and the actual printing of photos.

Should I get invited back to assist with this retreat again, I would have this as a part of my broader tech stack.

can’t change attributes MNT_DEFEXPORTED already set for mount

When restarting my TrueNAS (CORE) box, I got the following errors for a few of my datasets:

Apr 23 02:11:37 truenas 1 2022-04-23T02:11:37.549270+00:00 truenas.local mountd 1176 - - can't change attributes for /mnt/tank/user/jmtran: MNT_DEFEXPORTED already set for mount 0xfffff8016cc87000
Apr 23 02:11:37 truenas 1 2022-04-23T02:11:37.549276+00:00 truenas.local mountd 1176 - - bad exports list line '/mnt/tank/user/jmtran'

They were not show-stoppers, but they were slightly annoying. Since exports is an NFS thing, let’s take a look at the sharenfs property:

$ zfs get -t filesystem sharenfs                         
NAME                                                   PROPERTY  VALUE     SOURCE
...
tank/user/jmtran                                       sharenfs  on        local

The source of the sharenfs property for some datasets was either local or received. This was because tank came from a Linux system, where I had manually set sharenfs to share those datasets via NFS.

To make TrueNAS happy, I did the following so that the source of the property became default:

# zfs inherit sharenfs tank/user/jmtran
# zfs get -t filesystem sharenfs tank/user/jmtran
NAME              PROPERTY  VALUE     SOURCE
tank/user/jmtran  sharenfs  off       default
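
Since more than one dataset was affected, the same fix can be applied in bulk by filtering on the property source. A sketch, assuming the pool is named tank:

zfs get -r -t filesystem -s local,received -H -o name sharenfs tank | while read -r ds; do
    zfs inherit sharenfs "$ds"
done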

Yet another post about setting up alerts

tl;dr Set up e-mail alerts so that you are alerted when things (start to) go wrong.

This morning, I got an email from my FreeNAS server with the following:

New alerts:
* Boot Pool Status Is ONLINE: One or more devices has experienced an unrecoverable error. An attempt was made to correct the error. Applications are unaffected.

Logging in to the server, I ran zpool status freenas-boot to see the state of the pool:

# zpool status
  pool: freenas-boot
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
	attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
	using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://illumos.org/msg/ZFS-8000-9P
  scan: scrub repaired 98.5K in 0 days 01:08:27 with 0 errors on Wed May 27 04:53:27 2020
config:

	NAME                                            STATE     READ WRITE CKSUM
	freenas-boot                                    ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/44bef123-fee5-11e4-9e92-0cc47a4a5aff  ONLINE       0     0     1
	    ada0p2                                      ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

Luckily, this pool is made up of a mirrored vdev: a USB flash drive (gptid/44bef123-fee5-11e4-9e92-0cc47a4a5aff) and an SSD (ada0p2). Per zpool status, I ran zpool clear to clear the error. The next scrub will be towards the end of the month (June), so we will see if this error comes back. In the meantime, I have started to look into replacing the flash drive with a SATA DOM.

# zpool clear freenas-boot

# zpool status freenas-boot
  pool: freenas-boot
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
	Expect reduced performance.
action: Replace affected devices with devices that support the
	configured block size, or migrate data to a properly configured
	pool.
  scan: scrub repaired 98.5K in 0 days 01:08:27 with 0 errors on Wed May 27 04:53:27 2020
config:

	NAME                                            STATE     READ WRITE CKSUM
	freenas-boot                                    ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    gptid/44bef123-fee5-11e4-9e92-0cc47a4a5aff  ONLINE       0     0     0
	    ada0p2                                      ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

The moral of this story: set up email alerts, or some other kind of alerting system, so that you know when things (start to) go wrong.
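
Outside of an appliance like FreeNAS, even a small cron job gets you most of the way there. A sketch that mails the output of zpool status -x whenever a pool is unhealthy; the recipient address is a placeholder, and mail(1) needs a working MTA behind it:

#!/bin/sh
# Drop into /etc/cron.hourly or wire it up to a systemd timer.
STATUS=$(zpool status -x)
if [ "$STATUS" != "all pools are healthy" ]; then
        printf '%s\n' "$STATUS" | mail -s "ZFS pool problem on $(hostname)" root@example.com
fi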