Pi-hole v6 was recently released and adds support for HTTPS. In /etc/pihole/pihole.toml under the webserver.tls configuration block, the documentation mentions that Pi-hole expects the certificate and the key in the same .pem file.
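In practice, that means concatenating the private key and the certificate chain into one file and pointing Pi-hole at it. A sketch of what that might look like (the source certificate paths are illustrative; adjust them to wherever your certificate actually lives):

```
# Combine the private key and the certificate chain into a single PEM
# (source paths are illustrative examples)
cat /etc/letsencrypt/live/pihole.example.com/privkey.pem \
    /etc/letsencrypt/live/pihole.example.com/fullchain.pem \
    > /etc/pihole/tls.pem
chmod 600 /etc/pihole/tls.pem

# Reference the combined file from the webserver.tls block in
# /etc/pihole/pihole.toml, along the lines of:
#   [webserver.tls]
#   cert = "/etc/pihole/tls.pem"

# Restart pihole-FTL so the new certificate is picked up
systemctl restart pihole-FTL
```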
My 2025 updated approach to organizing photos
Taking photos is fun, but organizing them might not be as much fun. Here is a snapshot of my approach to ingesting and organizing photos.
Running a reverse proxy to serve services on the Internet
I have the occasional need to make a local/self-hosted service reachable on the world wide web. However, I do not want to host such services on my cloud VMs, for reasons such as:
- RAM: I am currently using the lowest-priced tier of VMs, which means that I get only 1 GB of RAM
- Storage: For a similar reason as RAM, my disk is only 25 GB
- CPU: Having access to more than 1 core would be nice
Although the easy answer is to provision a bigger VM, I have a small Proxmox cluster that is more than capable of running VMs and (LXC) containers with access to more compute, RAM, and storage. Running services in separate instances is also great for separation.
While services like Tailscale Funnel or Cloudflare Tunnel exist, I wanted to roll my own as a learning exercise.
Onsite Stack Ansible Playbook
I have just published an Ansible playbook to deploy a stack with Docker, PhotoPrism, Samba, Sanoid, and ZFS. This stack was deployed and used in January 2025.
https://github.com/jonathanmtran/ais-onsite-stack
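If you want to kick the tires, an invocation would look something like the following (the inventory and playbook file names here are placeholders; see the repository's README for the actual entry point):

```
# Grab the playbook
git clone https://github.com/jonathanmtran/ais-onsite-stack.git
cd ais-onsite-stack

# Run it against your inventory (file names are illustrative)
ansible-playbook -i inventory.yml site.yml
```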
Background
For a retreat that took place in January 2025, I needed a server for the following functions:
- File share to
  - store files (audio/video/slide decks/etc.)
  - be an ingest target for photos and videos
- Run a web-based photo viewer to facilitate
  - selecting photos for printing
  - curating photos for a (background) slideshow
For the file share requirement, the obvious choice was Samba, since most of the clients were running Windows. There was also an Arch Linux client.
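The kind of share definition involved is nothing exotic. A minimal sketch (the share name, path, and user below are made-up examples, not necessarily what the playbook generates):

```
# Illustrative Samba share definition; the share name, path, and
# user are hypothetical examples
cat >> /etc/samba/smb.conf <<'EOF'
[ingest]
   path = /mnt/tank/ingest
   read only = no
   valid users = retreat
EOF
systemctl restart smbd
```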
For a web-based photo viewer, we went with PhotoPrism. I wanted something that would simply import photos from a folder and display them. Immich may have worked as well, but here we are.
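PhotoPrism publishes an official container image, so running it can be as simple as something like this (the host paths, port mapping, and admin password are illustrative; the playbook's actual deployment may differ):

```
# Illustrative PhotoPrism container; host paths and the admin
# password are example values
docker run -d \
  --name photoprism \
  -p 2342:2342 \
  -e PHOTOPRISM_ADMIN_PASSWORD="change-me" \
  -v /mnt/tank/photos:/photoprism/originals \
  -v /mnt/tank/ingest:/photoprism/import \
  photoprism/photoprism:latest
```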
The data will reside on ZFS datasets. Sanoid will be configured to snapshot the datasets on a regular basis, because things happen and we do not want to lose all of our data.
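Sanoid's policy lives in /etc/sanoid/sanoid.conf; a sketch of the sort of policy I mean (dataset names and retention counts are illustrative, not the playbook's exact values):

```
# Illustrative sanoid policy; dataset names and retention counts
# are example values
cat > /etc/sanoid/sanoid.conf <<'EOF'
[tank/photos]
        use_template = production
        recursive = yes

[template_production]
        hourly = 24
        daily = 7
        monthly = 3
        autosnap = yes
        autoprune = yes
EOF
# sanoid's systemd timer then takes (and prunes) snapshots on schedule
```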
Development Process
Initially, the playbook was developed against an LXC container, but since we were planning to use ZFS, we switched the target host to a VM and kept iterating.
Deployment

Towards the end of the year, GMKtec’s NucBox G3 Plus went on sale for $130, so I picked that up and ordered a 32GB SODIMM, a Samsung 990 PRO 1TB NVMe drive, and a Dogfish 1TB M.2 SATA SSD.
Once all of the bits were in my hands, I swapped in the RAM and the NVMe drive and added the M.2 SATA SSD. Debian was then installed on the M.2 SATA SSD.
Now that the foundation was in place, we took the following steps (sketched as commands after this list):
- manually enabled the Debian Backports apt repository
- installed ZFS
- created the ZFS pool and datasets
- updated the (Ansible) inventory
- added our SSH key to the root user
- (and finally) executed the playbook against the NucBox G3 Plus
- (and really finally) ran smbpasswd -a to create the user in Samba
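For reference, the manual portion of those steps looked roughly like the following (the pool layout, dataset names, and playbook entry point are illustrative; the exact zpool and zfs invocations are recorded in the repository's README.md):

```
# Enable backports and install ZFS (assumes Debian 12 "bookworm")
echo "deb http://deb.debian.org/debian bookworm-backports main contrib" \
  > /etc/apt/sources.list.d/backports.list
apt update
apt install linux-headers-amd64
apt install -t bookworm-backports zfs-dkms zfsutils-linux

# Create the pool and datasets (device and names are illustrative)
zpool create -m /mnt/tank tank /dev/nvme0n1
zfs create tank/photos
zfs create tank/ingest

# Run the playbook, then create the Samba user interactively
ansible-playbook -i inventory.yml site.yml
smbpasswd -a retreat
```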
The number of steps above was four too many. Since we were not able to script the creation of a ZFS pool in Ansible (yet), that had to be done manually before actually executing the playbook. Additionally, smbpasswd requires a password to be entered interactively. Again, that might have been possible to script, but we were running out of time to deploy.
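In hindsight, smbpasswd can read the password from stdin via its -s flag, so something along these lines could have been wrapped in a task (the user name is illustrative; in Ansible you would want no_log on any task handling the password):

```
# Feed the password to smbpasswd non-interactively; it reads the
# new password twice from stdin when -s is given
printf '%s\n%s\n' "$SMB_PASS" "$SMB_PASS" | smbpasswd -s -a retreat
```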
Conclusion
In the field, the stack worked pretty well. The only hiccup we experienced was that (because reasons) the ZFS pool did not mount on boot, and when it did mount, it was not mounting under /mnt/tank. Thankfully we were able to address that and move on.
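For anyone hitting the same thing: a common fix on ZFS-on-Linux is to make sure the pool is recorded in the cachefile and that the import/mount units are enabled, roughly like this (the pool name and mountpoint assume the tank/mnt-tank layout sketched above):

```
# Record the pool in the cachefile and enable the units that
# import and mount pools at boot (pool name is illustrative)
zpool set cachefile=/etc/zfs/zpool.cache tank
systemctl enable zfs-import-cache.service zfs-mount.service zfs.target

# And make sure the mountpoint is what we expect
zfs set mountpoint=/mnt/tank tank
```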
One future improvement to the playbook would be to script the creation of the ZFS pool and the datasets. Luckily, we had noted our zpool and zfs invocations in README.md, so creating them on the physical host was pretty trivial.
It was nice to have a file share containing this year's files. It was also a place for us to dump files that we needed at the last minute. Additionally, we had a copy of files from previous years, which was nice to have when we needed to pull something from the past.
Having an ingest target was very helpful: we were able to dump over 10 GB of photos and videos and hand off a viewer to someone for curation while we did other things, like making minor adjustments to photos and actually printing them.
Should I get invited back to assist with this retreat again, I would have this as a part of my broader tech stack.