Unable to play audio on GNOME and Firefox

A while ago I was unable to play audio on GNOME and Firefox running on Arch Linux. When I tried to play a video on YouTube, the video would simply pause. The immediate fix was to toggle my audio device from the onboard audio interface to an external audio interface and then back to the onboard interface. Another temporary workaround was to toggle the device’s profile via pavucontrol.

This affected both my desktop and, more recently, my T460s laptop. The issue popping up on my laptop was especially problematic, as I was using the laptop as a playback device for live sound. Luckily, I was able to set the playback interface in Audacity and get through the event.

At some point, something in the stack decided to go all in on PipeWire and I missed the memo. Per the ArchWiki, it turned out that I needed to install pipewire-pulse. What slowed things down was that I had to remove pulseaudio-bluetooth first, which did not make sense to me at first since my desktop does not have Bluetooth capabilities.

TL;DR

If your Arch Linux stack has gone all in on PipeWire and media playback is not working as expected, there is a good chance that pipewire-pulse needs to be installed, replacing pulseaudio. If the replacement is blocked by pulseaudio-bluetooth, remove that package first.
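
For me, that boiled down to roughly these pacman invocations: remove the blocking package first, then let pacman resolve the conflict with pulseaudio when installing pipewire-pulse.

# pacman -R pulseaudio-bluetooth
# pacman -S pipewire-pulse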

Containerizing my PHP application from the 2010s

I have a PHP application that I started writing back in 2010 and worked on until 2016. Years later, I want to containerize the application, since that is what we do now. I also want to revisit the application and make some improvements. When I first started the project, the development landscape, including the tooling, was a lot different.

Running a reverse proxy to serve services on the Internet

I have the occasional need to make a local/self-hosted service reachable on the world wide web. However, I do not want to host such services on my cloud VMs, for reasons such as:

  • RAM: I am currently using the lowest-priced tier of VMs, which means that I get only 1 GB of RAM
  • Storage: For a similar reason as RAM, my disk is only 25 GB
  • CPU: Having access to more than 1 core would be nice

Although the easy answer is to provision a bigger VM, I have a small Proxmox cluster that is more than capable of running VMs and (LXC) containers with access to more compute, RAM, and storage. Running services on separate instances is also great for isolation.

While services like Tailscale Funnel and Cloudflare Tunnel exist, I wanted to roll my own as a learning exercise.

Architecture and Deployment

For this to work, we need an Internet-accessible VM running Nginx and WireGuard. Clients like you and me connect to this VM, and Nginx sends requests and responses back and forth through the WireGuard tunnel.

To make a service available to/from the Internet, we:

  • set up a tunnel
  • configure the reverse proxy

Setting Up the Tunnel

For this section, we assume that we already have a WireGuard server set up.

On our fresh instance, we install WireGuard (apt install wireguard), then generate our private and public keys, placing them in /root.

# cd /root
# (umask 0077; wg genkey > wg0.key)
# wg pubkey < wg0.key > wg0.pub

(WireGuard commands from WireGuard – ArchWiki)

On the WireGuard “server”, we update the config to add a new [Peer].

# Pre-existing WireGuard config

[Peer]
PublicKey  = Contents of /root/wg0.pub
AllowedIPs = 10.0.0.58/32

Restart the service with systemctl restart wg-quick@wg0.

On our new instance, we create the WireGuard config in /etc/wireguard/wg0.conf:

[Interface]
Address    = 10.0.0.58/24
PrivateKey = Contents of /root/wg0.key

[Peer]
PublicKey  = WireGuard "server"'s public key
AllowedIPs = 10.0.0.1/32
Endpoint   = wg.example.net:51820
PersistentKeepalive = 59

We save the configuration and enable and start the service with systemctl enable --now wg-quick@wg0.

To confirm that the tunnel is working, we ping the “server” from the instance, and from the “server” we ping the instance.
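
With the addresses from the configs above, the check looks something like this (the first ping from the instance, the second from the “server”):

# ping 10.0.0.1
# ping 10.0.0.58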

Configuring the Reverse Proxy

Once Nginx is installed, we create a configuration for the site we are hosting. The following config assumes that we have an SSL certificate and the website/service on our instance is listening on port 8000.

server {
	listen 443 ssl;
	listen [::]:443 ssl;

	ssl_certificate /etc/acme.sh/project.example.net_ecc/fullchain.cer;
	ssl_certificate_key /etc/acme.sh/project.example.net_ecc/project.example.net.key;

	server_name project.example.net;

	location / {
		proxy_connect_timeout 3s;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_pass http://10.0.0.58:8000/;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		# Upgrade needs a matching Connection header for WebSockets to work
		proxy_set_header Connection $http_connection;
	}
}

Many applications’ documentation has a section on configuring Nginx as a reverse proxy. We can use that as a starting point and also pick up hints for extras like supporting WebSockets and passing along the client’s IP address via X-Forwarded-For or X-Real-IP.
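
For example, appending the client’s address to X-Forwarded-For is one more line in the location block, using Nginx’s built-in $proxy_add_x_forwarded_for variable:

	proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;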

Once the Nginx site configuration is complete, we restart Nginx and confirm that we can connect to our service through the WireGuard tunnel.
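
On a setup like this one, that amounts to roughly the following, where nginx -t sanity-checks the config before restarting:

# nginx -t
# systemctl restart nginx
# curl -I https://project.example.net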

Final Thoughts

WireGuard Configuration Management

Running a WireGuard service is pretty straightforward. Managing the configuration with Ansible helps keep most things sorted. The one challenge I have is keeping the IP address allocations straight. There may be an abstraction layer that handles this better, but I have not found it yet.

Upload Speeds

Residential Internet providers typically offer asymmetric links, such as 600 Mbps download and 20 Mbps upload. 20 Mbps of upload is not really usable when you need to send back a lot of data, such as photos. So one consideration is to look into some sort of object storage, similar to AWS S3.

Uptime

When doing this sort of thing, uptime really needs to be considered, and it is a whole can of worms: once a service becomes (serious) production, we do not want too many outages due to things such as connectivity, power, or hardware.
If we can tolerate some downtime, we could migrate the service around, or migrate it to a VM hosted in a real data center. That is one way to get ahead of foreseeable downtime. On the other hand, should an unforeseeable outage such as a hardware failure pop up, we are out of luck until it is resolved.

Onsite Stack Ansible Playbook

I have just published an Ansible playbook to deploy a stack with Docker, PhotoPrism, Samba, Sanoid, and ZFS. This stack was deployed and used in January 2025.

https://github.com/jonathanmtran/ais-onsite-stack

Background

For a retreat that took place in January 2025, I needed a server for the following functions:

  • File share to
    • store files (audio/video/slide decks/etc)
    • be an ingest target for photos and videos
  • Run a web-based photo viewer to facilitate
    • selecting photos for printing
    • curating photos for a (background) slideshow

For the requirement of a file share, the obvious choice was Samba, since most of the clients run Windows. There was also an Arch Linux client.

For a web-based photo viewer, we went with PhotoPrism. I wanted something that would simply import photos from a folder and display them. Immich may have worked as well, but here we are.

The data resides on ZFS datasets, and Sanoid is configured to snapshot the datasets on a regular basis, because things happen and we do not want to lose all of our data.
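
As a rough sketch, a minimal /etc/sanoid/sanoid.conf for this could look like the following; the dataset name tank/share and the retention numbers are illustrative, not the exact values from the playbook.

[tank/share]
	use_template = production

[template_production]
	frequently = 0
	hourly = 36
	daily = 30
	monthly = 3
	autosnap = yes
	autoprune = yes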

Development Process

Initially, the playbook was developed against an LXC container, but since we were planning to use ZFS, we switched the target host to a VM and kept iterating.

Deployment

(Photo: NucBox G3 Plus in both lush green and titanium grey)

Towards the end of the year, GMKtec’s NucBox G3 Plus went on sale for $130, so I picked that up and ordered a 32GB SODIMM, a Samsung 990 PRO 1TB NVMe drive, and a Dogfish 1TB M.2 SATA SSD.

Once all of the bits were in my hands, I swapped the RAM and the NVMe drive, and added the M.2 SATA SSD. Debian was then installed on the M.2 SATA SSD.

Now that the foundation was in place we:

  • manually enabled the Debian Backports apt repository
  • installed ZFS
  • created the ZFS pool and datasets (see the sketch after this list)
  • updated the (Ansible) inventory
  • added our SSH key to the root user
  • (and finally) executed the playbook against the NucBox G3 Plus
  • (and really finally) ran smbpasswd -a to create the user in Samba
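
For reference, the manual preparation looked roughly like the following sketch, which assumes Debian 12 (bookworm) and uses an illustrative disk path; the actual zpool and zfs invocations are recorded in README.md.

# echo "deb http://deb.debian.org/debian bookworm-backports main contrib" > /etc/apt/sources.list.d/backports.list
# apt update
# apt install linux-headers-amd64
# apt install -t bookworm-backports zfsutils-linux
# zpool create -m /mnt/tank tank /dev/disk/by-id/nvme-Samsung_SSD_990_PRO_1TB_EXAMPLE
# zfs create tank/share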

The number of steps above was 4 too many. Since we were not able to script the creation of a ZFS pool in Ansible (yet), that had to be done manually before actually executing the playbook. Additionally, smbpasswd requires us to enter a password interactively. Again, it might have been possible to script that, but we were running out of time to deploy.
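
If I were to script the Samba user creation, smbpasswd’s silent mode (-s), which reads the password from standard input, looks like the way to go. A sketch, with SMB_USER and SMB_PASSWORD as hypothetical variables supplied by the playbook:

# printf '%s\n%s\n' "$SMB_PASSWORD" "$SMB_PASSWORD" | smbpasswd -s -a "$SMB_USER"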

Conclusion

In the field, the stack worked pretty well. The only hiccup we experienced was that (because reasons) the ZFS pool did not mount on boot, and when it did mount, it was not mounting under /mnt/tank. Thankfully, we were able to address that and move on.

One future improvement to the playbook would be to script the creation of the ZFS pool and the datasets. Luckily we noted our zpool and zfs invocations in README.md so creating them on the physical host was pretty trivial.

It was nice to have a file share containing this year’s files. It was also a place for us to dump files that we needed at the last minute. Additionally, we had a copy of files from previous years, which was nice to have when we needed to dig something up from the past.

Having an ingest target was very helpful: we were able to dump over 10 GB of photos and videos and have a viewer we could hand off to someone for curation while we did other things, like making minor adjustments to photos and doing the actual printing.

Should I get invited back to assist with this retreat again, I would have this as a part of my broader tech stack.

Create a metadata XMP sidecar file using exiftool

I recently added a video from my phone to Immich, but it did not have the (approximately) correct date and time. Because I am leveraging Immich’s external library functionality and the Docker container’s mount point is read-only, Immich is unable to create the XMP sidecar itself.

The following exiftool command creates an XMP sidecar file with the desired CreateDate property. The resulting file then gets copied to the directory designated as the external library. Finally, I ran the discover job so that Immich picks up the XMP sidecar.

$ exiftool -tagsfromfile 20241231224307.m4v -xmp:CreateDate="2024:12:31 22:43:07.00-08:00" 20241231224307.m4v.xmp
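
To verify the sidecar before copying it into the external library, we can read the tag back:

$ exiftool -xmp:CreateDate 20241231224307.m4v.xmp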