Running a reverse proxy to serve services on the Internet

I have the occasional need to make a local/self-hosted service reachable on the world wide web. However, I do not want to host these services on my cloud VMs for reasons such as:

  • RAM: I am currently using the lowest-priced tier of VMs, which means that I get only 1 GB of RAM
  • Storage: For the same reason as RAM, my disk is only 25 GB
  • CPU: Having access to more than 1 core would be nice

Although the easy answer is to provision a bigger VM, I have a small Proxmox cluster that is more than capable of running VMs and (LXC) containers with access to more compute, RAM, and storage. Running services in separate instances is also great for separation.

While services like Tailscale Funnel or Cloudflare Tunnel exist, I wanted to roll my own as a learning exercise.

Architecture and Deployment

For this to work, we need an Internet-accessible VM running Nginx and WireGuard. Clients like you and me connect to this VM, and Nginx relays requests and responses back and forth through the WireGuard tunnel.

To make a service available on the Internet, we:

  • set up a tunnel
  • configure the reverse proxy

Setting Up the Tunnel

For this section, we assume that we already have WireGuard set up on our server.

On our fresh instance, we install WireGuard (apt install wireguard), then generate our private and public keys and place them in /root.

# cd /root
# (umask 0077; wg genkey > wg0.key)
# wg pubkey < wg0.key > wg0.pub

(WireGuard commands from WireGuard – ArchWiki)

On the WireGuard “server”, we update the config to add a new [Peer].

# Pre-existing WireGuard config

[Peer]
PublicKey  = Contents of /root/wg0.pub
AllowedIPs = 10.0.0.58/32

Restart the service with systemctl restart wg-quick@wg0.
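To double-check that the new peer was picked up, we can list the interface status (assuming the interface is named wg0, as above):

# wg show wg0

Once the other end is configured, the latest handshake line shown here is also a quick way to confirm that the tunnel is up.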

On our new instance, we create the WireGuard config in /etc/wireguard/wg0.conf:

[Interface]
Address    = 10.0.0.58/24
PrivateKey = Contents of /root/wg0.key

[Peer]
PublicKey  = WireGuard "server"'s public key
AllowedIPs = 10.0.0.1/32
Endpoint   = wg.example.net:51820
PersistentKeepalive = 59

We save the configuration and enable and start the service with systemctl enable --now wg-quick@wg0.

To confirm that the tunnel is working, we ping the “server” from the instance and ping the instance from the “server”.
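With the addresses used above, that is:

On the instance:
$ ping -c 3 10.0.0.1

On the “server”:
$ ping -c 3 10.0.0.58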

Configuring the Reverse Proxy

Once Nginx is installed, we create a configuration for the site we are hosting. The following config assumes that we have an SSL certificate and the website/service on our instance is listening on port 8000.

server {
	listen 443 ssl;
	listen [::]:443 ssl;

	ssl_certificate /etc/acme.sh/project.example.net_ecc/fullchain.cer;
	ssl_certificate_key /etc/acme.sh/project.example.net_ecc/project.example.net.key;

	server_name project.example.net;

	location / {
		proxy_connect_timeout 3s;
		proxy_set_header Host $host;
		proxy_set_header X-Real-IP $remote_addr;
		proxy_pass http://10.0.0.58:8000/;
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
	}
}

Many applications’ documentation has a section on configuring a reverse proxy using Nginx. We can use that as a starting point and also pick up hints for extra things like supporting WebSockets and passing along the client’s IP address via X-Forwarded-For or X-Real-IP.
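As a rough sketch (not specific to any particular application), those extras usually come down to a few more directives in the location block; note that the Connection header is needed alongside Upgrade for WebSockets:

	location / {
		# ... proxy_pass and timeouts from the config above ...

		# WebSocket support: HTTP/1.1 plus the Upgrade/Connection pair
		proxy_http_version 1.1;
		proxy_set_header Upgrade $http_upgrade;
		proxy_set_header Connection "upgrade";

		# Pass the original client address and scheme to the backend
		proxy_set_header X-Real-IP $remote_addr;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_set_header X-Forwarded-Proto $scheme;
	}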

Once the Nginx site configuration is complete, restart Nginx and confirm that we are able to connect to our service served through the WireGuard tunnel.
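One way to do that, using the hostname from the config above:

# nginx -t && systemctl restart nginx
# curl -I https://project.example.net/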

Final Thoughts

WireGuard Configuration Management

Running a WireGuard service is pretty straightforward. Managing the configuration with Ansible helps keep most things sorted. The one challenge I have is keeping the IP address allocations straight. There may be an abstraction layer to facilitate this better, but I have not found it yet.
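One low-tech option would be to keep all allocations in a single variables file that the Ansible templates are generated from; a purely hypothetical sketch (the file name and structure are mine, not from any existing role):

# group_vars/wireguard.yml (hypothetical)
wireguard_network: 10.0.0.0/24
wireguard_peers:
  - name: proxy-vm          # Internet-facing Nginx/WireGuard "server"
    address: 10.0.0.1
  - name: project-instance  # VM/LXC behind the tunnel
    address: 10.0.0.58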

Upload Speeds

Residential Internet providers typically offer asymmetric links, such as 600 Mbps download and 20 Mbps upload. 20 Mbps of upload is not really usable when you need to send back a lot of data, such as photos. So one consideration is to look into some sort of object storage similar to AWS S3.

Uptime

When doing this sort of thing, uptime becomes something that needs real consideration, and it is a whole can of worms: once a service becomes (serious) production, we do not want too many outages due to things such as connectivity, power, and hardware.
If we can tolerate some downtime, we could migrate the service around or move it to a VM hosted in a real data center. That is one way to get ahead of foreseeable downtime. On the other hand, should an unforeseeable outage such as a hardware failure pop up, we are out of luck until it is resolved.

Proxmox VE and Let’s Encrypt with DNS-01 Validation

One of the appealing things about using Proxmox VE as your hypervisor is that you can configure it to obtain a TLS certificate for HTTPS from Let’s Encrypt and renew it on a regular basis.

The Environment

At the time of writing, I am running Proxmox VE version 7.2-4. The name of the node for this article will be pve.
I have a dynamic DNS zone (e.g. acme.example.net) served by BIND so that ACME clients (acme.sh) can update it with the appropriate TXT record. A CNAME will be created in the “top-level” zone (example.net) such that querying _acme-challenge.pve.example.net will be answered by _acme-challenge.pve.acme.example.net.
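In zone-file terms, that delegation is a single record in the example.net zone, along these lines:

; in the example.net zone file
_acme-challenge.pve   IN  CNAME  _acme-challenge.pve.acme.example.net.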

Configuration

In PVE, go to Datacenter > ACME and then click Add under Accounts to register an ACME account.

The next step is to add a Challenge Plugin. On the same screen click Add under Challenge Plugins.

Plugin ID: nsupdate
Validation Delay: 30 (default)
DNS API: nsupdate (RFC 2136)
NSUPDATE_KEY=/var/lib/pve/nsupdate.key
NSUPDATE_SERVER=acme.ns.example.net

Since I am using nsupdate as the DNS API, I generate a key locally:

$ ddns-confgen -a hmac-sha256 -k pve.example.net. -q > pve.key 

Transfer the key to the PVE node, to the location specified in NSUPDATE_KEY. Below are the user/group and permissions for reference.

# ls -l /var/lib/pve/nsupdate.key
-rw-r--r-- 1 root root 128 Jun 21 19:43 /var/lib/pve/nsupdate.key
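For completeness, the BIND side also has to accept updates signed with this key. Below is a minimal sketch of what that could look like in named.conf; the key file path, zone file path, and the decision to allow only TXT updates are assumptions on my part:

// named.conf (sketch)
include "/etc/bind/keys/pve.example.net.key";   // the key generated by ddns-confgen above

zone "acme.example.net" {
    type master;
    file "/var/lib/bind/db.acme.example.net";
    update-policy {
        // allow the pve.example.net. key to update TXT records anywhere in this zone
        grant pve.example.net. zonesub TXT;
    };
};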

Now go to the node itself under Datacenter. Go to System > Certificates. Under ACME, click Add.
Select DNS as the Challenge Type, select nsupdate as the plugin, and enter the PVE host’s domain.

Since we have delegated the actual records to another DNS zone, we need to make one small change to the (PVE) node’s configuration. Under the “DNS Validation through CNAME Alias” section of the documentation:

set the alias property in the Proxmox VE node configuration file

https://pve.proxmox.com/wiki/Certificate_Management#sysadmin_certs_acme_dns_challenge

To do that, I SSH-ed into the node (as root), opened /etc/pve/local/config in nano, and added alias=pve.acme.example.net to the end of the line that has the domain (in my case, it was the line that started with acmedomain0).

# cat /etc/pve/local/config
acme: account=default
acmedomain0: pve.example.net,plugin=nsupdate,alias=pve.acme.example.net

Save (CTRL+O) and exit (CTRL+X).

Back in the web interface, in the Certificates screen (Datacenter > Your node (pve) > System > Certificates), you should be able to select the domain and click Order Certificates Now.

At this point, PVE should be able to create an _acme-challenge TXT record in the (delegated) dynamic DNS zone, Let’s Encrypt should be able to validate it, and we should be able to get a TLS certificate for HTTPS.
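If the order fails, a couple of dig queries (using the names from this setup) help narrow down whether the delegation and the dynamic update are working:

$ dig +short CNAME _acme-challenge.pve.example.net
$ dig +short TXT _acme-challenge.pve.acme.example.net @acme.ns.example.net

The first query should return _acme-challenge.pve.acme.example.net., and the second should show the challenge TXT record while an order is in progress.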