I have the occasional need to make a local/self-hosted service reachable from the public Internet. However, I do not want to host such services on my cloud VMs for reasons such as:
- RAM: I am currently using the lowest-priced tier of VMs, which means that I get only 1 GB of RAM
- Storage: For a similar reason, my disk is only 25 GB
- CPU: Having access to more than 1 core would be nice
Although the easy answer is to provision a bigger VM, I have a small Proxmox cluster that is more than capable of running VMs and (LXC) containers with access to more compute, RAM, and storage. Running each service in its own instance also gives good isolation.
While services like Tailscale Funnel or Cloudflare Tunnel exist, I wanted to roll my own as a learning exercise.
Architecture and Deployment
For this to work, we need an Internet-accessible VM running Nginx and WireGuard. Clients connect to this VM, and Nginx relays requests and responses through the WireGuard tunnel to the service running at home.
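Roughly, the path looks like this (the addresses are the placeholders used in the configs below):

Client --HTTPS--> Nginx on the VM --WireGuard tunnel--> service on the instance (10.0.0.58:8000)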
To make a service reachable from the Internet, we:
- set up a tunnel
- configure the reverse proxy
Setting Up the Tunnel
For this section, we assume that WireGuard is already set up on our Internet-facing server.
On our fresh instance, we install WireGuard (apt install wireguard) and generate our private and public keys, placing them in /root.
# cd /root
# (umask 0077; wg genkey > wg0.key)
# wg pubkey < wg0.key > wg0.pub
(WireGuard commands from WireGuard – ArchWiki)
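The value we need for the next step is just the contents of wg0.pub; printing it is enough (paths as generated above):

# cat /root/wg0.pub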
On the WireGuard “server”, we update the config to add a new [Peer].
# Pre-existing WireGuard config
[Peer]
PublicKey = Contents of /root/wg0.pub
AllowedIPs = 10.0.0.58/32
Restart the service with systemctl restart wg-quick@wg0.
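To confirm the peer was registered, wg show lists the interface’s peers and their allowed IPs (wg0 matching the config name above):

# wg show wg0

The new peer should appear with allowed ips 10.0.0.58/32.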
On our new instance, we create the WireGuard config in /etc/wireguard/wg0.conf:
[Interface]
Address = 10.0.0.58/24
PrivateKey = Contents of /root/wg0.key
[Peer]
PublicKey = WireGuard "server"'s public key
AllowedIPs = 10.0.0.1/32
Endpoint = wg.example.net:51820
PersistentKeepalive = 59
We save the configuration and enable and start the service with systemctl enable --now wg-quick@wg0. The PersistentKeepalive setting matters here: since the instance sits behind NAT, the periodic keepalives hold the mapping open so the “server” can always reach it.
To confirm that the tunnel is working, we ping the “server” from the instance, and ping the instance from the “server”.
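Concretely, using the addresses from the configs above:

On the instance: # ping -c 3 10.0.0.1
On the “server”: # ping -c 3 10.0.0.58

Running wg show on either end should also report a recent latest handshake once traffic flows.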
Configuring the Reverse Proxy
Once Nginx is installed, we create a configuration for the site we are hosting. The following config assumes that we have an SSL certificate and the website/service on our instance is listening on port 8000.
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    ssl_certificate /etc/acme.sh/project.example.net_ecc/fullchain.cer;
    ssl_certificate_key /etc/acme.sh/project.example.net_ecc/project.example.net.key;

    server_name project.example.net;

    location / {
        proxy_connect_timeout 3s;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_pass http://10.0.0.58:8000/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        # Pair Upgrade with Connection so WebSocket upgrades actually complete
        proxy_set_header Connection "upgrade";
    }
}
For many applications, the documentation has a section on configuring Nginx as a reverse proxy. We can use that as a starting point, and it often also covers extras such as WebSocket support and passing along the client’s IP address via X-Forwarded-For or X-Real-IP.
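As an illustration, these are the kinds of location-block directives such documentation tends to suggest; the directives and variables are standard Nginx, though whether a given application wants them varies:

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;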
Once the Nginx site configuration is complete, we restart Nginx and confirm that we can reach the service served through the WireGuard tunnel.
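A quick way to do both, with the URL matching the server_name above:

# nginx -t && systemctl restart nginx
# curl -I https://project.example.net/

Any sensible HTTP response from the application confirms the whole path: TLS at the edge, then the tunnel to 10.0.0.58:8000.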
Final Thoughts
WireGuard Configuration Management
Running a WireGuard service is pretty straightforward. Managing the configuration with Ansible helps keep most things sorted. The one challenge I have is keeping the IP address allocations straight. There may be an abstraction layer that handles this better, but I have not found it yet.
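One stopgap sketch (hypothetical file and variable names, not something I currently run) is to keep every allocation in a single Ansible vars file, so duplicates are at least visible in one place:

# group_vars/wireguard.yml (hypothetical)
wireguard_peers:
  project:   { address: 10.0.0.58/32 }
  next_peer: { address: 10.0.0.59/32 }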
Upload Speeds
Residential Internet providers typically offer asymmetric links, such as 600 Mbps down and 20 Mbps up. A 20 Mbps upload is not really usable when the service needs to send back a lot of data, such as photos. One consideration is to offload large files to some sort of object storage similar to AWS S3.
Uptime
When doing this sort of thing, uptime becomes something that needs real consideration, and it is a whole can of worms: once a service is (seriously) in production, we do not want too many outages due to things such as connectivity, power, or hardware.
If we can tolerate some downtime, we could migrate the service around, or move it to a VM hosted in a real data center; that is one way to get ahead of foreseeable downtime. On the other hand, should an unforeseeable outage pop up, such as a hardware failure, we are out of luck until it is resolved.