VPS Remote Access Bastion using NixOS and WireGuard

#wireguard #nixos

2025-07-19

Introduction

I recently migrated to the Exetel One Plan. 500Mbps was hard to turn down, coming in at both a higher speed and a lower cost than my previous plan. There was one central issue though: CG-NAT.

Exetel's old plans allowed customers to opt out of CG-NAT, which was key to my remote access setup: I forwarded port 51820 on my router to my home server and very occasionally updated the A record at my DNS provider. That's no longer possible, so I need a new solution for remote access.

I originally chose a WireGuard-based solution for its small attack surface, ease of setup, and excellent performance; I stand by that decision. But I need a new method for reaching my home server, and fast; I just got a new phone and want to hit the ground running with device data backups via Syncthing. So what do we do?

Approaches

There are a few options I've been playing with:

  1. VPS Bastion (Forwarding): A WireGuard-enabled VPS that forwards traffic to my home network.
  2. VPS Bastion (Hole-Punching Broker): A WireGuard peer that facilitates NAT traversal and endpoint discovery, with a peer-to-peer connection to the home server.
  3. IPv6 Direct Access: WireGuard over native IPv6 to my home server (requires an IPv6-capable router and network).

Option 1 is the simplest and quickest to implement — just a new VPS and some config changes.

Option 2 is more complex, but potentially the highest performance. I imagine a setup where a Caddy server (on the VPS) sits behind an [[Authelia]] instance. After authentication, Caddy issues redirects to my home server (once a UDP hole-punched connection is established) via WireGuard. This would allow low-latency, direct access with minimal resource usage. I found a good reference for the hole-punching setup in 1.

Option 3 is possible, but not without additional hardware and networking reconfiguration, so it’s out for now.

For the time being, I'll go with Option 1 and migrate to Option 2 at a later date, as that gets us up and running sooner.

But wait. Why not just use something like Tailscale? Fair question; I absolutely could. Tailscale would be simple and fast to set up. But to me, that approach doesn't seem ultimately scalable or sovereign. I want a solution that is 100% independent, self-hosted, and reproducible, one I control end-to-end. My goal is to eventually have fully automated provisioning using NixOS configs, where everything from tunnel setup to access control is declarative and transparent.

So while Tailscale is technically viable, it doesn't align with the kind of system I want to build. Option 1 it is.

Bootstrapping a NixOS VPS

To kick off the remote access process, we need to set up a VPS running NixOS. I'm already running a few services on DigitalOcean, so the plan is to set this up there and eventually consolidate all my services onto a single droplet to save costs.

DigitalOcean doesn't offer a NixOS image. In fact, very few VPS providers do. There is, however, a community-developed solution to this problem: [nixos-anywhere]. This is a tool that allows for unattended setup and installation of NixOS via SSH on any Linux-based machine. It is nothing short of excellent.

I followed this nix.dev guide for setting up my machine. I opted for a droplet with 2GB of RAM and 1 vCPU, the cheapest I could get; the nixos-anywhere docs require a machine with at least 2GB of RAM. I additionally enabled IPv6.

I worked through the steps in the guide and managed to provision my machine. I was, however, unable to SSH into it. It turned out that DigitalOcean droplets require their networking configuration to be set up manually.
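To make "manual networking setup" concrete, the droplet needs a static networking.nix roughly like the sketch below; the interface name and addresses here are placeholders for illustration, not my droplet's real values:

```nix
{ lib, ... }:
{
  networking = {
    nameservers = [ "8.8.8.8" ];
    defaultGateway = "203.0.113.1";                     # placeholder gateway
    dhcpcd.enable = false;                              # static addressing, no DHCP
    usePredictableInterfaceNames = lib.mkForce false;
    interfaces.eth0.ipv4.addresses = [
      { address = "203.0.113.10"; prefixLength = 24; }  # placeholder address
    ];
  };
}
```

Without a file like this, the freshly installed system boots with no configured interfaces, which is exactly why SSH failed.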

This proved a little troubling, but here's how I got around it. While searching for options to provision NixOS on a VPS, I found this article, which linked a repo called [nixos-infect][nixos-infect]. It was similar in concept to nixos-anywhere, but far less reliable. I tried installing with it a few times, but each time I ended up with a broken NixOS installation; I couldn't `nixos-rebuild switch` to update my configs. Interestingly, though, networking worked: the nixos-infect script spat out a little networking.nix configuration file. I stored that fact away while I moved on to working with nixos-anywhere.

So when networking didn't work, I had a little light-bulb moment: why not use nixos-infect's generation script to produce the networking.nix config and import that into our nixos-anywhere installation? Magic.

I pulled the networking config generation out of the project and into a bash script:

```bash
makeNetworkingConf() {
  # XXX It'd be better if we used procfs for all this...
  local IFS=$'\n'

  eth0_name=$(ip address show | grep '^2:' | awk -F': ' '{print $2}')
  eth0_ip4s=$(ip address show dev "$eth0_name" | grep 'inet ' \
    | sed -r 's|.*inet ([0-9.]+)/([0-9]+).*|{ address="\1"; prefixLength=\2; }|')
  eth0_ip6s=$(ip address show dev "$eth0_name" | grep 'inet6 ' \
    | sed -r 's|.*inet6 ([0-9a-f:]+)/([0-9]+).*|{ address="\1"; prefixLength=\2; }|' || '')
  gateway=$(ip route show dev "$eth0_name" | grep default \
    | sed -r 's|default via ([0-9.]+).*|\1|')
  gateway6=$(ip -6 route show dev "$eth0_name" | grep default \
    | sed -r 's|default via ([0-9a-f:]+).*|\1|' || true)
  ether0=$(ip address show dev "$eth0_name" | grep link/ether \
    | sed -r 's|.*link/ether ([0-9a-f:]+) .*|\1|')

  eth1_name=$(ip address show | grep '^3:' | awk -F': ' '{print $2}') || true
  if [ -n "$eth1_name" ]; then
    eth1_ip4s=$(ip address show dev "$eth1_name" | grep 'inet ' \
      | sed -r 's|.*inet ([0-9.]+)/([0-9]+).*|{ address="\1"; prefixLength=\2; }|')
    eth1_ip6s=$(ip address show dev "$eth1_name" | grep 'inet6 ' \
      | sed -r 's|.*inet6 ([0-9a-f:]+)/([0-9]+).*|{ address="\1"; prefixLength=\2; }|' || '')
    ether1=$(ip address show dev "$eth1_name" | grep link/ether \
      | sed -r 's|.*link/ether ([0-9a-f:]+) .*|\1|')
    interfaces1=$(cat << EOF
    $eth1_name = {
      ipv4.addresses = [$(for a in "${eth1_ip4s[@]}"; do echo -n " $a"; done) ];
      ipv6.addresses = [$(for a in "${eth1_ip6s[@]}"; do echo -n " $a"; done) ];
    };
EOF
)
    extraRules1="ATTR{address}==\"${ether1}\", NAME=\"${eth1_name}\""
  else
    interfaces1=""
    extraRules1=""
  fi

  readarray nameservers < <(grep ^nameserver /etc/resolv.conf | sed -r \
    -e 's/^nameserver[[:space:]]+([0-9.a-fA-F:]+).*/"\1"/' \
    -e 's/127[0-9.]+/8.8.8.8/' \
    -e 's/::1/8.8.8.8/')

  if [[ "$eth0_name" = eth* ]]; then
    predictable_inames="usePredictableInterfaceNames = lib.mkForce false;"
  else
    predictable_inames="usePredictableInterfaceNames = lib.mkForce true;"
  fi

  cat > networking.nix << EOF
{ lib, ... }: {
  # This file was populated at runtime with the networking
  # details gathered from the active system.
  networking = {
    nameservers = [ ${nameservers[@]} ];
    defaultGateway = "${gateway}";
    defaultGateway6 = {
      address = "${gateway6}";
      interface = "${eth0_name}";
    };
    dhcpcd.enable = false;
    $predictable_inames
    interfaces = {
      $eth0_name = {
        ipv4.addresses = [$(for a in "${eth0_ip4s[@]}"; do echo -n " $a"; done) ];
        ipv6.addresses = [$(for a in "${eth0_ip6s[@]}"; do echo -n " $a"; done) ];
        ipv4.routes = [ { address = "${gateway}"; prefixLength = 32; } ];
        ipv6.routes = [ { address = "${gateway6}"; prefixLength = 128; } ];
      };
      $interfaces1
    };
  };
  services.udev.extraRules = ''
    ATTR{address}=="${ether0}", NAME="${eth0_name}"
    $extraRules1
  '';
}
EOF
}

makeNetworkingConf
```
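The heavy lifting in the script is just sed rewriting `ip address` output into Nix attribute sets. To make the transformation concrete, here is one of its rules applied to a sample line (the address is a documentation placeholder, not a real droplet IP):

```bash
# Turn an "inet A.B.C.D/NN ..." line into a Nix attrset, as the script does:
echo 'inet 203.0.113.10/24 brd 203.0.113.255 scope global eth0' \
  | sed -r 's|.*inet ([0-9.]+)/([0-9]+).*|{ address="\1"; prefixLength=\2; }|'
# → { address="203.0.113.10"; prefixLength=24; }
```

The generated attrsets are then spliced into the networking.nix template via the heredoc.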

I created a new Ubuntu droplet, copied the script across, and ran it. Then I imported the generated networking.nix file into my nixos-anywhere config:

```nix
{ modulesPath, ... }:
let
  diskDevice = "/dev/vda";
  sources = import ./npins;
in
{
  imports = [
    (modulesPath + "/profiles/qemu-guest.nix")
    (sources.disko + "/module.nix")
    ./single-disk-layout.nix
    ./networking.nix
  ];

  disko.devices.disk.main.device = diskDevice;

  boot.loader.grub = {
    devices = [ diskDevice ];
    efiSupport = true;
    efiInstallAsRemovable = true;
  };

  services.openssh.enable = true;
  users.users.root.openssh.authorizedKeys.keys = [ ... ];

  system.stateVersion = "24.11";
}
```

I re-ran the installation process:

```bash
toplevel=$(nixos-rebuild build --no-flake)
diskoScript=$(nix-build -E "((import <nixpkgs> {}).nixos [ ./configuration.nix ]).diskoScript")
nixos-anywhere --store-paths "$diskoScript" "$toplevel" root@my-droplet-ip
```

And that was it. SSH access worked. We had successfully bootstrapped NixOS onto our VPS. On to the WireGuard setup.

WireGuard Bastion Setup

The WireGuard bastion setup proved a little challenging due to my lack of deep understanding of the application; I had only used it for point-to-site connections in the past. I found this [how-to][wireguard-sharing-bastion], and it helped me get everything up and running. It didn't do a great job of explaining the why, though, so my plan is to do that here.

First, the goal: two machines (remotes) communicating via a third machine (the bastion). Each machine runs an instance of WireGuard. There are two styles of WireGuard configuration in this setup: the remote configs and the bastion config. The remote configs are nearly identical, differing only in their keys.
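Before any of the configs will work, each machine needs its own keypair: the private key stays put, and the corresponding public key gets pasted into the other side's peer list. A minimal sketch, assuming wireguard-tools is installed and the commands are run as root:

```bash
umask 077                                # private keys must not be world-readable
wg genkey > /etc/wireguard/private.key   # stays on this machine (privateKeyFile)
wg pubkey < /etc/wireguard/private.key   # prints the publicKey to give to peers
```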

The bastion config is shown below (written in Nix):

```nix
{ config, ... }:
{
  networking = {
    wireguard.enable = true;
    wireguard.interfaces = {
      wg0 = {
        ips = [ "10.0.0.1/24" ];
        listenPort = 51820;
        privateKeyFile = "/etc/wireguard/private.key";
        peers = [
          {
            # Pixel 8a
            publicKey = "...";
            allowedIPs = [ "10.0.0.2/32" ];
          }
          {
            # Myshkin
            publicKey = "...";
            allowedIPs = [ "10.0.0.3/32" ];
          }
        ];
      };
    };
    firewall.allowedUDPPorts = [ 51820 ];
  };
}
```

Note how this is set up: we have a list of peers, each assigned a fully-qualified IP on the subnet (10.0.0.x/32). Note also the IP assigned to our interface: 10.0.0.1/24. In practice this means that our bastion can receive traffic from any of the defined clients.

Now, look at a remote style config:

```nix
{ config, ... }:
{
  networking = {
    wireguard.enable = true;
    wireguard.interfaces = {
      wg0 = {
        # Determines the IP address and subnet of this end of the tunnel interface.
        ips = [ "10.0.0.3/24" ];
        listenPort = 51820;
        privateKeyFile = config.sops.secrets."wireguard/myshkin/private-key".path;
        peers = [
          {
            # Shatov Jump Host
            endpoint = "ip-and-port-of-our-bastion";
            publicKey = "...";
            allowedIPs = [ "10.0.0.0/24" ];
            persistentKeepalive = 25;
          }
        ];
      };
    };
    firewall.allowedUDPPorts = [ 51820 ];
  };
}
```

Note in particular the allowedIPs field. It is set to 10.0.0.0/24, meaning this peer will accept any traffic on the 10.0.0.0/24 subnet, including traffic from the other remote devices it has no direct knowledge of. The remotes aren't aware of each other, but they will accept traffic that arrives via the bastion. Note also that I've set a persistentKeepalive so that my home server stays connected to the bastion even from behind CG-NAT.
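The phone can't run the Nix config, of course; on the Pixel, the same remote-style settings go into a standard WireGuard app profile. The keys and endpoint below are placeholders:

```ini
[Interface]
# Pixel 8a — 10.0.0.2 on the tunnel subnet
Address = 10.0.0.2/24
PrivateKey = <pixel-private-key>

[Peer]
# the bastion
PublicKey = <bastion-public-key>
Endpoint = <bastion-ip>:51820
AllowedIPs = 10.0.0.0/24      # route the whole tunnel subnet via the bastion
PersistentKeepalive = 25
```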

All that's left now is to explicitly enable IPv4 forwarding on the VPS, so the bastion can route packets between its WireGuard peers. This can be done by adding the following to your NixOS config:

```nix
boot.kernel.sysctl = {
  "net.ipv4.conf.all.forwarding" = true;
};
```
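After a rebuild, it's worth confirming the kernel actually picked the setting up; this quick check isn't NixOS-specific:

```bash
# Should print 1 on the bastion once forwarding is enabled:
cat /proc/sys/net/ipv4/conf/all/forwarding
```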

And that's it. With a config like this, you have a simple and secure method for accessing your home server behind CG-NAT from any remote device. Now I just need to set up Syncthing.