Tailscale Changed How I Think About Networking
For a long time, getting two machines on different networks to talk to each other meant picking your pain. Static routes. Port forwarding rules. VPN server maintenance. Certificate rotation. It worked, but the surface area for misconfiguration was large enough that I spent more time managing the network than using it.
Tailscale eliminated that entirely. Not by simplifying the configuration, but by changing the model.
How it actually works
Tailscale builds a private overlay network using WireGuard as the encryption layer. Every device gets a WireGuard keypair. Private keys never leave the originating device. That's not a marketing claim; it's a structural property of how the protocol works. The Tailscale coordination server acts as a shared public key directory: devices register their public keys, and the coordination server tells peers how to find each other. After that exchange, traffic flows directly between devices, peer-to-peer, encrypted. The coordination server never touches your data.
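The shape of that model is easy to miss in prose, so here is a deliberately minimal Python sketch of it. This is not Tailscale's actual protocol or API, just the structural property described above: keypairs are generated locally, only public keys and endpoints are uploaded, and the directory hands peers enough information to dial each other directly. The class and field names are my own invention.

```python
# Toy sketch of the coordination-server-as-key-directory model.
# NOT Tailscale's real protocol: keys here are random hex stand-ins,
# not Curve25519 keypairs, and no packets are actually sent.
import secrets


class Device:
    def __init__(self, name: str):
        self.name = name
        # Stand-in for a WireGuard private key; it never leaves this object.
        self._private_key = secrets.token_hex(32)
        self.public_key = "pub:" + secrets.token_hex(32)


class CoordinationServer:
    """Shared directory mapping device names to (public key, endpoint)."""

    def __init__(self):
        self.directory: dict[str, tuple[str, str]] = {}

    def register(self, device: Device, endpoint: str):
        # Only the PUBLIC key is uploaded; the server never sees private keys.
        self.directory[device.name] = (device.public_key, endpoint)

    def peers_for(self, device: Device) -> dict[str, tuple[str, str]]:
        # Tell a device how to reach everyone else. After this exchange,
        # traffic flows peer-to-peer; it never transits this server.
        return {name: info for name, info in self.directory.items()
                if name != device.name}


server = CoordinationServer()
laptop = Device("laptop")
vps = Device("vps")
server.register(laptop, "203.0.113.7:41641")   # example endpoints
server.register(vps, "198.51.100.9:41641")

peers = server.peers_for(laptop)
print(peers["vps"])  # the vps's public key and endpoint: enough to dial it
```

The point of the sketch is the data flow: the server is a bulletin board, not a middlebox.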
The NAT traversal is where most of the engineering lives. WireGuard on its own doesn't punch through NAT well. Tailscale handles this with a combination of STUN-style direct connection attempts and DERP relay servers as fallback. In practice, most connections end up direct. When they can't, DERP relays the traffic without decrypting it.
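The fallback logic can be sketched in a few lines. This simulates the decision, not the networking: real NAT traversal involves STUN probes and hole punching, and the reachability set here is just a stand-in for whether those succeeded. (On a real tailnet, `tailscale ping <host>` will tell you whether a given path ended up direct or via a DERP relay.)

```python
# Toy sketch of "direct first, DERP as fallback".
# direct_reachable stands in for the outcome of STUN-style NAT traversal.
def connect(peer: str, direct_reachable: set[str]) -> str:
    if peer in direct_reachable:
        # Hole punching worked: a peer-to-peer WireGuard tunnel.
        return f"direct:{peer}"
    # Traversal failed: fall back to a DERP relay, which forwards
    # the still-encrypted packets without being able to read them.
    return f"derp-relay:{peer}"


# Suppose traversal succeeded for the VPS but not for a phone behind CGNAT.
reachable = {"vps"}
assert connect("vps", reachable) == "direct:vps"
assert connect("phone", reachable) == "derp-relay:phone"
```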
This is what zero-trust actually means in this context: every node is authenticated individually via its keypair, no node is trusted by virtue of network position alone, and access is enforced per-connection rather than per-subnet.
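In Tailscale, that per-connection enforcement lives in the ACL policy file (hujson). A minimal sketch of what "no node is trusted by network position" looks like in practice, with hypothetical tags and ports:

```jsonc
{
  // Hypothetical policy: only the laptop may reach the Proxmox web UI
  // and the Kubernetes API. Nothing is reachable merely by being
  // "on the network" -- every allowed path is listed explicitly.
  "acls": [
    {"action": "accept", "src": ["tag:laptop"], "dst": ["tag:proxmox:8006"]},
    {"action": "accept", "src": ["tag:laptop"], "dst": ["tag:k8s:6443"]}
  ]
}
```

A real policy also needs `tagOwners` entries to define who may apply those tags; this fragment shows only the access rules.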
Why traditional VPNs fail here
Most self-hosted VPN setups follow the same pattern. One machine is the VPN server. All traffic routes through it. Every new device needs the server's certificate, a config file, a firewall rule. When the server goes down, the whole overlay goes down. When you want two nodes to talk directly, you're working against the architecture.
For a homelab across multiple physical locations (home network, a cloud VPS, a friend's spare machine, a laptop on a cellular connection), hub-and-spoke routing adds latency for every hop and creates a single point of failure that defeats the purpose of having distributed infrastructure.
Tailscale's mesh model means every node is both a peer and a router. There is no central traffic choke point.
What it actually enables
Every node I run (Proxmox host, Kubernetes workers, Docker machines, a VPS, my laptop) has a Tailscale IP in the 100.64.0.0/10 CGNAT range and behaves as if it's on the same LAN regardless of where it physically is. Services can reference each other by stable Tailscale IPs without DNS gymnastics or hardcoded public IPs.
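Those 100.x addresses come from the carrier-grade NAT block (100.64.0.0/10, RFC 6598), which Tailscale uses precisely because it won't collide with private LAN ranges. A quick check with Python's standard library:

```python
# Tailscale assigns node addresses from 100.64.0.0/10 (the CGNAT range),
# chosen because it overlaps neither public space nor RFC 1918 LANs.
import ipaddress

TAILNET_RANGE = ipaddress.ip_network("100.64.0.0/10")


def is_tailscale_ip(addr: str) -> bool:
    return ipaddress.ip_address(addr) in TAILNET_RANGE


assert is_tailscale_ip("100.101.102.103")   # inside 100.64.0.0/10
assert not is_tailscale_ip("192.168.1.10")  # ordinary LAN address
assert not is_tailscale_ip("100.10.0.1")    # 100.x, but below .64: public space
```

This is why a node's Tailscale IP is safe to hardcode into service configs: it's stable, globally unambiguous within the tailnet, and can't clash with whatever LAN the machine happens to be on.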
The concrete example: before Tailscale, accessing my Proxmox web UI from outside my home network required either punching a hole in the firewall with a port forward or running a full VPN session. Both had tradeoffs around exposure and friction. Now I just hit the Tailscale IP, and the connection is direct, encrypted, and authenticated. No firewall rule. No exposed port. The UI is not reachable from the public internet at all.
The same pattern applies to every internal service: Kubernetes API server, container registries, monitoring dashboards, the Ollama inference API. All reachable over Tailscale, none exposed publicly.
The abstraction lesson
Tailscale doesn't make networking disappear. It makes the right parts invisible. The WireGuard encryption and key management are handled and correct. The NAT traversal is handled and mostly direct. What remains visible is the part that matters: which nodes are in my network and what they can reach.
This is the distinction that matters with networking abstractions. A VPN that routes all traffic through a central server hides the wrong things. It makes the topology look flat while adding a hidden dependency and a latency cost. Tailscale hides the key exchange and NAT traversal while keeping the topology explicit and observable.
Once I understood that, I stopped thinking about "is this behind a VPN" and started thinking about "what Tailscale IPs can reach what." That mental model is cleaner and maps better to how the infrastructure actually behaves.
Sources
- Tailscale, "How Tailscale Works." https://tailscale.com/blog/how-tailscale-works
- Tailscale, "NAT Traversal." https://tailscale.com/blog/how-nat-traversal-works
- WireGuard, Official documentation. https://www.wireguard.com
- Tailscale, "What are DERP servers?" https://tailscale.com/kb/1232/derp-servers