Lesson 12
Talking to Legacy Apps
Everything so far has assumed the application speaks FIPS natively: call open_session(npub), push datagrams. But almost nothing outside the FIPS codebase does that. SSH wants an IP address. curl wants a hostname that resolves to one. scp, browsers, every random daemon already deployed: all of them want BSD sockets over IPv6.
FIPS bridges that gap in two pieces. The IPv6 adapter runs inside the daemon and makes the
local node look like an IPv6 endpoint to every application on the host. The fips-gateway
sidecar extends the same trick to LAN hosts that do not run FIPS at all.
The adapter: a TUN with a hash problem
An FIPS IPv6 address is just 0xfd prepended to the first 15 bytes of
the 16-byte node_addr, which is itself a SHA-256 truncation of the
public key. The result lives in fd00::/8, which is ULA space (RFC
4193) so it cannot collide with real IPv6 traffic.
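That derivation fits in a few lines. A sketch in Python, assuming the byte layout described above (node_addr is the first 16 bytes of SHA-256 of the public key; the address is 0xfd followed by the first 15 bytes of node_addr), using only the standard library:

```python
import hashlib
import ipaddress

def fips_ipv6(pubkey: bytes) -> ipaddress.IPv6Address:
    # node_addr: SHA-256 of the public key, truncated to 16 bytes.
    node_addr = hashlib.sha256(pubkey).digest()[:16]
    # Address: 0xfd + first 15 bytes of node_addr -> always inside fd00::/8 (ULA).
    return ipaddress.IPv6Address(bytes([0xFD]) + node_addr[:15])

addr = fips_ipv6(b"\x02" * 33)
assert addr.packed[0] == 0xFD
```

Note that every step here is one-way: given only the fd00:: address there is no way back to the public key, which is exactly the hash problem this section is about.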
The adapter creates a TUN device called fips0, assigns this
address to it, and asks the kernel to route
fd00::/8 through the TUN. When an app opens a socket to an fd00:: peer, the packet lands in the adapter's reader thread.
Now comes the awkward part. The adapter has a destination IPv6 address from the kernel, but the address is a hash. There is no way back to the public key. Without the public key there is no node_addr to route on, and nothing can happen.
DNS as the side door
The fix is to populate a cache before the packet arrives. The FIPS daemon runs a DNS
resolver that recognizes
npub1...xxx.fips names. When an application resolves such a name, the
resolver:
- Extracts the bech32 npub from the name.
- Derives the fd00::/8 address.
- Writes the triple (fd00:: address, node_addr, pubkey) into the identity cache.
- Returns the address to the app.
By the time the kernel hands the packet to fips0, the cache already knows who to route to. If an app bypasses DNS (a hard-coded address, or an entry cached somewhere outside this daemon), the adapter returns ICMPv6 Destination Unreachable. There is no way to recover the pubkey locally.
The cache is LRU only: default cap 10,000 entries, no TTL. The mapping is deterministic (it is a hash; it cannot go stale), so eviction is purely about memory. The DNS TTL is separate; its default is 300 s and governs when apps re-query.
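The resolver's cache priming and the adapter's miss behavior can be sketched together. A minimal sketch (class and method names are mine, not the daemon's; OrderedDict supplies the LRU order):

```python
import hashlib
from collections import OrderedDict

class IdentityCache:
    """LRU map from fd00:: address to (node_addr, pubkey).
    No TTL: the mapping is a hash of the pubkey, so it cannot go stale;
    eviction exists purely to bound memory."""

    def __init__(self, cap: int = 10_000):
        self.cap = cap
        self.entries: OrderedDict[bytes, tuple[bytes, bytes]] = OrderedDict()

    def prime(self, pubkey: bytes) -> bytes:
        # Called by the DNS resolver on an npub1...fips query.
        node_addr = hashlib.sha256(pubkey).digest()[:16]
        addr = bytes([0xFD]) + node_addr[:15]
        self.entries[addr] = (node_addr, pubkey)
        self.entries.move_to_end(addr)            # mark most recently used
        if len(self.entries) > self.cap:
            self.entries.popitem(last=False)      # evict least recently used
        return addr

    def lookup(self, addr: bytes):
        # Called on the adapter's TUN read path. A miss means the app
        # bypassed DNS: the adapter answers ICMPv6 Destination Unreachable,
        # because the pubkey cannot be recovered from the address alone.
        entry = self.entries.get(addr)
        if entry is not None:
            self.entries.move_to_end(addr)
        return entry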
MTU in a world that cannot fragment
FIPS never fragments at the mesh or session layer. Every datagram has to fit in one transport packet, end to end. The adapter enforces this at the IPv6 boundary in three places:
- Effective IPv6 MTU. The TUN advertises transport_mtu − 77 to the kernel, so most applications pick a sensible size up front.
- TCP MSS clamping. The adapter rewrites the MSS option on every SYN and SYN-ACK it sees, so TCP connections negotiate a segment size that already fits. Both directions are clamped, which avoids the initial oversized-packet loss that would happen with ICMP alone.
- ICMPv6 Packet Too Big. For any packet the app still sends too large, the adapter answers with an ICMPv6 Packet Too Big back through the TUN. That feeds standard kernel PMTUD without any network traversal. Rate-limited to one per source per 100 ms.
Transports below 1357 B (LoRa at 256, serial, some BLE profiles) cannot carry IPv6 at all, because IPv6 mandates a 1280-byte minimum link MTU. On those transports, apps have to use the native FIPS datagram API, or rely on a transport driver that does its own internal reassembly.
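The three numbers the adapter juggles are pure arithmetic. A sketch, assuming the 77 B FIPS overhead figure above and the standard 40 B IPv6 + 20 B TCP headers for the MSS clamp:

```python
FIPS_OVERHEAD = 77    # bytes of FIPS framing below the IPv6 layer
IPV6_MIN_MTU = 1280   # RFC 8200 minimum link MTU

def mtu_pipeline(transport_mtu: int):
    effective = transport_mtu - FIPS_OVERHEAD      # what fips0 advertises
    viable = effective >= IPV6_MIN_MTU             # IPv6 needs >= 1280 B
    mss = effective - 40 - 20 if viable else None  # clamped into SYN / SYN-ACK
    return effective, mss, viable

assert mtu_pipeline(1472) == (1395, 1335, True)   # UDP over Ethernet
assert mtu_pipeline(1357) == (1280, 1220, True)   # exactly the IPv6 minimum
assert mtu_pipeline(256) == (179, None, False)    # LoRa: no IPv6 at all
```

The 1357 B threshold in the prose falls straight out of the first line: 1357 − 77 = 1280, the smallest MTU IPv6 tolerates.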
Try it: see the MTU pipeline
Move the transport MTU slider and watch the effective IPv6 MTU, the clamped TCP MSS, and whether a sample packet fits. The presets cover the realistic deployment points: UDP over Ethernet, the IPv6 minimum, and a radio link that cannot do IPv6 at all.
At the 1472 B preset (UDP over Ethernet), the pipeline reads:
- Effective IPv6 MTU: 1395 B (what fips0 advertises to the kernel).
- Clamped TCP MSS: 1335 B (rewritten into SYN and SYN-ACK by the adapter).
- IPv6 viable? Yes (needs transport MTU ≥ 1357 B).
The gateway: mesh access without FIPS software
The adapter solves "how does SSH on this host reach a mesh peer". The gateway solves "how does SSH on a laptop that does not run FIPS reach the same peer through a home server that does".
fips-gateway is a separate binary that runs alongside the daemon. It
listens for DNS queries from the LAN, forwards .fips names to the daemon's
resolver, allocates a virtual IP from a configured pool (typically
fd01::/112), and installs nftables rules to NAT traffic between
virtual IPs and real mesh addresses.
1. client: dig ssh-box.fips → gateway DNS (port 53)
2. gateway → daemon resolver (localhost:5354)
3. daemon: resolves to fd00::beef, primes its identity cache
4. gateway: allocates fd01::5, installs DNAT/SNAT rules
5. client: ssh fd01::5 (the allocated virtual IP)
6. kernel DNAT: ip6 daddr fd01::5 → fd00::beef
7. kernel masquerade: ip6 saddr client_ip → gateway's fd00::cafe
8. packet flows out fips0 into the mesh
9. reply: kernel SNAT reverses on the way back
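Steps 4, 6, and 7 boil down to two nftables rules per allocation: a DNAT rule for the virtual IP, and a shared masquerade rule on postrouting. A sketch that builds the nft commands (the table and chain names here are illustrative, not fips-gateway's actual ones):

```python
def gateway_rules(virtual_ip: str, mesh_addr: str) -> list[str]:
    # One DNAT rule per allocated virtual IP, plus a masquerade rule on
    # postrouting that rewrites every LAN source to the gateway's own
    # fd00:: address as traffic leaves fips0.
    return [
        f"nft add rule ip6 fipsgw prerouting ip6 daddr {virtual_ip} dnat to {mesh_addr}",
        "nft add rule ip6 fipsgw postrouting oifname fips0 masquerade",
    ]

rules = gateway_rules("fd01::5", "fd00::beef")
```

Masquerade (rather than an explicit snat) lets the kernel pick the outgoing interface's address automatically, which is the gateway's fd00:: address on fips0.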
The masquerade rule on the postrouting chain is load-bearing. Without it, the LAN client's
source address (fd01::5, meaningful only at this gateway) would go
out on the wire; mesh peers would have no way to route replies back. Every mesh packet
originating at the gateway carries
the gateway's fd00 address, not the client's virtual IP.
A consequence worth naming: every LAN host sharing a gateway shares the gateway's FIPS identity to the rest of the mesh. Peers cannot tell one LAN client from another. That is privacy-preserving for the clients, but it also means the gateway's reputation covers all of them. The gateway is the correct trust boundary to reason about, not the individual laptop.
Pool lifecycle
Each virtual IP moves through four states. The short version: DNS query allocates, conntrack promotes to active, TTL elapses into draining, and the IP is reclaimed only once sessions clear and a grace period (default 60s) has passed. Pool exhaustion returns SERVFAIL rather than evicting live mappings. Correctness for ongoing sessions always beats new allocations.
ALLOCATED ──→ ACTIVE ──→ DRAINING ──→ FREE
    │                                  ↑
    └──────────────────────────────────┘
         (TTL expired, no sessions ever)
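The four states and their transitions can be written down as a small state machine. A sketch (class and method names are illustrative; the real gateway drives these from DNS queries, conntrack, and timers):

```python
from enum import Enum, auto

class State(Enum):
    FREE = auto()
    ALLOCATED = auto()
    ACTIVE = auto()
    DRAINING = auto()

class VirtualIP:
    """One fd01::/112 pool entry. Timings (DNS TTL, 60 s grace) live
    outside this sketch; only the transitions are modeled."""

    def __init__(self):
        self.state = State.FREE

    def dns_query(self):
        # Allocation happens at resolve time. Pool exhaustion is handled
        # elsewhere: the gateway returns SERVFAIL rather than evicting.
        if self.state is State.FREE:
            self.state = State.ALLOCATED

    def conntrack_seen(self):
        # First tracked session promotes the mapping to active.
        if self.state is State.ALLOCATED:
            self.state = State.ACTIVE

    def ttl_expired(self, had_sessions: bool):
        if self.state is State.ALLOCATED and not had_sessions:
            self.state = State.FREE       # never used: reclaim immediately
        elif self.state is State.ACTIVE:
            self.state = State.DRAINING   # live sessions keep draining

    def sessions_cleared_and_grace_elapsed(self):
        if self.state is State.DRAINING:
            self.state = State.FREE
```

The back edge in the diagram is the `had_sessions=False` branch: an IP that was resolved but never used skips DRAINING entirely.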
That is the whole path from an unmodified ssh
invocation on a LAN laptop to an FSP session on the mesh: DNS query, virtual IP allocation, DNAT,
masquerade, TUN, FSP, mesh. Everything below the TUN the rest of this course has already covered.
1. An unmodified ssh client opens a connection to an fd00::/8 address. Why does the adapter need DNS to have happened first?
2. Why is the identity cache LRU only, with no TTL?
3. Given a transport MTU of 1472, what effective IPv6 MTU does the adapter expose?
4. Why does the adapter rewrite the MSS option on SYN and SYN-ACK?
5. fips-gateway allocates a virtual IP from fd01::/112 for each mesh destination. What does the masquerade rule in postrouting do?
6. What privacy property does the LAN gateway give you, and what is the cost?