Since 2009 I have run dual-stack IPv6 and IPv4 on my home net, and the same at work since 2014. My ISP, Verizon (now Frontier), does not support IPv6 natively (as of 2019), so I use the Hurricane Electric tunnel broker service, which has worked well and reliably for a decade.
However, an extremely annoying trend is building up in the commercial world. Certain web services are geographically restricted, such as access to streaming video of sports events, and the providers detect if the client's IP address is within the region they are licensed to serve. Tech-savvy clients whose access is on unfavorable terms, e.g. not free, have learned to subscribe to a VPN or tunneling service whose egress server is more favorably located. Hurricane Electric has the advantage that it is free and has a variety of points of physical presence.
The providers are aware of the VPN and tunnel services and have taken to blocking traffic from them. Apparently this kind of blocking has become a common feature of content distribution networks even when the content is not actually geographically licensed, e.g. advertisements. I initially could not pay my bill on T-Mobile unless I suppressed IPv6 in my browser. The Raspberry Pi support site is also IPv4-only (for me).
Now it has gotten so bad that I cannot use IPv6 to read CNN because of long delays loading the advertisements. The VPS Showdown article referred to below is available (to me) only on IPv4.
So what am I going to do about this?
The goal is to be able to do normal Internet activities, particularly browsing the web, without degradation of the user experience by sites that advertise an AAAA record but block my connections to it, or that have otherwise botched their IPv6 configuration.
See RFC 8305, Happy Eyeballs Version 2: Better Connectivity Using Concurrency. The recommendation is that when the peer has both A and AAAA records, the client should initiate a connection to the preferred one, normally IPv6, but if it has not responded within a short configurable time, typically 100ms to 300ms, the client should try the other address. Whichever ultimately responds first is used and the other is abandoned. The successful choice should be cached.
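The same idea can be approximated from the command line. Here is a minimal sketch using curl (the URL is a placeholder); note that this is sequential fallback, whereas RFC 8305 races the two connection attempts concurrently.

```
# Try IPv6 with a short connection timeout; if it doesn't connect, retry over IPv4.
curl -6 --connect-timeout 0.3 -sS -o /dev/null https://example.com/ \
  || curl -4 -sS -o /dev/null https://example.com/
```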
Happy Eyeballs is on by default in Mozilla Firefox 64.0 and much earlier versions, and also in Google's Chromium, and likely in all well-maintained browsers. In Firefox it is governed by the settings key network.http.fast-fallback-to-IPv4, which is true by default. So why are my eyeballs unhappy?
At least for T-Mobile, which uses Incapsula as their content distribution and DDoS resistance service, the client makes a complete TLS connection to their reverse proxy site (eyeballs are happy so far), which then labels the connection as fraudulent, and closes it without a reset, so the client hangs until timing out. Hiss, boo, very unhappy eyeballs.
Here is a list of mitigation plans:
My cloud server will need these networking capabilities. Except as noted, everything is dual stack, IPv6 and IPv4 operating in parallel. The proposed design is described in the present tense even though not yet accomplished (as this paragraph is written).
These issues come up concerning network names.
The internal domain is informal and is never seen on the global internet. DNS is provided by servers on the internal LAN, which are used by internal hosts travelling on the wild side, over their own tunnels.
What hosting provider has the services I want at a reasonable price?
See the VPS Showdown series by Josh Sherman. He sets up virtual machines on selected low-cost providers and compares the results. This is redone periodically. For the January 2019 showdown he compares these services:
Their cheap offerings at $5/month get you:
1 GB RAM, 1 core (specs below), 25 GB disc (SSD), and 1 TB/month data transfer, except Lightsail offers a 40 GB disc and 2 TB/month transfer.
CPU basic capabilities: 2.25 to 2.50 GHz; 16 to 30 MB cache; 3059 to 3333 BogoMIPS. He says that when evaluating VMs you should create several instances and get a feel for how much they vary.
Time per event: jimc is not sure what an event is, but I assume it has to do with the internal resources used to serve a simple web page. Average times range from 1.05ms to 1.47ms, but the maximum ranges up to 14.78ms, attributed to possible neighbors hogging the CPU. For him, DigitalOcean has the fastest average.
Memory reading and writing: the benchmark did 8.1e5 to 3.8e6 operations per second.
File reading: 1094 to 2159 operations per second; writing: 729 to 1439 operations per second.
MySQL 670 to 3055 transactions per second.
Network speed tests: client was 2.09e6 to 2.45e6 meters from the server as the vulture flies. Latency was around 45ms; download 1.69e8 bit/sec to 1.28e9 bit/sec (big range there); upload 2.96e8 to 5.66e8 bit/sec.
Josh's conclusion: DigitalOcean had the lead in most metrics, and Lightsail gives you more storage and net data allowance. But ease of use and user experience are not reflected in the quantitative metrics. Please use the signup links in the article so he can get a credit.
Customer or professional reviews of various hosting companies:
DigitalOcean on Digital.com: Inception 2011. They are currently the 3rd largest web host by count of servers (VM's?). Payment is hourly with a monthly maximum. Inbound data is free, outbound counts toward your limit. Their products include shared hosting (a virtual host on their webserver) and virtual private server (VPS). They have 8 datacenters, nearest is Silicon Valley.
Amazon Lightsail on TechRadar, review by Mike Williams (2018-08-28). Lightsail is based on AWS (Amazon Web Services). It really is a VPS, accessible by SSH. Up to 5 static IPs are free. Tech support costs a lot extra; you need to be able to handle the sysadmin issues. Datacenters are everywhere and uncountably many.
Linode on WhoIsHostingThis: Review by Claire Broadley, updated 2018-12-28. Inception 2003. They only (?) do VPS. 8 datacenters including California (Fremont, from Hurricane Electric). Root access to the VM. They will do backups at extra cost. Glowing customer reviews (selected?). They do have a reputation for good customer support, although you are expected to handle sysadmin stuff; they won't hold your hand unless you pay extra.
Vultr on HostAdvice.com. These are customer reviews. Their reviewers are about equally balanced between wonderful and awful. For the latter, there are the usual trolls who have no idea how to interact with a business to make it work for them, but there are a lot more than usual complaints about Vultr's customer service. Vultr has 15 datacenters including Los Angeles. One fixed IP included. Vultr has been operating since 2014.
Jimc's conclusion: The four vendors that I focused on have fairly similar product offerings at a relatively good price. As Josh Sherman pointed out, non-quantitative factors are going to drive the decision. Vultr customer reviews give me a not so comfortable feeling. DigitalOcean is a new entry in my experience. Amazon Lightsail: I have a lot of confidence in Amazon, and they are currently among the White Hats politically, but cozying up to the big dragon makes me rather nervous. I've known about Linode for a long time and I get a positive feeling about them from buzz on the web. I came into this project inclined to put my cloud server on Linode, and I'm sticking with this choice.
Here are some more details about the Linode VPS service, as of 2019-02-02.
$5/mo or $0.0075/hr gets you 1 GB RAM, 1 core, 25 GB disc, 1 TB data transfer (presumed to be per month), 40 Gbit/s network data rate incoming, 1 Gbit/s outgoing. You can multiply these in powers of 2, sort of, including the cost; see the pricing page for actual ratios.
You are billed at the end of each month. You pay by the hour up to the listed price per month. You pay for the existence of the machine; you still pay if it's powered off; when you're done with your Linode, delete it. You can increase or decrease resources at any time with no hassle, or add additional machines. Inbound traffic is free. If you go over your outbound quota, you pay $0.02 per GB. ($5 for 1 TB = $0.005/GB.) Payment by credit card, PayPal, or check (contact customer service first). You can prepay. Cancel in the first 7 days and get a full refund. If you read their Getting Started document they will give you a promo code for a small credit on a new account.
An additional fixed IP costs $1/mo but you need to give tech support a technical justification for it, due to exhaustion of IPv4 address space. On the other hand, you can ask for an IPv6 subnet for free.
Create your account at Linode.com.
Creating the profile and paying: either you have to log in first, or a cookie times out while you read the account agreement, privacy policy, and EU Model Contract (if you're in the EU). Or both. I had to do this step over.
These items are needed for your profile:
If you read their getting started article, they give you a promo code for a small credit toward your bill.
Key points from the Customer Agreement (IANAL):
Creating your Linode(s):
Add a Linode (right side, below the list of existing Linodes, if any).
The next page is the Dashboard summary page. To get to it, in the list of your Linodes, click on the name field or Dashboard of the line item for your Linode.
Brand New (refresh the page to see it)
Rather than figuring out everything by myself, I'm going to follow along in the Getting Started document (Documentation tab). There's also a Beginner's Guide.
Initial setup:
Is the Linode going to solve my problem? I will test https://www.t-mobile.com/ and https://www.raspberrypi.org/ . I'm using w3m (text only) as the browser.
Positive control on my laptop on the internal net. With w3m, T-Mobile delivers the front page on both IPv4 and IPv6. With Firefox, T-Mobile delivers on IPv4 but hangs on IPv6 -- it must be that some page elements are blocked, like a style sheet or an image that w3m would not download, but the main page isn't blocked. RPi delivers on IPv4 but hangs on IPv6.
Success! Executing on the Linode with w3m, and with both IPv4 and IPv6, https://www.raspberrypi.org/ delivers its front page. No delays, no hassle.
Too bad I don't have the tunnel yet; running Firefox on a remote host is always a pain in the butt, but I'm going to install it on the Linode and try T-Mobile.
Conclusion: I'm on the right track, and I will continue to come up with a working tunnel.
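For the record, a non-interactive way to repeat the per-family test uses curl as a stand-in for w3m; nothing here is site-specific beyond the URL already being tested.

```
# Fetch the front page over each address family and report status and elapsed time.
curl -4 -sS -o /dev/null -w 'IPv4: %{http_code} in %{time_total}s\n' https://www.raspberrypi.org/
curl -6 -sS -o /dev/null -w 'IPv6: %{http_code} in %{time_total}s\n' https://www.raspberrypi.org/
```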
Here's an overview of how the network is going to go.
My firewall relies on eth0 being on the trusted net, and rather than making this conditional per host, I'm going to use a udev rule to rename eth0 to eth1, which the firewall knows is on the wild side. [Update: I did a lot of work improving the firewall, including recognizing Surya's eth0 as not trusted.]
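A minimal sketch of such a udev rule, assuming the interface is matched by MAC address (the address shown is made up):

```
# /etc/udev/rules.d/70-persistent-net.rules on Surya (hypothetical MAC address).
# Rename the provider's NIC from eth0 to eth1 so the firewall treats it as wild side.
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="f2:3c:91:aa:bb:cc", NAME="eth1"
```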
For the tunnel I think that IPSec uses CPU time more efficiently, and it is politically correct and technologically cool. However, it has two disadvantages. First, it isn't really a tunnel; the payloads are considered to have arrived from the interface that the ciphertext arrived on. This can be worked around, but it's going to be quicker and more reliable (designwise) to use OpenVPN, whose payloads emerge from a tun/tap device (tun0). Second, OpenVPN is either on or off, and if off, no data can be transferred. With IPSec, if the Security Association goes away the payloads can still be transferred without protection. I'm sure that the firewall can recognize tunnelled packets reliably and can toss any that lack a security policy. But when setting this up initially, I'm going to keep it simple and use OpenVPN, and I'll investigate IPSec later.
The gateway (Jacinth) will initiate a connection to Surya, and will reconnect automatically after disruptions such as a changed dynamic address.
The tunnel endpoint on Surya will have local LAN IP addresses (IPv4 and IPv6) specific for Surya.
Jacinth now has OpenVPN responders on ports 1194/udp and 443/tcp. These will migrate to Surya. OpenVPN hands off HTTP on port 443 to Claude, the webserver (now and in future). With luck, the responder on 1194/udp can handle the tunnel from Jacinth, rather than needing a separate instance. [Update: it did need the separate instance, on a nonstandard port, 4294.]
www.jfcarter.net:80 (IPv4) will get DNAT to Claude. www.jfcarter.net's AAAA record will point directly to Claude.
Packets coming into Surya that are addressed (possibly after DNAT) to internal net addresses will be routed down the tunnel that Jacinth initiated.
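As a rough sketch of the port-80 handling just described, the DNAT rule on Surya might look like this; Claude's internal address here is a made-up placeholder, not the real one.

```
# On Surya: send wild-side HTTP (IPv4) to Claude, whence it is routed down the segment tunnel.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j DNAT --to-destination 192.9.200.200:80
```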
The OOBA service opens a firewall hole for a specific IP address. (Authentication required.) This will migrate from Jacinth to Surya. I may or may not continue to run OOBA on Jacinth: it would be useful only if Surya became inoperative and if I were diagnosing or repairing it from the wild side and if I knew the current wild side address of Jacinth, each of which events are rare.
Secure the machine. From Securing Your Server:
Use ss to list all sockets. It is like netstat but better. Command line: ss --listening --processes --tcp --udp. [sshd is the only service and I require it.]
Securing the manager account, from Linode Manager Security Controls:
Scratch code (one-time password) in case of broken two-factor authentication.
Make Surya a part of my net: add its various addresses to /etc/hosts and directory server files (DNS etc). [Done]
It looks like they're using RFC 4862 addresses for everything in their datacenter, for example lish-fremont.linode.com has IPv6 address 2600:3c01::f03c:91ff:fe93:e32e. So I can't just pick arbitrary IPv6 addresses. I can ask tech support for allocations, size /116, /64 or /56. [Asked; they gave me a /64.]
Add Surya's private and public IPv6 addresses.
yast2 lan. Infrastructure for the yast2 GUI is not installed, so it uses ncurses. Add the addresses on the added subnet as well as the RFC 4862 address (with EUI-64) on the main subnet.
Add Surya to dyn.com DNS. DNS will get (temporarily) the same assignments as are in /etc/hosts, except replacing the realm with jfcarter.net. This will have to be done over when the tunnel is working and I want the addresses to point to e.g. www.jfcarter.net. [Done]
Activate Linode's reverse DNS (afterward). On the Dashboard's Remote Access tab, find the Reverse DNS link. Fill in the referent of the PTR record, e.g. surya.jfcarter.net, and hit Look Up. In the resulting form, hit Yes to make the address that it found point to that host. This will have to be done over when the tunnel is working and I want the addresses to point to e.g. www.jfcarter.net. [Done]
Learn to use Lish. On the remote access page, you can launch these items:
Lish can do more than console access. You can give it a subset or superset of virsh commands such as shutdown or reboot; boot is the equivalent of start.
Running service daemons. Which of these should we not have?
About haveged: This is required on some old AMD CPUs, but otherwise rng-tools has access to the hardware random number generator and so is better. I tested rng-tools (/usr/sbin/rngd), successfully. During installation, haveged will be replaced by rng-tools. A peculiarity: on the main net machines I hardly ever see FIPS test failures, but on the Linode on 3 test runs (250 trials each) I got 4 failed trials, various patterns failed. On truly random data a few failures are inevitable, and this is well under the action level which I have set at 6 out of 250, but even so it's strange and I should keep an eye on it.
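The FIPS trials mentioned here can be reproduced with rngtest from the rng-tools package. A sketch of such a run, assuming the VM exposes its hardware (virtio) RNG at /dev/hwrng:

```
# Run 250 FIPS 140-2 test blocks against the hardware RNG. A few failures are expected
# even on truly random data; my action level is 6 failures out of 250.
rngtest -c 250 < /dev/hwrng
```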
Either create a CouchNet configuration compatible with SuSE 15.0 and put it on Surya, or up-down-sideways-grade the existing Leap 15.0 into Tumbleweed and put my standard configuration on it.
I'm thinking of reverting from Tumbleweed back to a non-rolling distro, currently Leap 15.0, but I've decided that it's a jackass choice to entangle that project with the tunnel issue, so I'm going to sideways-grade Leap 15.0 to Tumbleweed. The key item is going to be installing the distro definitions: I should download direct from SuSE, not use CouchNet's enterprise mirror (a Squid proxy). Definitely not until the tunnel is operational, and maybe not even after that. This means jiggering audit-repos to know about Surya and omit the proxy parameter for it. [Done]
I side-graded Linode's SuSE Leap 15.0 image into Tumbleweed, but I got tangled up with the disc format and the bootloader, so this failed. So I repartitioned the disc and did a bare metal install on the virtual disc, successfully.
Just do the upgrade, famous last words.
Rebuilding the machine, this time trying to avoid problems with the bootloader.
Tumbleweed from CouchNet
service ssh start (not sshd). Also you can install (Debian) packages on Finnix, ephemerally.
Installing Tumbleweed on Surya (second try):
service ssh start (not sshd). Actually, working through Lish is working out, and sshd is probably not needed for this.
Use the reboot command.
set reveals prefix=(hd0)/boot/grub (nonexistent) and root=hd0.
ls (and ls of various partitions and directories) reveals that the ISO image is on hd0 and the target disc (/dev/sda) is on hd1. (hd0,msdos1) is an EFI partition (ignore) and (hd0,msdos2) has the payload. (hd0,msdos2)/boot/x86_64/loader/linux is the kernel and similarly for initrd. (Plus many translations and icons in that dir.) The command line must be somewhere, but I can't find it.
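Given those findings, booting the installer by hand from the GRUB prompt would go roughly like this; the kernel command line is omitted because, as noted, I couldn't find the intended one.

```
set root=(hd0,msdos2)
linux /boot/x86_64/loader/linux
initrd /boot/x86_64/loader/initrd
boot
```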
Install local software needed to activate the firewall.
Firewall fixes to make it work on Surya. I've had this firewall since 2003 (with lots of modifications since then), and there were earlier incarnations. Firewall details are described in a separate section.
Port scan again, are we locked up tight? Yes. In fact, I revamped the firewall tester and now it's much more complete and aggressive. See separate section.
Backdoor SSH server on nonstandard port. It turns out that we don't really need this, nor am I able to make effective use of it when the networking is hosed, so I'm taking it out.
Investigate package mtr, a hopped up traceroute.
Create the tunnel from Jacinth to Surya. See separate section.
Etc.
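A generic external check of the port-scan kind mentioned above can be done with nmap (this is not the custom firewall tester described in the separate section, just an illustration; nmap defaults to IPv4 and takes -6 for IPv6):

```
# Scan all TCP ports from a wild-side host; expect nothing open except what the firewall allows.
nmap -p- surya.jfcarter.net
nmap -6 -p- surya.jfcarter.net
```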
I have three VPN variants:
IPSec: This is supported in the kernel and is probably the better solution on technical grounds. But it has a feature that is not so helpful in my situation: it is not a tunnel. In IPv6, for which it was designed, the packet has an extra header specifying the Security Association whose session key is used for encryption, but the packet is otherwise a normal IPv6 packet. This means that without native IPv6 transport (my case), you need to rely on some kind of tunnel, like OpenVPN. If the tunnel lacks encryption then IPSec is useful, but IPSec is redundant on an encrypted tunnel.
IPv4 packets are sent out using the ESP protocol, but once again they are not really tunneled. Getting IPSec to work on IPv4 is not going to be fun, given the complex tunnel structure that I have, and it will not be worth the effort given that I absolutely require the separate tunnel to handle IPv6. Therefore I am giving up on IPSec.
OpenVPN (UDP on port 1194): This is the preferred way to run OpenVPN, and I will use it whenever feasible.
OpenVPN (TCP on port 443): Running TCP payloads in a TCP bearer channel is not a good idea, because if the packet loss rate gets over a limit (I think it's 50%), both layers are going to be doing TCP retransmissions, and the rate of packet transmission diverges to infinity: the dreaded TCP Meltdown syndrome. Nonetheless, some public Wi-Fi services restrict service to a very small set of ports like 80 and 443. The Secret Police of repressive regimes may be able to recognize traffic on 443 that is not typical of (encrypted) web traffic, but a weaselly hotel net with a cheap router certainly can't, so OpenVPN on 443/TCP lets me get onto my network when travelling.
During this project it often happened that I made some improvement to the networking and was cut off from Surya, Jacinth, or both. When all else fails I can connect to Surya via Lish (see above for the procedure), and to Jacinth's physical console (if I'm at home). However, for normal SSH access with X-Windows, I get good results using this procedure:
First connect to https://surya.jfcarter.net:1443/~ooba/ooba.cgi and authorize your client's IPv4 address. Then ssh -4 surya.jfcarter.net in the normal way. You force IPv4 because there are more moving parts in your IPv6 connection, i.e. tunneling between the client and your tunnel broker and/or Surya or Jacinth itself, and network screwups and repairs to it are more likely to mess up IPv6 than IPv4.
Use the same procedure except connect to jfcarter.net, which is maintained with an 'A' record to Jacinth's current DHCP address from the carrier.
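In command form, the Surya variant of this recovery amounts to something like the following; the exact browser invocation is illustrative, and the OOBA authorization itself happens interactively in the page.

```
# Open the OOBA page and authorize this client's current IPv4 address (interactive step),
# then SSH in forcing IPv4; -X forwards X-Windows.
firefox 'https://surya.jfcarter.net:1443/~ooba/ooba.cgi'
ssh -4 -X surya.jfcarter.net
```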
It turned out to be important, for sane network design, to be consistent about matching up the names and IP addresses of Surya, Jacinth, and Jacinth's various alter egos. In this table 47.156.151.100 is given as Jacinth's wild IPv4 address, but this is from DHCP and changes at random intervals. It is updated on the DNS provider by ddclient.
Domain Name | IPv4 Address | IPv6 Address | Description |
---|---|---|---|
jacinth.jfcarter.net | 47.156.151.100 | 2001:470:1f04:844::2 | IP addresses on wild side |
surya.jfcarter.net | 50.116.6.129 | 2600:3c01::f03c:91ff:fea9:875e | IP addresses on wild side |
jfcarter.net | 47.156.151.100 | 2600:3c01::f03c:91ff:fea9:875e | A hybrid: Jacinth wild for IPv4 but Surya wild for IPv6. |
www.jfcarter.net | (CNAME) | (CNAME) | CNAME to jfcarter.net |
www.cft.ca.us | 50.116.6.129 | 2600:3c01:e000:306::c1 | A hybrid: Surya wild for IPv4 but Jacinth internal for IPv6. |
jacinth.cft.ca.us | 192.9.200.193 | 2600:3c01:e000:306::c1 | IP addresses on internal LAN |
surya.cft.ca.us | 192.9.200.185 | 2600:3c01:e000:306::8:1 | IP addresses on internal LAN |
jacinthhe.cft.ca.us | (none) | 2001:470:1f04:844::1 | Hurricane Electric, their end |
jacinthhe3.cft.ca.us | (none) | 2001:470:1f05:844::3 | Deprecated internal LAN IP, to be removed |
The two hybrids (www.jfcarter.net and www.cft.ca.us) are intended to refer to the master site and its numerous services. www.jfcarter.net IPv6 points to Surya because the IPv6 default route comes out there, and so replies to queries need to have that address; however, there is no benefit from adding complexity by running IPv4 in and out of Surya. www.cft.ca.us is used by internal hosts, and since IPv6 is preferred (on my net), internal hosts will take the most reliable and most efficient route to Jacinth. But if roaming, the host (Xena) can fall back to IPv4 on Surya, which has the advantage of being fixed.
CouchNet is not a big enterprise net, but it has some details that add to the complication of setting up these tunnels. Most of the machines are on one internal LAN which uses IPv4 and IPv6 (dual stack). Each permanent machine has assigned fixed IP addresses (IPv4+6); a local convention is that the IPv6 address is $prefix::$octet where the last part is the last octet of its IPv4 address, converted to hex. For example, 192.168.10.20 would become $prefix::14 (20 decimal = 14 hex).
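A trivial shell illustration of this convention, using Jacinth's internal addresses from the table above (192.9.200.193, last octet 193 = 0xc1):

```
# Build the IPv6 host part from the last IPv4 octet.
prefix=2600:3c01:e000:306
printf '%s::%x\n' "$prefix" 193    # prints 2600:3c01:e000:306::c1
```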
The LAN address range for IPv4 could be from RFC 1918, but I'm actually using an equivalent non-routeable /24 range that has been forgotten by everyone but me. The main router (Jacinth or Surya) uses NAT to condense all the internal hosts to one wild-side IPv4 address: DHCP on Jacinth and fixed on Linode.
The IPv6 prefix(es) are assigned by my tunnel broker(s); they are /64 prefixes. As detailed at the beginning of this document, I have been using Hurricane Electric's service for over a decade, but I'm now switching to Linode.com. Mostly, NAT is not used; authorized wild-side hosts communicate directly with the internal hosts. This is impossible on IPv4.
Several of the machines are host to virtual machines.
The virtual machines all use bridge networking: the host's own IPs are on a bridge device, and the virtual machine's vnet0 is a member of the bridge. On Jacinth and Diamond the wired connection to the internal LAN (eth0) is also a bridge member. So both the host and the guest are directly on the internal LAN. Xena is similar except it has no eth0; it's a laptop.
Jacinth's wild side for IPv4 is via eth1, which does fairly conventional routing and NAT: traffic to the wild side, if sent via eth1 rather than Linode, gets the NAT treatment on Jacinth. In the old configuration Jacinth's IPv6 gets out via a SIT tunnel to Hurricane Electric; the bearer packets (IPv4) go in and out on eth1. In the new configuration IPv6 will be forwarded through an OpenVPN tunnel to Surya, which has native IPv6 service from Linode. For fallback, both of these need to be operating at the same time.
Xena is a laptop and its networking is on Wi-Fi. It particularly needs to function when roaming, i.e. when it's connected to an arbitrary foreign Wi-Fi net. I would very much like to put wlan0 into the bridge. But there are issues with Wi-Fi which preclude bridge membership, and I have tried and failed to implement various mitigations which I'm not going to discuss here. Instead, wlan0 is treated as a wild-side interface, which is no lie when Xena is roaming, and Xena routes between it and the bridge, very similar to what's done on Jacinth.
Surya routes IPv4+6 traffic to and from the internal LAN via an OpenVPN tunnel, called the segment tunnel. It has a dedicated interface, tun9, on both Surya and Jacinth. Our fixed IPv4 and IPv6 addresses are on eth0, and Linode routes the assigned subnet via the fixed IPv6 address. IPv4 traffic to the wild side gets NAT on Surya (not Jacinth). IPv6 traffic destined for the wild side is routed (without NAT) via Surya's eth0.
For NTP (time service), Jacinth is the master; it gets time from a carefully selected set of NTP.org pool servers (rather than the generic [0123].us.pool.ntp.org). The other machines, specifically Xena, use Jacinth and two generic pool servers. I'm running chrony and it doesn't take strata too seriously: it will prefer Jacinth over the pool servers because the quality of its time is substantially better, being on the same LAN (except Xena and Surya).
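On a client machine the corresponding chrony.conf fragment would look something like this; the text only says Jacinth plus two generic pool servers, so the exact pool hostnames and options are assumptions.

```
# Prefer Jacinth (same LAN, better-quality time) but keep two generic pool servers as backup.
server jacinth.cft.ca.us iburst prefer
pool 0.us.pool.ntp.org iburst
pool 1.us.pool.ntp.org iburst
```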
Three hosts provide directory services: Jacinth (master), Diamond (fallback), and Xena (so it can function autonomously when roaming). The directory services are DNS, LDAP and Kerberos authentication. Presently, Surya does DNAT from the wild side (jfcarter.net) for these and several other services, redirecting them to Jacinth. The other DNATted services are IMAP (user mailboxes), XMPP (chat by text, voice, video), Icecast (streaming audio), RoundCube (webmail), OwnCloud (data storage and PIM), and NTP (time service).
Why am I going to the trouble and complication to do DNAT on Surya for these services? The original idea was, a roaming host (Xena, cellphones, and possibly others in the future) would specify jfcarter.net as the server for these services, which formerly was Jacinth but now is Surya. The server daemons run on Jacinth, which formerly was exposed directly on the wild side, but now Surya acts as a bastion host and DNATs services through the DMZ to Jacinth. Roaming clients would use OOBA to get authorized to talk to these services on jfcarter.net; actually the local LAN clients can talk to jfcarter.net also, which formerly would land the connection on Jacinth, as desired.
First, the local LAN (non-roaming) clients need to be configured to talk to jacinth on the internal LAN. If their traffic flies 560km north to Surya, gets DNATted, and flies 560km back to Jacinth, this is awfully stupid and awfully vulnerable to failures, even though it does work. www.cft.ca.us has been provided for this purpose.
Second, I'm wondering if it's worth the effort to make the services available on jfcarter.net. The alternative is to require the client to open a VPN tunnel, and then to access them directly on Jacinth. Comparing and contrasting:
I have tried it both ways. My final design is to provide both VPNs and DNAT plus OOBA on Surya, both of which work. And internal clients go direct to Jacinth on the internal LAN.
Web service is a little more complicated, since CouchNet has two webservers. Actually each host has a webserver, but Jacinth and Claude are the ones most frequently accessed. Claude (virtual machine on Jacinth) serves material intended for the general public, like this writeup, while various virtual web sites on Jacinth serve home automation, webmail, administrative reports, and so on. Port 80 on Jacinth and Surya, if the source address is on the wild side, is DNATted to Claude. Port 443 is dedicated to OpenVPN, but OpenVPN is now smart enough to DNAT HTTP to the webserver and port of your choice: Claude in my case. Port 80, if the source address is not wild, is served by the host's own webserver (Jacinth or Surya). Hosts that want HTTPS from Jacinth itself (not Claude) need to connect to a nonstandard port. Surya has no useful services over HTTPS, but if it did, it would use a nonstandard port.
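The OpenVPN feature alluded to here is port-share, which hands TCP connections that are not OpenVPN protocol off to another host and port. A minimal sketch of the relevant lines in the 443/tcp server config; the target address and port for Claude are my guess at the intent, not confirmed values.

```
# OpenVPN instance on 443/tcp; connections that are not OpenVPN (i.e. browsers) are
# handed off to the web server on Claude (placeholder address and port).
port 443
proto tcp-server
port-share 192.9.200.200 443
```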
CouchNet has several subnets packed within its /24 or /64 prefix, precluding RFC 4862 autoconfiguration for IPv6 (i.e. appending the EUI-64: the MAC address with its 0x02 universal/local bit inverted and ff:fe inserted in the middle). The subnets are:
IPv4 | IPv6 | Description |
---|---|---|
…128/29 | ::1:0/112 | OpenVPN 1194/udp on Jacinth |
…136/29 | ::2:0/112 | OpenVPN 1194/udp on Surya |
…144/29 | ::3:0/112 | OpenVPN 443/tcp on Jacinth |
…152/29 | ::4:0/112 | OpenVPN 443/tcp on Surya |
…160/29 | ::6:0/112 | StrongSwan (IPSec) on Jacinth |
…176/29 | ::7:0/112 | StrongSwan (IPSec) on Surya |
…168/29 | ::5:0/112 | Xena and its VM |
…184/29 | ::8:0/112 | Segment tunnel: Surya and the other endpoint on Jacinth |
…192/26 | ::/112 | Main CouchNet |
…240/28 | ::f000/124 | DHCP addresses on main CouchNet |
…0/25 | (various) | Future expansion |
As for routes, for both families except as noted:
At present, Jacinth statically routes Xena's subnet to Xena's IP address on Wi-Fi (xenawild). Xena also sends IPv6 Router Advertisements which should create a similar route, when Xena is running and is at home, but this has been unreliable and needs to be debugged. Also when Xena is roaming and is connected to one of the VPN services on Jacinth, I would like its Router Advertisement to be honored, to attract its traffic to the VPN, but this is not yet happening.
Jacinth routes to Surya through the segment tunnel all of these subnets:
During testing, Jacinth's default route is through the Hurricane Electric tunnel. This route should be maintained after I go into production, with metrics arranged so the route to Surya is used if available, but if the segment tunnel goes down, both the tunnel device and all routes through it will vanish, and the Hurricane Electric route will take over seamlessly.
Surya routes to Jacinth through the segment tunnel all of these subnets:
These routes are hardwired in Surya's OpenVPN conf file for the segment tunnel. Surya's OpenVPN has an iroute to send this traffic to the remote end (Jacinth). Without the iroute, the packets are stuffed down the endpoint (tun9) and it immediately delivers them back to Surya; they echo until their TTL or hoplimit expires.
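A minimal sketch of what this looks like in OpenVPN terms, assuming only the main CouchNet prefixes from the subnet table (file layout and the exact prefix list are illustrative):

```
# In Surya's server config for the segment tunnel: install kernel routes toward tun9.
route 192.9.200.192 255.255.255.192
route-ipv6 2600:3c01:e000:306::/112

# In the client-config-dir entry for Jacinth (e.g. ccd/jacinth): tell OpenVPN itself that
# these prefixes live behind that client, so packets are forwarded rather than echoed back.
iroute 192.9.200.192 255.255.255.192
iroute-ipv6 2600:3c01:e000:306::/112
```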
When Xena is connected to one of the VPN services on Surya, I would like its Router Advertisement to be honored, to attract its traffic to the VPN, but this is not yet happening. Getting this to propagate to Jacinth is going to be hard.
Jacinth statically routes through the tunnel traffic to the endpoint on Surya, and Surya does the same to Jacinth. These routes are set up automatically by the segment tunnel's OpenVPN conf file on Surya. Either they do not need an iroute, or OpenVPN creates it automatically.