@astrid you could totally do edge nodes as a separate node group, and control plane in a different subnet should be fine so long as inbound access to the API server is allowed. Depending on the CNI you may need to allow kube-proxy or whatever through as well; some CNIs need to reach the kube API server via the internal Service when bootstrapping.
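Roughly what I mean, as a kubeadm-flavoured sketch (the endpoint, token, and label are all made up): the only hard requirement is the edge subnet reaching the API server on 6443, the rest is just tagging the group.

```yaml
# Hypothetical kubeadm JoinConfiguration for an "edge" node group.
# Assumes a control plane at 10.0.1.10 in another subnet; the edge subnet
# only needs to reach it on 6443. Some CNI agents also bootstrap against
# the in-cluster kubernetes Service, so they need that path too.
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: "10.0.1.10:6443"  # the inbound access that has to be allowed
    token: "abcdef.0123456789abcdef"     # placeholder
    caCertHashes:
      - "sha256:<hash>"                  # pin the real CA hash here
nodeRegistration:
  kubeletExtraArgs:
    node-labels: "tier=edge"             # custom label so you can schedule onto the group
  taints:
    - key: "edge"
      effect: "NoSchedule"               # keep general workloads off the edge nodes
```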
@astrid fair warning that stretch clusters are strongly advised against. Etcd is strongly consistent, and I hear even a whiff of extra latency causes big issues.
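The mechanics, for the curious: every write has to commit on a quorum, so your write latency floor is the RTT to the next-nearest member, and the defaults assume LAN numbers. A sketch of the relevant knobs from etcd's config file (defaults shown; etcd's tuning guide says heartbeat roughly equals your worst member-to-member RTT and election timeout should be at least 10x that):

```yaml
# Latency-sensitive settings in etcd's config file (etcd.conf.yml),
# shown at their defaults. Stretch the cluster and these assumptions break.
heartbeat-interval: 100   # ms; leader-to-follower keepalive
election-timeout: 1000    # ms; how long a follower waits before calling a vote
```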
@astrid @arichtman
two nodes, chilling in a cluster, 5ms away because they're not gay
@astrid very true. My virtualized router is easily one of the most critical things. Second only to the hypervisor it's on, I guess
1. inferno/Lucifer, the one hooked to the modem, it's a firewall
2. asmodeus, who will handle dn42 peering
3. charon, who will handle all packets entering my production services
4. nyaanet, the wifi router
Lucifer runs OPNsense, and after almost a year of using it I don't like it very much; I might just move that role onto nyaanet.
@astrid whew, that's complicated! What ticked you off about OPNsense? It seems more libre than pfSense, and its GUI isn't trash like OpenWrt's?
Also - if you really wanna have fun, you can use BGP peering for the worker nodes to advertise their IPv6 routes to your main router and skip virtual load balancing entirely.
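One way to wire that up is MetalLB's BGP mode; a sketch with invented ASNs, names, and prefixes (the router side of the peering is on you):

```yaml
# Hypothetical MetalLB BGP setup: each worker peers with the router and
# announces service IPs itself, so there's no VRRP/virtual-IP layer.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: core-router
  namespace: metallb-system
spec:
  myASN: 64513              # the nodes' private ASN
  peerASN: 64512            # the router's ASN
  peerAddress: 2001:db8::1  # router's address, made up for the sketch
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: v6-services
  namespace: metallb-system
spec:
  addresses:
    - 2001:db8:100::/64     # service range the nodes will announce
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: v6-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - v6-services
```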