The homelab participates in a hub-and-spoke WireGuard mesh connecting a tech collective of peers. The cluster acts as the hub; peers connect as spokes.
| Setting | Value |
|---|---|
| Hub endpoint | `origin.k3s.strommen.systems:51821` (DDNS → WAN → UDM Pro port forward) |
| Port | UDP 51821 (forwarded via UDM Pro → 192.168.20.20) |
| VPN subnet | `10.100.0.0/16` (hub = 10.100.0.1) |
| Hub node | k3s-server-1 (192.168.20.20) — `wg1` interface via Ansible/systemd |
| Peer namespaces | `bryce`, `jake`, `steve`, `ham` |
| Exporter namespace | `wireguard-exporter` |
Note: The Hamilton (`ham`) namespace is a collective mesh peer — its RBAC is in `kubernetes/apps/ham/hamilton-rbac.yaml` (not in `kubernetes/apps/mesh-peers/`).
| Peer | VPN Address | Namespace | RBAC location |
|---|---|---|---|
| Hamilton | 10.100.2.1/32 | ham | `kubernetes/apps/ham/hamilton-rbac.yaml` |
| Bryce | 10.100.2.2/32 | bryce | `kubernetes/apps/mesh-peers/collective-mesh.yaml` |
| Jake | 10.100.2.3/32 | jake | `kubernetes/apps/mesh-peers/collective-mesh.yaml` |
| Steve | 10.100.2.4/32 | steve | `kubernetes/apps/mesh-peers/collective-mesh.yaml` |
Split-tunnel routes pushed to each client:

- `192.168.20.0/24` — cluster nodes (server VLAN)
- `10.42.0.0/16` — k3s pod CIDR
- `10.43.0.0/16` — k3s service CIDR (CoreDNS at 10.43.0.10)
- `10.100.0.0/16` — collective mesh supernet

The UDM Pro forwards UDP 51821 from WAN directly to k3s-server-1 at 192.168.20.20:51821:
WAN:51821/UDP → 192.168.20.20:51821 (wg1 on k3s-server-1)
Note: WireGuard uses UDP and bypasses Traefik (TCP-only for HTTP/HTTPS). The server runs as a host-level systemd service on k3s-server-1 via Ansible — it is not a Kubernetes pod or MetalLB service.
Port forward configured in: `scripts/unifi-configure.py`
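Concretely, the split-tunnel routes above become the `AllowedIPs` in each peer's client config. A minimal sketch, with placeholder keys (real configs are generated into `.wg-output/` by the Ansible playbook):

```ini
# Illustrative peer config — key values are placeholders
[Interface]
PrivateKey = <peer-private-key>
Address    = 10.100.2.2/32          ; e.g. Bryce's mesh address

[Peer]
PublicKey  = <hub-public-key>
Endpoint   = origin.k3s.strommen.systems:51821
; Split-tunnel routes: server VLAN, pod CIDR, service CIDR, mesh supernet
AllowedIPs = 192.168.20.0/24, 10.42.0.0/16, 10.43.0.0/16, 10.100.0.0/16
; Assumption: keepalive added for peers behind NAT
PersistentKeepalive = 25
```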
A Prometheus exporter (`wireguard-exporter` namespace, pinned to k3s-server-1) exposes WireGuard peer statistics; it runs with `hostNetwork: true` and the `NET_ADMIN` capability so it can read `wg show wg1`:
- Image: `mindflavor/prometheus-wireguard-exporter:3.6.6`
- Interface: `wg1`

Key metrics:

- `wireguard_peers_total` — number of configured peers
- `wireguard_sent_bytes_total` — bytes sent per peer
- `wireguard_received_bytes_total` — bytes received per peer
- `wireguard_latest_handshake_seconds` — last handshake time per peer (staleness indicator)

Manifest: `kubernetes/apps/monitoring/wireguard-exporter.yaml`
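The relevant pod settings can be sketched as follows. This is a condensed illustration, not the actual manifest, and the `-i` interface flag is an assumption about the exporter's CLI:

```yaml
# Condensed sketch — see kubernetes/apps/monitoring/wireguard-exporter.yaml
spec:
  nodeSelector:
    kubernetes.io/hostname: k3s-server-1    # pin to the hub node that owns wg1
  hostNetwork: true                         # share the node's netns so wg1 is visible
  containers:
    - name: wireguard-exporter
      image: mindflavor/prometheus-wireguard-exporter:3.6.6
      args: ["-i", "wg1"]                   # assumed flag for selecting the interface
      securityContext:
        capabilities:
          add: ["NET_ADMIN"]                # needed to read WireGuard state (wg show wg1)
```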
The exporter ServiceMonitor maps public keys to human-readable peer names via metricRelabelings so Grafana can display by name instead of public key.
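That mapping could take roughly this shape in the ServiceMonitor; the label names and key value here are placeholders, not the real entries:

```yaml
# Illustrative fragment — one rule per peer public key
metricRelabelings:
  - sourceLabels: [public_key]
    regex: "AbCdPlaceholderPublicKey...="   # placeholder public key
    targetLabel: friendly_name
    replacement: hamilton
```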
Peer configurations are managed in the collective repo (separate from home_k3s_cluster). The collective-deployer RBAC role allows the collective automation to apply peer configs.
collective repo → collective-deployer ServiceAccount → applies peer WireGuard configs to per-peer namespaces
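The ServiceAccount wiring might look like the sketch below; the exact permissions are defined in `kubernetes/apps/mesh-peers/collective-deployer-rbac.yaml`, so the `roleRef` here is an assumption:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: collective-deployer
  namespace: default              # lives in the default namespace
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: collective-deployer
  namespace: bryce                # one binding per peer namespace
subjects:
  - kind: ServiceAccount
    name: collective-deployer
    namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                      # assumption: an edit-level ClusterRole
```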
Each peer (Bryce, Jake, Steve, Hamilton) gets:
- The built-in `admin` ClusterRole (in-namespace)
- The `hamilton-cluster-reader` ClusterRole (no secrets outside own namespace)
- `pod-security.kubernetes.io/enforce: restricted` — pods must comply with PSA restricted
- A WireGuard config generated by `ansible-playbook playbooks/hamilton-wireguard.yml`

Note: RBAC for each peer lives in their respective namespace. The `collective-deployer` ServiceAccount lives in the `default` namespace. Hamilton's RBAC specifically is in `kubernetes/apps/ham/hamilton-rbac.yaml`.
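The per-peer pattern above can be sketched as a namespace plus a namespaced binding to the built-in `admin` ClusterRole. Subject kind and names are illustrative; the real definitions live in `kubernetes/apps/mesh-peers/collective-mesh.yaml`:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: bryce
  labels:
    pod-security.kubernetes.io/enforce: restricted   # PSA restricted, per the list above
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bryce-admin
  namespace: bryce               # the binding scopes the ClusterRole to this namespace
subjects:
  - kind: User                   # assumption: peers authenticate as users
    name: bryce
    apiGroup: rbac.authorization.k8s.io
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin                    # built-in role, namespace-scoped via this RoleBinding
```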
Each peer namespace has opt-in Prometheus scraping via a ServiceMonitor + PodMonitor pair. Peers deploy their apps in their namespace and opt in by adding labels — no central config changes needed.
Manifest: kubernetes/apps/mesh-peers/peer-observability.yaml
Each namespace gets two monitors:
| Resource | Kind | Scrapes |
|---|---|---|
| `bryce-autodiscover` | ServiceMonitor | Services with port named `metrics` |
| `bryce-pods` | PodMonitor | Pods with port named `metrics` |
| `jake-autodiscover` | ServiceMonitor | Services with port named `metrics` |
| `jake-pods` | PodMonitor | Pods with port named `metrics` |
| `steve-autodiscover` | ServiceMonitor | Services with port named `metrics` |
| `steve-pods` | PodMonitor | Pods with port named `metrics` |
| `ham-autodiscover` | ServiceMonitor | Services with port named `metrics` |
| `ham-pods` | PodMonitor | Pods with port named `metrics` |
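One such monitor, for the `bryce` namespace, might look like this sketch. The selector is an assumption; the real definitions are in `kubernetes/apps/mesh-peers/peer-observability.yaml`:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: bryce-autodiscover
  namespace: bryce
  labels:
    release: prometheus                        # required for operator discovery
    app.kubernetes.io/part-of: collective-mesh # grouping label
spec:
  selector:
    matchLabels:
      prometheus.io/scrape: "true"             # assumed opt-in selector
  endpoints:
    - port: metrics                            # scrape the port named "metrics"
      interval: 30s
```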
To expose metrics from a peer namespace, add this label to the Service or Pod:

```yaml
labels:
  prometheus.io/scrape: "true"
```

And expose a port named `metrics`:

```yaml
ports:
  - name: metrics
    containerPort: 9090
```
Scrape interval: 30s. All monitors carry the `release: prometheus` label for Prometheus operator discovery and `app.kubernetes.io/part-of: collective-mesh` for grouping.
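Putting the two requirements together, a peer Service that opts in could look like this (the app name and port number are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp                      # placeholder app name
  namespace: bryce
  labels:
    prometheus.io/scrape: "true"   # opt-in label
spec:
  selector:
    app: myapp
  ports:
    - name: metrics                # port name must be "metrics"
      port: 9090
      targetPort: 9090
```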
A dedicated Grafana dashboard (`collective-mesh`) visualizes these per-peer metrics.
Dashboard ConfigMap: kubernetes/apps/mesh-peers/grafana-dashboard-collective-mesh.yaml
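A handshake-staleness panel or alert could be driven by a PromQL expression like the following; the 5-minute threshold is an assumption, not taken from the dashboard:

```promql
# Peers whose last handshake is more than 5 minutes old
(time() - wireguard_latest_handshake_seconds) > 300
```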
```sh
# From ansible/ directory
ansible-playbook -i inventory/homelab playbooks/hamilton-wireguard.yml
```
Output: `.wg-output/{hamilton,bryce,jake,steve}.conf` — send to each peer via Signal or another secure channel.
After running:

- Deliver each peer their `.conf` file securely
- Peer namespaces enforce PSA `restricted` — no privileged containers allowed

All manifests in `kubernetes/apps/mesh-peers/`:
| File | Purpose |
|---|---|
| `collective-mesh.yaml` | Namespaces + RBAC for bryce, jake, steve (Roles + ClusterRoleBindings) |
| `collective-deployer-rbac.yaml` | `collective-deployer` ServiceAccount + permissions for automated peer config application |
| `peer-observability.yaml` | ServiceMonitor + PodMonitor pairs for bryce, jake, steve, ham namespaces |
| `grafana-dashboard-collective-mesh.yaml` | Grafana dashboard ConfigMap |
- Hamilton's specific RBAC: `kubernetes/apps/ham/hamilton-rbac.yaml`
- WireGuard exporter: `kubernetes/apps/monitoring/wireguard-exporter.yaml`
- Ansible provisioning: `ansible/playbooks/hamilton-wireguard.yml`