FigJam Diagram: Kubernetes Network Policies — Default-Deny + Allowlist (expires 2026-04-13)
Kubernetes NetworkPolicies enforce namespace-level traffic segmentation across the k3s cluster. Each protected namespace uses a default-deny-all base policy that blocks all ingress and egress, with explicit allowlist rules layered on top. This limits the blast radius of a compromised workload: a pod in cardboard cannot initiate connections to pods in trade-bot or the monitoring stack without an explicit policy permitting it. Only namespaces with sensitive or externally facing workloads have policies applied; general app namespaces currently have no policies and rely on cluster-level controls.
The default-deny pattern blocks all traffic, then layers explicit allowlists. Below is the standard flow for a protected namespace:
Namespace exceptions:
- email-gateway additionally allows egress to AWS SES on :587.
- home-assistant additionally allows egress to 192.168.1.0/24 for LAN device integrations.
- monitoring adds ipBlock rules for the node subnets (192.168.20.0/24, 192.168.1.0/24) to reach the Proxmox hosts and the NAS.
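As a sketch of one such exception, the email-gateway SES rule might look like the following. The policy name is an assumption, and the real manifest may scope the destination CIDR more tightly than "any non-private address":

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ses-egress   # hypothetical name, not confirmed by the manifest
  namespace: email-gateway
spec:
  podSelector: {}          # applies to all pods in the namespace
  policyTypes:
    - Egress
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0          # external only: exclude private ranges
            except:
              - 10.0.0.0/8
              - 172.16.0.0/12
              - 192.168.0.0/16
      ports:
        - protocol: TCP
          port: 587                  # SES SMTP submission port
```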
Every protected namespace receives two policy layers:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: <namespace>
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
podSelector: {} matches all pods in the namespace. Both Ingress and Egress are listed with no rules, which denies all traffic in both directions.
Explicit allow policies are added on top of the deny base. Most namespaces share a standard set of allow rules:
| Rule | Direction | Selector / CIDR |
|---|---|---|
| Traefik ingress | Ingress | kube-system / app.kubernetes.io/name: traefik |
| Prometheus metrics scrape | Ingress | monitoring / app.kubernetes.io/name: prometheus |
| Cluster dashboard health checks | Ingress | default / app: cluster-dashboard |
| Intra-namespace pod traffic | Ingress + Egress | ipBlock: 10.42.0.0/16 (pod CIDR) |
| DNS resolution | Egress | kube-system:53 + 10.43.0.0/16:53 (service CIDR) |
| External HTTPS | Egress | 0.0.0.0/0:443 |
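The standard allow layer from the table above can be sketched as a single policy. This is an illustrative, abridged reconstruction (the policy name is an assumption, and the real manifest may split these rules across several policies):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-standard     # illustrative name
  namespace: <namespace>
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              app.kubernetes.io/name: traefik   # Traefik ingress
    - from:
        - ipBlock:
            cidr: 10.42.0.0/16                  # intra-namespace (pod CIDR)
  egress:
    - to:
        - ipBlock:
            cidr: 10.42.0.0/16                  # intra-namespace (pod CIDR)
    - to:
        - ipBlock:
            cidr: 10.43.0.0/16                  # service CIDR, for DNS
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0                     # external HTTPS
      ports:
        - protocol: TCP
          port: 443
```

The Prometheus and cluster-dashboard ingress rules from the table would follow the same namespaceSelector + podSelector shape as the Traefik rule.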
Note on ipBlock vs podSelector for intra-namespace traffic: intra-namespace allow rules use ipBlock: 10.42.0.0/16 rather than podSelector: {}. This is intentional: kube-router uses ipset-based podSelector matching, which has a sync lag, so newly created Job pod IPs are not added to the source ipset fast enough. A static CIDR match is immediate and avoids dropped packets during pod startup. See the gotcha section below.
| Namespace | Special Rules |
|---|---|
| cardboard | Standard pattern only |
| trade-bot | Standard pattern only |
| dev-workspace | Standard pattern only |
| proxmox-watchdog | Standard pattern only |
| email-gateway | + Egress to AWS SES on :587 |
| home-assistant | + Egress to LAN 192.168.1.0/24 for device integrations (lights, switches, sensors) |
| monitoring | Complex: adds ipBlock 192.168.20.0/24 (node subnet) and 192.168.1.0/24 ingress/egress for Proxmox and NAS scrapes. Required because namespaceSelector: {} only matches pod CIDRs, not host-network IPs. |
| open-webui | + Egress to external internet for Anthropic API, OpenRouter, and other AI provider endpoints |
| openclaw-ops | + Egress to Kubernetes API server, GitHub, and Slack |
| openclaw-personal | Minimal egress (job boards and API calls only) |
| media | Complex: Jellyfin NetworkPolicy (jellyfin-networkpolicy.yaml, Phase 5). Ingress: Traefik (kube-system), Prometheus (pod network + node ipBlock 192.168.20.0/24), Jellyseerr (same namespace). Egress: PostgreSQL (:5432), Redis (:6379), CoreDNS (UDP/TCP :53), NAS NFS (ipBlock 192.168.30.0/24, :2049), Authentik (MetalLB IP 192.168.20.200/32, :443), external HTTPS (excluding private RFC 1918 ranges). PostgreSQL and Redis ingress are locked to the Jellyfin server pod only. |
| security-scanner | + Egress to cluster pod CIDR 10.42.0.0/16 for scanning workloads |
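The monitoring namespace's node-subnet exception can be sketched as follows. The policy name is an assumption; the point of the example is that ipBlock is required here because namespaceSelector rules match only pod IPs, never host-network targets like the Proxmox hosts or the NAS:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-node-subnets   # hypothetical name
  namespace: monitoring
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - from:
        - ipBlock:
            cidr: 192.168.20.0/24   # node subnet
        - ipBlock:
            cidr: 192.168.1.0/24    # LAN: Proxmox hosts, NAS
  egress:
    - to:
        - ipBlock:
            cidr: 192.168.20.0/24
        - ipBlock:
            cidr: 192.168.1.0/24
```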
kube-router ipset sync lag: newly created Job pod IPs are not added to the kube-router ipset fast enough when using podSelector-based rules. The pod starts, attempts an outbound connection, and the packet is dropped because the ipset has not been updated yet. Using ipBlock: 10.42.0.0/16 (the full pod CIDR) bypasses ipset entirely: the kernel matches the static CIDR immediately. This applies to all intra-namespace allow rules in this cluster.
This pattern is safe because the pod CIDR is cluster-internal only. The tradeoff is that any pod in the cluster (not just the same namespace) technically matches the CIDR — but combined with the default-deny-all base and Traefik/Prometheus namespace selectors for ingress, the practical exposure is minimal.
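To make the gotcha concrete, the two rule shapes can be contrasted side by side. These are illustrative ingress fragments, not complete policies:

```yaml
# Shape that hits the kube-router ipset sync lag (avoided in this cluster):
# same-namespace pods are matched via an ipset that updates asynchronously.
ingress:
  - from:
      - podSelector: {}

# Shape used instead: a static CIDR the kernel matches immediately,
# so freshly started Job pods are covered from their first packet.
ingress:
  - from:
      - ipBlock:
          cidr: 10.42.0.0/16   # full pod CIDR
```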
The following namespaces do not have NetworkPolicies and use unrestricted pod networking:
- hamaja-recipes
- digital-signage

These namespaces are lower risk (no sensitive credentials, no external attack surface beyond Traefik) but should have policies added as the cluster matures. See the planning board for prioritization.
All NetworkPolicies are defined in a single manifest:

```
kubernetes/core/network-policies.yaml
```

Apply with:

```shell
kubectl apply -f kubernetes/core/network-policies.yaml
```