FigJam Diagram: kube-utils — Internal Honeypot Detection (expires 2026-04-13)
A deception service deployed on the cluster LAN to detect unauthorized scanning or probing of internal infrastructure. It mimics node-exporter on port 9100 to attract scanners targeting Kubernetes worker nodes.
| Namespace | kube-utils |
| Type | LoadBalancer (MetalLB auto-assigned IP from 192.168.20.200–220) |
| Port | 9100 (mimics node-exporter) |
| Image | python:3.12-slim (inline script via ConfigMap) |
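To make the mechanism concrete, here is a minimal sketch of what the ConfigMap's inline script could look like. The helper names (`record_probe`, `render_metrics`, `serve`) and the exact Prometheus exposition layout are assumptions, not the deployed code:

```python
import json
from collections import Counter
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

FAKE_SERVER = "cluster-metrics/1.4.2"          # fake Server: header value
_probes = Counter()                             # key: (path, method, src_ip, user_agent)

def record_probe(path, method, src_ip, user_agent):
    # Assumed helper: count one probe hit per label combination.
    _probes[(path, method, src_ip, user_agent)] += 1

def render_metrics():
    # Assumed helper: render honeypot_probe_total in Prometheus text format.
    lines = ["# TYPE honeypot_probe_total counter"]
    for (path, method, src_ip, ua), n in sorted(_probes.items()):
        lines.append(
            f'honeypot_probe_total{{path="{path}",method="{method}",'
            f'src_ip="{src_ip}",user_agent="{ua}"}} {n}'
        )
    return "\n".join(lines) + "\n"

class Handler(BaseHTTPRequestHandler):
    def version_string(self):
        return FAKE_SERVER                      # sent as the Server: header

    def do_GET(self):
        if self.path == "/metrics":
            body = render_metrics().encode()
            ctype = "text/plain; version=0.0.4"
        else:
            src = self.client_address[0]
            ua = self.headers.get("User-Agent", "-")
            record_probe(self.path, "GET", src, ua)
            self.log_message("PROBE src=%s method=GET path='%s' ua='%s'",
                             src, self.path, ua)
            body = json.dumps({"status": "ok"}).encode()
            ctype = "application/json"
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve(port=9100):
    ThreadingHTTPServer(("", port), Handler).serve_forever()

# In the pod, the script would end with: serve()
```

Every non-`/metrics` request bumps the counter and logs a PROBE line; Prometheus scrapes of `/metrics` are deliberately not counted, so routine monitoring traffic cannot trip the alert.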
Behavior:
- Every request except /metrics is recorded as a probe hit.
- Responses return {"status": "ok"} and a fake Server: cluster-metrics/1.4.2 header to appear legitimate and encourage further exploration.
- /metrics returns a Prometheus counter (honeypot_probe_total) with labels: path, method, src_ip, user_agent.

Alert rule:
alert: InternalServiceProbed
expr: sum(honeypot_probe_total) > 0
for: 0m
severity: critical
Fires immediately (for: 0m) on the first probe hit. This is a critical alert — any access to this endpoint means an unauthorized host is scanning the cluster VLAN.
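Wired into the Prometheus Operator, the rule above might look like the following PrometheusRule fragment. The metadata name and annotation text are assumptions; the expr, for, and severity fields come from the notes above:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: honeypot-rules        # assumed name
  namespace: kube-utils
spec:
  groups:
    - name: honeypot
      rules:
        - alert: InternalServiceProbed
          expr: sum(honeypot_probe_total) > 0
          for: 0m
          labels:
            severity: critical
          annotations:
            summary: "Honeypot probed: unauthorized host scanning the cluster VLAN"
```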
# See who probed
kubectl logs -n kube-utils -l app.kubernetes.io/name=metrics-collector --tail=50
# Query in Grafana (Loki)
{namespace="kube-utils"} |= "PROBE"
Log format:
PROBE ts=2026-04-05T04:00:00Z src=192.168.20.42 method=GET path='/api/v1/nodes' ua='kube-probe/1.29'
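When triaging, it can help to parse PROBE lines pulled out of kubectl or Loki into structured fields. This is a hypothetical helper, not part of the deployed script; the field layout matches the log format shown above:

```python
import re

# Matches: PROBE ts=... src=... method=... path='...' ua='...'
PROBE_RE = re.compile(
    r"PROBE ts=(?P<ts>\S+) src=(?P<src>\S+) method=(?P<method>\S+) "
    r"path='(?P<path>[^']*)' ua='(?P<ua>[^']*)'"
)

def parse_probe(line):
    """Return a dict of PROBE fields, or None if the line doesn't match."""
    m = PROBE_RE.search(line)
    return m.groupdict() if m else None
```

Feeding each log line through `parse_probe` gives you `src`/`ua` pairs to cross-check against known hosts on the 192.168.20.0/24 VLAN.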
Design notes:
- Port 9100 mimics node-exporter, which runs on all k3s worker nodes; scanners looking for Prometheus exporters will find this first.
- … default breaks MetalLB assignment (known anti-pattern).
- Runs as nobody with runAsNonRoot: true.

Manifests: kubernetes/apps/kube-utils/kube-utils.yaml -- Namespace, ConfigMap (inline Python server), Deployment, LoadBalancer Service, ServiceMonitor, PrometheusRule