FigJam Diagram: NAS Exporter — Prometheus Metrics Pipeline
Custom Python Prometheus exporter that monitors the Ugreen DXP4800 NAS (192.168.30.10). It checks NFS TCP reachability, mount accessibility, volume usage, NFS latency, and media library item counts. The exporter script is injected inline via a Kubernetes ConfigMap — no dedicated Docker image is required. Metrics are scraped by Prometheus every 60 seconds via a ServiceMonitor, and a pre-built Grafana dashboard (uid: nas-storage) provides full visibility.
| Property | Value |
|---|---|
| Namespace | media |
| Image | python:3.12-slim (inline script via ConfigMap) |
| Metrics port | 9355 |
| CPU request / limit | 10m / 100m |
| RAM request / limit | 64Mi / 128Mi |
| Security context | runAsUser: 10010 / runAsGroup: 10000 (svc-jellyfin UID) |
| NFS mount | 192.168.30.10:/volume1/media — ReadOnlyMany PV/PVC |
| Scrape interval | 60s via ServiceMonitor |
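The 60-second scrape via ServiceMonitor implies a manifest roughly like the following. This is a sketch only; the endpoint port name, labels, and selector are assumptions, and the real manifest lives in nas-exporter.yaml.

```yaml
# Illustrative ServiceMonitor matching the table above (names are assumed).
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: nas-exporter
  namespace: media
spec:
  selector:
    matchLabels:
      app: nas-exporter
  endpoints:
    - port: metrics     # Service port exposing 9355
      interval: 60s
```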
No credentials required — NFS mount uses IP-based access control on the NAS side. No Kubernetes Secrets needed.
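The reachability and TCP-latency checks (nas_up, nas_tcp_connect_latency_seconds) boil down to a timed TCP connect against port 2049. The real script is inline in the ConfigMap and isn't reproduced here; this is a minimal standalone sketch of that check.

```python
import socket
import time

def check_tcp(host: str, port: int, timeout: float = 5.0) -> tuple[int, float]:
    """Return (up, connect_latency_seconds) for a TCP endpoint.

    Sketch of the exporter's nas_up / nas_tcp_connect_latency_seconds
    checks: up is 1 if a TCP connection succeeds within the timeout,
    otherwise 0; latency is the elapsed connect time either way.
    """
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return 1, time.monotonic() - start
    except OSError:
        return 0, time.monotonic() - start
```

For the NAS this would be called as `check_tcp("192.168.30.10", 2049)` once per scrape.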
| Metric | Labels | Description |
|---|---|---|
| nas_up | — | NFS TCP port 2049 reachability (0 = down, 1 = up) |
| nas_nfs_mount_up | — | NFS mount read accessibility (0 = down, 1 = up) |
| nas_volume_total_bytes | mount | Total volume capacity in bytes |
| nas_volume_used_bytes | mount | Used volume space in bytes |
| nas_volume_free_bytes | mount | Free volume space in bytes |
| nas_volume_usage_ratio | mount | Usage fraction (0.0–1.0) |
| nas_nfs_latency_seconds | — | Directory listing latency on the NFS mount |
| nas_tcp_connect_latency_seconds | — | TCP connect latency to NAS port 2049 |
| nas_media_items_total | media_type | Top-level item count by type (movies, tv, music) |
| nas_downloads_pending_items | category | Downloads staging count by category |
| nas_scrape_duration_seconds | — | Total exporter scrape duration |
| nas_scrape_errors_total | — | Cumulative scrape error count |
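The nas_volume_* metrics can be derived from a single statvfs call on the mount point. A minimal sketch (the function name is illustrative; the actual inline script may differ in details such as how "used" is computed):

```python
import os

def volume_metrics(mount: str) -> dict[str, float]:
    """Compute total/used/free bytes and the usage ratio for a mount
    point, mirroring the nas_volume_* metrics."""
    st = os.statvfs(mount)
    total = st.f_blocks * st.f_frsize
    free = st.f_bavail * st.f_frsize  # space available to unprivileged users
    used = total - free
    return {
        "nas_volume_total_bytes": total,
        "nas_volume_used_bytes": used,
        "nas_volume_free_bytes": free,
        "nas_volume_usage_ratio": used / total if total else 0.0,
    }
```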
| Alert | Severity | For | Condition |
|---|---|---|---|
| NASDown | critical | 5m | NFS TCP port 2049 unreachable (nas_up == 0) |
| NASNFSMountDown | critical | 5m | NFS mount inaccessible (nas_nfs_mount_up == 0) |
| NASVolumeCritical | critical | 5m | Volume usage > 95% (nas_volume_usage_ratio > 0.95) |
| NASVolumeWarning | warning | 30m | Volume usage > 85% (nas_volume_usage_ratio > 0.85) |
| NASHighNFSLatency | warning | 10m | nas_nfs_latency_seconds > 2 |
| NASDownloadsBacklog | warning | 6h | Pending downloads > 10 (nas_downloads_pending_items > 10) |
| NASExporterDown | warning | 5m | Exporter pod is down (absent metrics) |
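As a concrete example, one row of this table maps onto a PrometheusRule entry roughly as follows. This is a sketch only; the group name and annotations are assumptions, and the authoritative rules live in nas-exporter.yaml.

```yaml
# Illustrative PrometheusRule fragment for NASVolumeWarning.
groups:
  - name: nas-exporter
    rules:
      - alert: NASVolumeWarning
        expr: nas_volume_usage_ratio > 0.85
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "NAS volume usage above 85%"
```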
Dashboard UID: nas-storage
Provisioned as a ConfigMap in the monitoring namespace with label grafana_dashboard: '1' — Grafana picks it up automatically via the sidecar.
| Row | Panels |
|---|---|
| Health Status | NAS up/down, NFS mount up/down, NFS latency, TCP latency, pending downloads, scrape duration |
| Volume Usage | Gauge, space stats table, usage over time, free space trend |
| Media Library | Library size by type, growth over time, downloads staging |
| NFS Performance | Latency over time, exporter errors |
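The sidecar provisioning mentioned above relies only on the ConfigMap carrying the grafana_dashboard label. A sketch of the wrapper (the ConfigMap name and JSON key are assumptions; the dashboard JSON itself is elided):

```yaml
# Illustrative dashboard ConfigMap picked up by the Grafana sidecar.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nas-storage-dashboard
  namespace: monitoring
  labels:
    grafana_dashboard: "1"
data:
  nas-storage.json: |
    { "uid": "nas-storage", ... }
```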
| File | Contents |
|---|---|
| kubernetes/apps/nas-exporter/nas-exporter.yaml | ConfigMap (exporter script), Deployment, Service, ServiceMonitor, PrometheusRule, PV, PVC |
| kubernetes/apps/nas-exporter/grafana-dashboard.yaml | Grafana dashboard ConfigMap (uid: nas-storage) in monitoring namespace |
No secrets required. Deploy directly:
```shell
kubectl apply -f kubernetes/apps/nas-exporter/nas-exporter.yaml
kubectl apply -f kubernetes/apps/nas-exporter/grafana-dashboard.yaml
```
Verify the pod is running and metrics are reachable:
```shell
kubectl get pods -n media -l app=nas-exporter
kubectl port-forward -n media svc/nas-exporter 9355:9355
curl http://localhost:9355/metrics | grep nas_up
```