FigJam Diagram: Jupyter Notebook — Cluster Analytics (expires 2026-04-13)
Interactive Python notebook environment deployed in the jupyter namespace. Primarily used for cluster analytics — querying Prometheus metrics and Loki logs directly from within the cluster via DNS.
| Property | Value |
|---|---|
| URL | https://jupyter.k3s.internal.strommen.systems |
| Namespace | jupyter |
| Image | harbor.k3s.internal.strommen.systems/staging/jupyter:latest |
| Port | 8888 |
| Storage | 5Gi Longhorn PVC (jupyter-data) at /home/jovyan/work |
| Auth | Token-based (JUPYTER_TOKEN env var — see secrets below) |
| PSS | baseline (enforced via kubernetes/core/pod-security-standards.yaml) |
| Node affinity | Excluded from k3s-agent-4 (Longhorn disabled on pve4) |
Image policy violation: This service uses the `:latest` tag, which violates cluster policy. The image should be pinned to a `sha-<commit>` tag. Low priority since Jupyter is a dev tool, but it should be addressed when the image is next rebuilt.
From within a notebook, use cluster-internal DNS addresses:
```python
import requests

# Prometheus — PromQL instant query
r = requests.get(
    "http://prometheus-kube-prometheus-prometheus.monitoring.svc.cluster.local:9090/api/v1/query",
    params={"query": "kube_pod_status_ready{condition='true'}"},
)
data = r.json()

# Loki — LogQL query range
r = requests.get(
    "http://loki.monitoring.svc.cluster.local:3100/loki/api/v1/query_range",
    params={"query": '{namespace="media"}', "limit": 100},
)
logs = r.json()
```
JUPYTER_TOKEN is loaded from the `jupyter-token` Kubernetes secret (key: `token`); the notebook cannot be accessed without it.
Bootstrap:
```shell
kubectl create secret generic jupyter-token -n jupyter \
  --from-literal=token=$(openssl rand -hex 32)
```
Note: After creating the secret, restart the Jupyter pod (`kubectl rollout restart deployment/jupyter -n jupyter`). Access the notebook at the URL printed in the pod logs, or pass the token directly in the URL: `https://jupyter.k3s.internal.strommen.systems/?token=<token>`.
| Resource | Request | Limit |
|---|---|---|
| CPU | 100m | 2 |
| Memory | 512Mi | 2Gi |
The 2-CPU limit is intentional — notebooks doing heavy Pandas/NumPy operations should not saturate a worker node.
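The table above corresponds to a container `resources` block along these lines (a sketch; the exact field placement in the actual Deployment manifest may differ):

```yaml
resources:
  requests:
    cpu: 100m
    memory: 512Mi
  limits:
    cpu: "2"        # hard cap so heavy Pandas/NumPy work can't saturate a node
    memory: 2Gi
```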
The image is built from kubernetes/apps/jupyter/Dockerfile and pushed to Harbor staging. It extends the standard Jupyter base with any additional packages needed for cluster analytics.
```shell
cd kubernetes/apps/jupyter
docker buildx build --platform linux/amd64 --provenance=false \
  -t harbor.k3s.internal.strommen.systems/staging/jupyter:latest \
  --push .
```
After pushing, restart the Deployment to pull the new image:
```shell
kubectl rollout restart deployment/jupyter -n jupyter
```
Notebooks are saved to `/home/jovyan/work`, which is backed by a 5Gi Longhorn PVC. The PVC uses ReadWriteOnce, so only one notebook pod can mount it at a time (this matches the Recreate deployment strategy).
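The PVC described above would look roughly like this (a sketch reconstructed from the properties table; the `storageClassName` is assumed to be the default Longhorn class name):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jupyter-data
  namespace: jupyter
spec:
  accessModes:
    - ReadWriteOnce          # only one pod may mount it — matches Recreate strategy
  storageClassName: longhorn # assumed default Longhorn class name
  resources:
    requests:
      storage: 5Gi
```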
No backup CronJob: the `jupyter` namespace has no pg_dump or file backup job. Notebooks stored in `/home/jovyan/work` will be lost if the PVC is deleted. Back up important notebooks to the Git repo or NAS manually.
Missing `/metrics` endpoint: the Jupyter pod does not expose a `/metrics` endpoint for Prometheus, which violates the cluster rule that every deployed service must expose metrics. A future improvement would be to add jupyter-resource-usage or a custom Prometheus metrics sidecar.