FigJam Diagram: Plex Media Server — k3s Deployment (expires 2026-04-13)
Plex runs in the media namespace alongside Jellyfin, sharing the same read-only NFS media library. It is the primary media server for external/mobile clients (Plex apps on TV, phone, tablet).
| Access | Value |
|---|---|
| External URL | https://plex.k3s.strommen.systems |
| Internal URL | http://plex.media.svc.cluster.local:32400 |
| Auth | Built-in Plex account auth (no Authentik middleware; external clients need direct access) |
| Namespace | media |
| Property | Value |
|---|---|
| Image | plexinc/pms-docker:1.43.0.10492-121068a07 (official Docker Hub) |
| Replicas | 1 |
| Strategy | Recreate |
| Port | 32400 |
| UID/GID | 10018 / 10000 (svc-plex NAS account / media-services group) |
| Timezone | America/Chicago |
| Advertise IP | https://plex.k3s.strommen.systems:443 |
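The identity and network settings above map onto the official image's documented environment variables. A minimal sketch of the container env, using values from the tables on this page (the actual manifest may differ):

```yaml
env:
  - name: PLEX_UID
    value: "10018"        # svc-plex NAS account
  - name: PLEX_GID
    value: "10000"        # media-services group
  - name: TZ
    value: "America/Chicago"
  - name: ADVERTISE_IP
    value: "https://plex.k3s.strommen.systems:443"
```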
Plex prefers k3s-agent-4 (Intel UHD 630 GPU) for hardware transcoding but will fall back to any amd64 worker if the GPU node is unavailable:
- nodeAffinity: `preferredDuringSchedulingIgnoredDuringExecution` for `gpu: intel-uhd-630`
- tolerations: key `gpu=true:NoSchedule` allows scheduling on the GPU node
- If `/dev/dri` is absent on the scheduled node, Plex auto-detects this and falls back to CPU transcoding

| Setting | Value | Reason |
|---|---|---|
| privileged | true | Required for /dev/dri DRM ioctls (VA-API) |
| fsGroup | 10000 | media-services group (NFS group access) |
| supplementalGroups | 44, 109, 991 | video group + render groups (Debian 13 / legacy) |
Note: `runAsUser` must NOT be set. s6-overlay starts as root and drops to `PLEX_UID`/`PLEX_GID` internally. Setting `runAsUser` causes "Permission denied" on `s6-setuidgid`.
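The scheduling preference, toleration, and security settings above can be sketched as a pod spec fragment. The label key/value (`gpu: intel-uhd-630`) and taint (`gpu=true:NoSchedule`) come from this page; the real manifest may structure them differently:

```yaml
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          preference:
            matchExpressions:
              - key: gpu
                operator: In
                values: ["intel-uhd-630"]
  tolerations:
    - key: gpu
      operator: Equal
      value: "true"
      effect: NoSchedule
  securityContext:
    fsGroup: 10000                      # media-services (NFS group access)
    supplementalGroups: [44, 109, 991]  # video + render groups
    # No runAsUser: s6-overlay must start as root and drop privileges itself
  containers:
    - name: plex
      securityContext:
        privileged: true                # /dev/dri DRM ioctls (VA-API)
```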
| Resource | Requests | Limits |
|---|---|---|
| CPU | 500m | 4 cores |
| Memory | 1Gi | 6Gi |
High memory limit (6Gi) accommodates Plex metadata indexing during library scans and concurrent 4K transcode sessions with VA-API. Actual idle usage is ~500Mi.
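As a plain manifest fragment, the table above corresponds to:

```yaml
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "4"
    memory: 6Gi
```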
| Volume | Type | Size | Mount | Notes |
|---|---|---|---|---|
| plex-config | Longhorn RWO | 10Gi | /config | Plex database, metadata, plugins |
| plex-nfs-pvc | NFS ROX | 10Ti | /media | Shared read-only with Jellyfin |
| emptyDir | ephemeral | 30Gi limit | /transcode | Transcode scratch; not worth block storage |
| /dev/dri | hostPath | n/a | /dev/dri | GPU device nodes (VA-API) |
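A sketch of the corresponding volumes and mounts; the PVC names come from the table, but the volume names (`config`, `media`, `transcode`, `dri`) are hypothetical placeholders:

```yaml
volumes:
  - name: config
    persistentVolumeClaim:
      claimName: plex-config      # Longhorn RWO, 10Gi
  - name: media
    persistentVolumeClaim:
      claimName: plex-nfs-pvc     # NFS ROX, shared with Jellyfin
  - name: transcode
    emptyDir:
      sizeLimit: 30Gi             # transcode scratch
  - name: dri
    hostPath:
      path: /dev/dri
      type: Directory
# Container side:
volumeMounts:
  - { name: config, mountPath: /config }
  - { name: media, mountPath: /media, readOnly: true }
  - { name: transcode, mountPath: /transcode }
  - { name: dri, mountPath: /dev/dri }
```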
TLS via cert-manager letsencrypt-prod ClusterIssuer. Certificate stored in plex-tls secret in media namespace.
```yaml
dnsNames:
  - plex.k3s.strommen.systems
```
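Put together, a minimal Certificate resource would look roughly like this; `secretName` and `issuerRef` come from this page, while the metadata name is an assumed placeholder:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: plex          # assumed name
  namespace: media
spec:
  secretName: plex-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  dnsNames:
    - plex.k3s.strommen.systems
```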
| Secret | Keys | How to create |
|---|---|---|
| plex-token | token | Plex auth token for API calls; obtain from Plex web UI account settings |
Credentials managed out-of-band — never committed to git.
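One way to create the plex-token secret out-of-band (never committed to git); the token value is a placeholder you supply:

```sh
kubectl create secret generic plex-token \
  --namespace media \
  --from-literal=token='<your-plex-auth-token>'
```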
`PLEX_CLAIM`: One-time claim token from https://plex.tv/claim (expires in 4 minutes). The server is already claimed, so this env var is commented out in the manifest; it is only needed when bootstrapping a new Plex instance.
A CronJob (plex-library-scan) triggers a full library refresh every 6 hours via the internal Plex API:
- Schedule: `0 */6 * * *` (every 6 hours)
- Image: curlimages/curl:8.12.1
- Sections: 1, 2, 4 (Movies, TV Shows, Music; verify IDs match your library)
The job calls POST /library/sections/{id}/refresh?X-Plex-Token=<token> against the internal service URL. Uses plex-token secret for auth.
Refinement note: Verify section IDs 1, 2, 4 still match the current Plex library layout. Section IDs can shift when libraries are deleted/recreated.
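A sketch of what that CronJob could look like, assuming the secret, image, schedule, and endpoint described above (container name and restart policy are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: plex-library-scan
  namespace: media
spec:
  schedule: "0 */6 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: scan
              image: curlimages/curl:8.12.1
              env:
                - name: PLEX_TOKEN
                  valueFrom:
                    secretKeyRef:
                      name: plex-token
                      key: token
              command: ["/bin/sh", "-c"]
              args:
                - |
                  # Refresh each library section against the internal service
                  for id in 1 2 4; do
                    curl -fsS -X POST \
                      "http://plex.media.svc.cluster.local:32400/library/sections/${id}/refresh?X-Plex-Token=${PLEX_TOKEN}"
                  done
```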
A PodDisruptionBudget with maxUnavailable: 0 means kubectl drain will block until Plex is manually scaled down or deleted. This prevents accidental downtime during node maintenance.
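A minimal sketch of that budget, assuming the pods carry an `app: plex` label (the actual selector may differ):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: plex
  namespace: media
spec:
  maxUnavailable: 0
  selector:
    matchLabels:
      app: plex   # assumed pod label
```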
To drain k3s-agent-4 with Plex running:
```sh
kubectl scale deployment plex -n media --replicas=0
kubectl drain k3s-agent-4 --ignore-daemonsets --delete-emptydir-data
# After maintenance:
kubectl uncordon k3s-agent-4
kubectl scale deployment plex -n media --replicas=1
```
Plex has no native Prometheus /metrics endpoint. Monitoring is via:
- Health check: GET /identity on port 32400
- PlexDown alert fires when `kube_deployment_status_replicas_available{deployment="plex"} == 0`

Known Gap: No Tautulli or exportarr-style exporter for Plex watch history / stream metrics. Jellyfin is preferred for detailed playback analytics.
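If the cluster runs the Prometheus Operator, the PlexDown alert could be expressed as a PrometheusRule like the following sketch; the `for` duration, severity label, and resource name are assumptions, only the expression comes from this page:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: plex-alerts   # assumed name
  namespace: media
spec:
  groups:
    - name: plex
      rules:
        - alert: PlexDown
          expr: kube_deployment_status_replicas_available{deployment="plex"} == 0
          for: 5m               # assumed grace period
          labels:
            severity: critical  # assumed severity
          annotations:
            summary: "Plex deployment has no available replicas"
```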