FigJam Diagram: Tdarr — Distributed GPU Transcoding (expires 2026-04-13)
Tdarr is the transcoding orchestration layer in the media stack. It scans the NAS media library, identifies files that need re-encoding, and distributes transcoding jobs to GPU-accelerated worker nodes. Workers use Intel Quick Sync Video (QSV) via VA-API.
Namespace: media
Images: ghcr.io/haveagitgat/tdarr:2.58.02 (server), ghcr.io/haveagitgat/tdarr_node:2.58.02 (worker)

Note: No libraries or auto-transcode flows are configured by default. After deployment, configure libraries and transcode rules via the web UI.
| Resource | Kind | Details |
|---|---|---|
| tdarr-server-config | PVC (Longhorn) | 5Gi — Tdarr server DB and config |
| tdarr-nfs-pvc | PVC (NFS) | Full media library on NAS (server + workers) |
| tdarr-server | Deployment | 1 replica, Recreate strategy |
| tdarr-worker | DaemonSet | One worker per GPU node |
| tdarr-server | Service | :8265 (web), :8266 (worker communication) |
| tdarr | IngressRoute | Traefik, Authentik forwardAuth |
| tdarr-tls | Certificate | Let's Encrypt, tdarr.k3s.strommen.systems |
Server container (tdarr-server):

| Setting | Value |
|---|---|
| Image | ghcr.io/haveagitgat/tdarr:2.58.02 |
| PUID | 10016 (svc-tdarr on NAS) |
| PGID | 10000 (media-services group) |
| Web UI port | 8265 |
| Worker comm port | 8266 |
| CPU request/limit | 250m / 2 |
| Memory request/limit | 512Mi / 2Gi |
| Temp buffer | emptyDir 20Gi |
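The server settings above can be sketched as a Deployment spec. This is an illustrative reconstruction from the table, not the exact manifest; label names and field layout are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tdarr-server
  namespace: media
spec:
  replicas: 1
  strategy:
    type: Recreate            # single writer on the Longhorn config PVC
  selector:
    matchLabels:
      app: tdarr-server       # label name is an assumption
  template:
    metadata:
      labels:
        app: tdarr-server
    spec:
      containers:
        - name: tdarr-server
          image: ghcr.io/haveagitgat/tdarr:2.58.02
          env:
            - name: PUID
              value: "10016"  # svc-tdarr on NAS
            - name: PGID
              value: "10000"  # media-services group
          ports:
            - containerPort: 8265   # web UI
            - containerPort: 8266   # worker communication
          resources:
            requests: { cpu: 250m, memory: 512Mi }
            limits: { cpu: "2", memory: 2Gi }
```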
Worker container (tdarr-worker):

| Setting | Value |
|---|---|
| Image | ghcr.io/haveagitgat/tdarr_node:2.58.02 |
| PUID | 10016 (svc-tdarr on NAS) |
| PGID | 10000 (media-services group) |
| Server connection | tdarr-server:8266 |
| Security | privileged: true (required for VA-API DRM ioctls) |
| Supplemental groups | 44 (video), 109 (legacy render), 991 (render — Debian 13) |
| CPU request/limit | 1 / 4 |
| Memory request/limit | 1Gi / 4Gi |
| Temp buffer | emptyDir 50Gi |
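The security-relevant worker settings translate to a pod spec roughly like the following sketch; only the values listed in the table are sourced, and the container name is an assumption.

```yaml
spec:
  securityContext:
    # GIDs match video/render groups on the node OS (Debian 13)
    supplementalGroups: [44, 109, 991]
  containers:
    - name: tdarr-worker
      image: ghcr.io/haveagitgat/tdarr_node:2.58.02
      securityContext:
        privileged: true      # required for VA-API DRM ioctls on /dev/dri
      resources:
        requests: { cpu: "1", memory: 1Gi }
        limits: { cpu: "4", memory: 4Gi }
```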
Both containers run as root via s6-overlay and drop to PUID/PGID internally — `runAsUser` is not set in the pod spec.
Workers are deployed as a DaemonSet that schedules only on GPU nodes:

```yaml
nodeSelector:
  gpu: intel-uhd-630
tolerations:
  - key: gpu
    operator: Equal
    value: "true"
    effect: NoSchedule
```
Current GPU nodes: k3s-agent-4 (Intel UHD 630 via VFIO passthrough on pve4)
The server uses podAntiAffinity to avoid co-scheduling with workers, keeping GPU node resources available for transcoding.
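One way to express that anti-affinity is shown below; the label selector and topology key are assumptions, not the exact manifest.

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: tdarr-worker       # assumed worker pod label
        topologyKey: kubernetes.io/hostname
```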
Both server and worker mount the same NAS PVC:
| Container Path | Purpose |
|---|---|
| /media | Full NAS media library (read/write) |
| /temp | Transcode scratch space (emptyDir, node-local) |
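Those mounts look roughly like this in the pod spec (volume names assumed; the emptyDir size is 20Gi on the server, 50Gi on workers per the tables above):

```yaml
volumeMounts:
  - name: media
    mountPath: /media    # shared NFS PVC, read/write
  - name: temp
    mountPath: /temp     # node-local scratch for in-flight transcodes
volumes:
  - name: media
    persistentVolumeClaim:
      claimName: tdarr-nfs-pvc
  - name: temp
    emptyDir:
      sizeLimit: 50Gi    # worker value; server uses 20Gi
```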
NAS NFS export: 192.168.30.10:/volume1/media — see Storage Architecture
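For reference, an NFS-backed PersistentVolume for that export could be sketched as follows; the PV name and capacity are placeholders, not values from the manifest.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: tdarr-nfs        # assumed name
spec:
  capacity:
    storage: 1Ti         # placeholder; actual capacity lives on the NAS
  accessModes: [ReadWriteMany]
  nfs:
    server: 192.168.30.10
    path: /volume1/media
```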
Ingress: tdarr.k3s.strommen.systems via Traefik IngressRoute in the media namespace
Auth: authentik-forward-auth middleware from the public-ingress namespace (Authentik SSO)
TLS: tdarr-tls Certificate (Let's Encrypt DNS-01 via Route53)
Storage: NAS export /volume1/media

Monitoring: No Prometheus exporter is configured. Tdarr does not have a community exporter comparable to exportarr.
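Putting the ingress pieces together, a hedged sketch of the IngressRoute (entry point and exact spec are assumptions; host, middleware, service, and TLS secret names come from the text):

```yaml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: tdarr
  namespace: media
spec:
  entryPoints: [websecure]   # assumed entry point
  routes:
    - match: Host(`tdarr.k3s.strommen.systems`)
      kind: Rule
      middlewares:
        - name: authentik-forward-auth
          namespace: public-ingress
      services:
        - name: tdarr-server
          port: 8265
  tls:
    secretName: tdarr-tls
```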
Grafana dashboard: kubernetes/apps/media/grafana-dashboard.yaml — Intel GPU metrics (video/render engine utilization, sourced from intel_gpu_* Prometheus metrics)
See 4K Transcoding & GPU Configuration for Intel UHD 630 QSV metrics and GPU scheduling details.
After first deployment:
- Create libraries pointing at /media/movies or /media/tv
- Verify workers have registered with the server (tdarr-server:8266)

| Service | Role |
|---|---|
| Jellyfin | Plays the transcoded files |
| Radarr | Source of movie files Tdarr re-encodes |
| Sonarr | Source of TV files Tdarr re-encodes |
| 4K Transcoding & GPU Configuration | Intel UHD 630 setup, QSV capabilities |
| Storage Architecture | NFS PV layout and NAS service accounts |
Manifest: kubernetes/apps/media/tdarr.yaml