Intel UHD 630 (i5-8500T) VFIO passthrough from Proxmox to a k3s VM, enabling Quick Sync Video for Jellyfin, Plex, and Tdarr.
| Property | Value |
|---|---|
| Active GPU Node | k3s-agent-4 (VM on pve4) |
| GPU | Intel UHD 630 (CoffeeLake-S GT2), Device ID: 8086:3e92 |
| Passthrough | VFIO PCI — active since 2026-03-16 |
| Node Label | gpu=intel-uhd-630 |
| Longhorn | Disabled on k3s-agent-4 (aging NVMe on pve4) |
All other M920q hosts (pve1/pve2/pve3) have identical UHD 630 iGPUs but passthrough is not yet configured on them. See Scaling to Additional Nodes.
Run the GPU prep playbook on the target PVE host (one at a time, requires reboot):
```bash
cd ansible && source .env
ansible-playbook -i inventory/homelab playbooks/proxmox-gpu-prep.yml --limit pve4
```
This configures:

- Kernel cmdline: `intel_iommu=on iommu=pt`
- VFIO modules loaded at boot: `vfio`, `vfio_iommu_type1`, `vfio_pci`
- Host driver blacklist: `i915`, `snd_hda_intel`

Verify after the reboot:

```bash
ssh root@pve4
dmesg | grep -i iommu
# Should show: DMAR: IOMMU enabled

# Find iGPU PCI address
lspci -nn | grep VGA
# 00:02.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e92]
```
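Passthrough is cleanest when the iGPU is alone in its IOMMU group. Below is a hedged helper to list group membership on the PVE host; the function and its optional root argument are illustrative, not part of any playbook.

```shell
#!/bin/sh
# List each IOMMU group and the PCI devices it contains. Pass an
# alternate sysfs-style root to dry-run against a test directory.
list_iommu_groups() {
  root=${1:-/sys/kernel/iommu_groups}
  for dev in "$root"/*/devices/*; do
    [ -e "$dev" ] || continue          # no IOMMU groups: print nothing
    group=${dev#"$root"/}              # "<group>/devices/<addr>"
    printf 'group %s: %s\n' "${group%%/*}" "${dev##*/}"
  done | sort -V
}

list_iommu_groups   # on pve4, 0000:00:02.0 should appear in its own group
```

If other devices share the iGPU's group, they would be passed through together, so check this before committing to the VM rebuild.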
GPU passthrough requires q35 machine type and OVMF BIOS. Adding these to an existing VM requires destroy + recreate.
Warning: Drain the node before destroying the VM:
```bash
kubectl drain k3s-agent-4 --ignore-daemonsets --delete-emptydir-data
```
```hcl
# In k3s_agents list, for the target agent on pve4:
{
  name          = "k3s-agent-4"
  proxmox_node  = "pve4"
  ip_address    = "192.168.20.33/24"
  cores         = 12
  memory        = 28672
  machine_type  = "q35"
  bios          = "ovmf"
  gpu_passthrough = [
    {
      device = "0000:00:02.0"
      pcie   = true
      rombar = true
      xvga   = false
    }
  ]
}
```
```bash
cd terraform/environments/homelab-prod

# Preview — will show destroy+recreate
terraform plan -target='module.k3s_agents["k3s-agent-4"]'

# Apply (destroys and recreates the VM)
terraform apply -target='module.k3s_agents["k3s-agent-4"]'
```
Auth requirement: The bpg/proxmox provider's `hostpci` block requires `root@pam` credentials; API tokens cannot assign PCI devices. Verify `proxmox_username = "root@pam"` in the environment's `terraform.tfvars`.
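For orientation, the provider's `hostpci` block translates into a `hostpci0` entry in the VM config. A sketch of the roughly equivalent `qm` invocation follows; the helper function and VMID 104 are hypothetical, and the field mapping is assumed from the Terraform block above.

```shell
#!/bin/sh
# Build (but do not run) the qm command roughly equivalent to the
# gpu_passthrough block. VMID 104 is a placeholder.
build_hostpci() {
  vmid=$1; device=$2; pcie=$3; rombar=$4
  printf 'qm set %s -hostpci0 %s,pcie=%s,rombar=%s\n' \
    "$vmid" "$device" "$pcie" "$rombar"
}

build_hostpci 104 0000:00:02.0 1 1
# qm set 104 -hostpci0 0000:00:02.0,pcie=1,rombar=1
```

Seeing the generated `hostpci0` line is also useful when comparing against `qm config <vmid>` output while troubleshooting.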
After Terraform rebuilds the VM, run the GPU worker playbook:
```bash
cd ansible && source .env
ansible-playbook -i inventory/homelab playbooks/gpu-worker.yml --limit k3s-agent-4
```
This installs VA-API drivers and labels the node.
```bash
ssh debian@192.168.20.33
ls -la /dev/dri/
# Should show: card0, renderD128

vainfo
# Should show: Intel iHD driver, H.264/HEVC encode/decode profiles
```

```bash
kubectl get node k3s-agent-4 --show-labels | grep gpu
# Should show: gpu=intel-uhd-630
```
pve1/pve2/pve3 all have M920q nodes with UHD 630 — identical hardware. To add GPU passthrough:
1. Drain the node: `kubectl drain <node> --ignore-daemonsets --delete-emptydir-data`
2. Run `proxmox-gpu-prep.yml --limit pve1` (or pve2, pve3) — triggers a reboot
3. Update `terraform.tfvars` with `machine_type = "q35"`, `bios = "ovmf"`, and `gpu_passthrough = [...]` for that agent
4. Run `terraform apply -target='module.k3s_agents["k3s-agent-N"]'`
5. Run `gpu-worker.yml --limit k3s-agent-N`

Each additional GPU node gets a Tdarr Worker pod automatically (DaemonSet) and can handle Jellyfin/Plex fallback transcode via node affinity.
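The rollout steps can be sketched as a plan generator that only prints the commands for review, never executes them. The pve-host to agent-name pairing here is an assumption; adjust it to match the real inventory.

```shell
#!/bin/sh
# Print (not execute) the rollout commands for one more GPU node.
# The pve1 -> k3s-agent-1 pairing below is assumed, not confirmed.
gpu_rollout_plan() {
  pve=$1; agent=$2
  cat <<EOF
kubectl drain $agent --ignore-daemonsets --delete-emptydir-data
ansible-playbook -i inventory/homelab playbooks/proxmox-gpu-prep.yml --limit $pve
terraform apply -target='module.k3s_agents["$agent"]'
ansible-playbook -i inventory/homelab playbooks/gpu-worker.yml --limit $agent
EOF
}

gpu_rollout_plan pve1 k3s-agent-1
```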
GPU workloads use node selectors or affinity to land on GPU nodes:
```yaml
# Hard requirement (Tdarr DaemonSet)
nodeSelector:
  gpu: intel-uhd-630
```

```yaml
# Soft preference (Jellyfin/Plex; falls back to CPU transcode)
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
            - key: gpu
              operator: In
              values: ["intel-uhd-630"]
```
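Scheduling onto the GPU node is only half the story: the container also needs access to the render device. A hedged sketch of the device mount follows (the volume name, container name, and group ID are illustrative; check the actual render group GID on the node with `getent group render`):

```yaml
# Pod spec fragment: expose /dev/dri to the transcoding container
securityContext:
  supplementalGroups: [104]   # render group GID on the node (assumed)
containers:
  - name: jellyfin
    volumeMounts:
      - name: dri
        mountPath: /dev/dri
volumes:
  - name: dri
    hostPath:
      path: /dev/dri
```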
| Issue | Fix |
|---|---|
| `dmesg` shows no IOMMU | Check VT-d in BIOS. Rerun `proxmox-gpu-prep.yml` |
| VM won't start after GPU add | Verify IOMMU group isolation: `cat /sys/bus/pci/devices/0000:00:02.0/iommu_group/devices` |
| `/dev/dri` missing in VM | Ensure `hostpci0` in VM config: `qm config <vmid>` |
| `vainfo` shows no profiles | Install `intel-media-va-driver-non-free`. Rerun `gpu-worker.yml` |
| Terraform can't set `hostpci` | Verify `proxmox_username = "root@pam"` (not an API token) |
| OVMF boot fails | Ensure the VM template supports UEFI, or create a new UEFI template |
| Node label missing after rejoin | Rerun `gpu-worker.yml` (it applies the node label) |
| k3s-agent-4 lost `K3S_TOKEN` after upgrade | Restore `/etc/systemd/system/k3s-agent.service.env` (see Operations) |