
Enable hardware acceleration for Jellyfin in Kubernetes - AMD Edition

Jellyfin is an open-source media server software that allows users to manage and stream their personal collection of movies, TV shows, music, and other media files. It is designed as an alternative to proprietary media server solutions like Plex and Emby, offering similar functionality but without any licensing costs or restrictions.

Jellyfin is running in my bare-metal Kubernetes cluster. However, running the container without additional configuration only gives Jellyfin access to the CPU for decoding video streams. If a GPU is available, it is usually better to use it: most GPUs offer hardware acceleration for well-known codecs and take work off the CPU. In my case, the Athlon 3000G has an on-board Vega 3 graphics chip. The only thing left to do is to give Jellyfin access to the GPU.

The steps are based on the official Jellyfin documentation; if you have another CPU/GPU combination or want to use a different method, please refer to it.

Prerequisites

To allow hardware acceleration, the Kubernetes pod needs access to the GPU interface. For AMD devices such as mine, we need to make sure that VA-API is available and that Jellyfin only runs on a node that exposes the GPU device.

Check if Hardware Acceleration is available

On AMD-based systems, the device we need is the render node, typically called renderD128. To check whether such a device is available on your system, use the following command.

ls -l /dev/dri

The output should look something like this:

drwxr-xr-x 2 root root         80 Mar  4 20:06 by-path
crw-rw---- 1 root video  226,   0 Mar  4 20:06 card0
crw-rw---- 1 root render 226, 128 Mar  4 20:06 renderD128

renderD128 is the device we are looking for. The respective name depends on your device and configuration.
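
If no render device shows up at all, it is worth checking whether the amdgpu kernel driver is loaded on the node, for example:

lsmod | grep amdgpu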

Make sure Jellyfin only runs on nodes with an AMD GPU

Since we are using a hostPath volume to mount the device into the Jellyfin pod, we need to ensure that the pod is scheduled on a node with the respective device. This is achieved by labeling the nodes that have hardware acceleration capabilities and by adjusting the deployment's nodeSelector:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: jellyfin
...
spec:
  template:
    spec:
      securityContext:
        fsGroup: 104
      containers:
        - name: jellyfin
...
      nodeSelector:
        gputype: amd

Note: The label and the label value can be chosen arbitrarily.
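
The label can be applied with kubectl; for example, assuming the node is named k8s-node-1:

kubectl label node k8s-node-1 gputype=amd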

Enable Hardware Acceleration

Mount the hardware device into the container

We need to make sure that the container has access to the hardware device.

---
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: jellyfin
spec:
  template:
    spec:
      securityContext:
        fsGroup: 104
      containers:
        - name: jellyfin
          volumeMounts:
...
          - mountPath: /dev/dri
            name: jellyfin-hardware
          securityContext:
            privileged: true
...
      volumes:
      - name: jellyfin-hardware
        hostPath:
          path: /dev/dri
...

Docker has a dedicated --device option, but in Kubernetes we rely on a hostPath volume mount to give the pod access to the device. In addition to mounting the device, we need to make sure that the pod is allowed to use it. This is done by elevating the securityContext (privileged: true) and by adjusting the group membership using fsGroup. The fsGroup value has to match the GID of the group that owns the device on the host, the render group in this case.
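
To find the correct value, check the GID of the render group on the host node; either of the following shows it (104 in my case):

getent group render
stat -c '%G %g' /dev/dri/renderD128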

Configure Jellyfin

Now that the pod is up and running, Jellyfin needs to know that hardware acceleration is available. We do this in the Jellyfin dashboard by selecting the VA-API option and entering the mounted hardware device path (/dev/dri/renderD128 in my case).

Jellyfin Configuration for VA-API

In addition, we need to check which codecs are supported. The easiest option is to exec into the pod and use the vainfo binary that ships with jellyfin-ffmpeg; this also verifies that the deployment has mounted everything correctly. If everything is working, the output of the following command:

/usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128

looks like this:

Trying display: drm
libva info: VA-API version 1.20.0
libva info: Trying to open /usr/lib/jellyfin-ffmpeg/lib/dri/radeonsi_drv_video.so
libva info: Found init function __vaDriverInit_1_20
libva info: va_openDriver() returns 0
vainfo: VA-API version: 1.20 (libva 2.20.0)
vainfo: Driver version: Mesa Gallium driver 23.2.1 for AMD Radeon Vega 3 Graphics (raven, LLVM 15.0.7, DRM 3.54, 6.5.0-21-generic)
vainfo: Supported profile and entrypoints
      VAProfileMPEG2Simple            : VAEntrypointVLD
      VAProfileMPEG2Main              : VAEntrypointVLD
      VAProfileVC1Simple              : VAEntrypointVLD
      VAProfileVC1Main                : VAEntrypointVLD
      VAProfileVC1Advanced            : VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointVLD
      VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
      VAProfileH264Main               : VAEntrypointVLD
      VAProfileH264Main               : VAEntrypointEncSlice
      VAProfileH264High               : VAEntrypointVLD
      VAProfileH264High               : VAEntrypointEncSlice
      VAProfileHEVCMain               : VAEntrypointVLD
      VAProfileHEVCMain               : VAEntrypointEncSlice
      VAProfileHEVCMain10             : VAEntrypointVLD
      VAProfileJPEGBaseline           : VAEntrypointVLD
      VAProfileVP9Profile0            : VAEntrypointVLD
      VAProfileVP9Profile2            : VAEntrypointVLD
      VAProfileNone                   : VAEntrypointVideoProc
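
The same check can be run from outside the pod with kubectl, assuming the deployment is named jellyfin as above:

kubectl exec deploy/jellyfin -- /usr/lib/jellyfin-ffmpeg/vainfo --display drm --device /dev/dri/renderD128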

The VA profiles now allow us to map everything to Jellyfin's configuration: profiles listed with VAEntrypointVLD indicate decoding support, while VAEntrypointEncSlice indicates encoding support. The official mapping from the Jellyfin team can be found here.

The final configuration for my system looks like this:

Jellyfin Configuration for Codecs and Hardware Acceleration

Check if hardware acceleration is working

Now everything, at least the hardware acceleration, should run smoothly. To check, we simply start a movie from our library that is encoded with one of the codecs verified above. After a couple of seconds we can check the Jellyfin logs to see if ffmpeg has been started with the correct options:

/usr/lib/jellyfin-ffmpeg/ffmpeg -analyzeduration 200M -init_hw_device vaapi=va:/dev/dri/renderD128 -filter_hw_device va -hwaccel vaapi -hwaccel_output_format vaapi -autorotate 0 -i file:"/data/ubuntu.iso" -autoscale 0 -map_metadata -1 -map_chapters -1 -threads 0 -map 0:0 -map 0:1 -map -0:s -codec:v:0 h264_vaapi -rc_mode VBR -b:v 6919652 -maxrate 6919652 -bufsize 13839304 -force_key_frames:0 "expr:gte(t,0+n_forced*3)" -vf "setparams=color_primaries=bt709:color_trc=bt709:colorspace=bt709,scale_vaapi=format=nv12:extra_hw_frames=24" -codec:a:0 libfdk_aac -ac 2 -ab 384000 -ar 48000 -af "volume=2" -copyts -avoid_negative_ts disabled -max_muxing_queue_size 2048 -f hls -max_delay 5000000 -hls_time 3 -hls_segment_type mpegts -start_number 0 -hls_segment_filename "/config/data/transcodes/1dea319b535951a883f7dc845580d9fd%d.ts" -hls_playlist_type vod -hls_list_size 0 -y "/config/data/transcodes/1dea319b535951a883f7dc845580d9fd.m3u8"
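
To pull that line from the pod without exec'ing into it, something like the following should work, assuming the deployment is named jellyfin and Jellyfin's default console logging is enabled:

kubectl logs deploy/jellyfin | grep hwaccel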

If you see a video and nothing is crashing, you are good to go.
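
For additional confirmation, you can watch the GPU utilization on the host node while the stream is playing; for example, if radeontop is installed on the node:

sudo radeontop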


stat /posts/hardware_acceleration_kubernets_jellyfin

2024-06-25: Initial publication of the article