Recently, a new H.264 decoder, using VA-API, was merged in gst-plugins-bad.
Why another VA-based H.264 decoder if there is already gstreamer-vaapi?
As usual, a historical perspective may give some clues.
It started when Seungha Yang implemented the GStreamer decoders for Windows using DXVA2 and D3D11 APIs.
Perhaps we need to take one step back and explain what stateless decoders are.
Video decoders are magic and opaque boxes where we push encoded frames, and later pop fully decoded frames in raw format. This is how OpenMAX and V4L2 decoders work, for example.
Internally, we can imagine those magic and opaque boxes performing two main operations:
- Codec state handling
- Signal processing like Fourier-related transformations (such as DCT), entropy coding, etc. (DSP, in general)
The codec state handling basically extracts, from the stream, each frame’s parameters and its compressed data, so the DSP algorithms can decode the frames. Codec state handling can be done with generic CPUs, while DSP algorithms are massively improved through special-purpose processors.
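To give a flavor of the DSP side, here is a toy, purely illustrative sketch: a naive 1-D DCT-II in plain Python. Real decoders apply hardware-accelerated 2-D transforms over pixel blocks, so this is only meant to show the kind of arithmetic that special-purpose processors speed up.

```python
import math

def dct_ii(samples):
    """Naive O(n^2) DCT-II; codecs use its 2-D form over pixel blocks."""
    n = len(samples)
    return [
        sum(x * math.cos(math.pi / n * (i + 0.5) * k) for i, x in enumerate(samples))
        for k in range(n)
    ]

# A flat row of pixels: all the energy lands in the DC coefficient (k = 0).
coeffs = dct_ii([16] * 8)
print(round(coeffs[0]))  # 128
print(round(coeffs[1]))  # 0
```

Even in this toy form you can see why the transform is a good offload target: it is pure, regular number crunching with no knowledge of the bitstream.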
These video decoders are known as stateful decoders, and they are usually distributed as closed binary blobs.
Soon, silicon vendors realized they could offload the burden of state handling to third-party user-space libraries, releasing what is known as stateless decoders. With them, your code not only has to push frames into the opaque box, but now it shall handle the codec specifics to provide all the parameters and references for each frame. VAAPI and DXVA2 are examples of those stateless decoders.
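To give a flavor of what “handling the codec specifics” means, here is a small sketch (mine, not from any of the libraries mentioned) that parses the one-byte header of an H.264 NAL unit. This is the kind of bitstream bookkeeping a stateless decoder’s client must do on the CPU before it can fill in the parameter structures the driver expects.

```python
# A few well-known H.264 NAL unit types (ITU-T H.264, Table 7-1).
NAL_TYPES = {1: "non-IDR slice", 5: "IDR slice", 6: "SEI", 7: "SPS", 8: "PPS"}

def parse_nal_header(byte):
    """Split the one-byte H.264 NAL unit header into its three fields."""
    return {
        "forbidden_zero_bit": byte >> 7,          # must be 0 in a valid stream
        "nal_ref_idc": (byte >> 5) & 0x3,         # non-zero: used as a reference
        "nal_unit_type": byte & 0x1F,             # what kind of payload follows
    }

# 0x67 = 0b0_11_00111: a reference NAL of type 7, i.e. a sequence parameter set.
hdr = parse_nal_header(0x67)
print(NAL_TYPES[hdr["nal_unit_type"]])  # SPS
```

Multiply this by sequence and picture parameter sets, slice headers, and reference picture list management, and you get an idea of why a shared state-handling library is worth having.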
Returning to Seungha’s implementation: in order to write their DXVA2/D3D11 decoders, they also needed a state-handling library for each codec. And Seungha wrote that library!
Initially they wanted to reuse the state handling in gstreamer-vaapi, which works pretty well, but its internal library, from the GStreamer perspective, is over-engineered: it is impossible to rip out only the state handling without importing all its data types. Which is kind of sad.
Later, Nicolas Dufresne realized that this library could be reused by other GStreamer plugins, because more stateless decoders are now available, particularly V4L2 stateless, in which he is interested. Nicolas moved Seungha’s code into a library in gst-plugins-bad.
libgstcodecs provides state handling of H.264, H.265, VP8 and VP9.
Let’s return to our original question: Why another VA-based H.264 decoder if there is already one in gstreamer-vaapi?
The quick answer is «to pay my technical debt».
As we already mentioned, gstreamer-vaapi is big and over-engineered, though we have been simplifying the internal libraries; in particular, He Junyan has done a lot of work replacing the internal base class, GstMiniObject. Also, this kind of project, with a lot of untouched code, carries a lot of cargo-cult decisions.
So I took the libgstcodecs opportunity to write a simple, thin and lean H.264 decoder, using new VA API calls (vaExportSurfaceHandle(), for example) and learning from other implementations, such as FFmpeg and ChromeOS. This exercise allowed me to identify the dusty spots in gstreamer-vaapi and how they should be fixed (and we have been doing it since then!).
Now, let me be crystal clear: GStreamer VA-API is not going anywhere. It is, right now, one of the most feature-complete implementations using VA-API, even with its integration issues, and we are working on them. In particular, Intel folks are working hard on a new AV1 decoder, enhancing the encoders and adding new video post-processing features.
But this new vah264dec is an experimental VA-API decoder, which aims for tight integration with GStreamer, oriented to providing a good experience in the most common use cases and to enhancing the libgstcodecs library shared with other stateless decoders, while avoiding Intel-specific nuances.
These are the main characteristics and plans of this new decoder:
- It uses, by default, a DRM connection to the VA display, avoiding the trouble of choosing between X11 and Wayland.
- It uses the first DRM device found as its VA display.
- In the future, users will be able to provide their custom VA display through the pipeline’s context.
- It requires libva >= 1.6
- No multiview/stereo profiles, nor interlaced streams, because libgstcodecs doesn’t handle them yet.
- It is incompatible with gstreamer-vaapi: mixing elements might lead to problems.
- Even if memory:VAMemory is exposed, it is not yet handled by any other element.
- Users will get VASurfaces via mapping as GstGL does with textures.
- Caps templates are dynamically generated by querying VAAPI.
- YV12 and I420 are added to the system memory caps because they seem to be supported by all the drivers when downloading frames onto main memory, and they are used by xvimagesink and others, avoiding color conversion.
- Decoding surfaces aren’t bound to the context, so their number can grow beyond the DPB size, allowing smooth reverse playback.
- There is no error handling and recovery yet.
- The element is supposed to spawn one instance per renderD node with VA-API driver support found (like gstv4l2 does), but this hasn’t been tested yet.
Now you may be asking: how do I use vah264dec?
vah264dec has a NONE rank, which means it will never be autoplugged, but you can use the GST_PLUGIN_FEATURE_RANK environment variable trick. Since GST_RANK_PRIMARY is 256, a rank of 259 puts it above the existing decoders:
$ GST_PLUGIN_FEATURE_RANK=vah264dec:259 gst-play-1.0 ~/video.mp4
And that’s it!