Review of Igalia Multimedia activities (2022)

We, Igalia’s multimedia team, would like to share with you our achievements over the past year, 2022.

WebKit Multimedia

WebRTC

Phil already wrote the first blog post of a series on this topic: WebRTC in WebKitGTK and WPE, status updates, part I. Please be sure to give it a glance; it has nice videos.

Long story short, last year we started to support Media Capture and Streams in WebKitGTK and WPE using GStreamer, covering input devices (camera and microphone), desktop sharing, WebAudio, and web canvas. But this is just the first step. We are currently working on RTCPeerConnection, also using GStreamer, to share all these captured streams with other web peers. Meanwhile, we’ll wait for the second episode of Phil’s series 🙂

MediaRecorder

We worked on an initial implementation of MediaRecorder with GStreamer (1.20 or newer). The specification is about allowing a web browser to record a selected stream; for example, a voice-memo or video application which could encode and upload a capture of your microphone or camera.

Gamepad

While WebKitGTK already has Gamepad support, WPE lacked it. We did the implementation last year, and there’s a blog post about it: Gamepad in WPEWebKit, with a video showing a demo of it.

Capture encoded video streams from webcams

Some webcams only provide high-resolution frames encoded in H.264 or similar formats. In order to support those resolutions with such webcams, we added support for negotiating these formats and decoding them internally to handle the streams. Though this is just the beginning of more efficient support.
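
As a sketch, this kind of negotiation can be exercised from the command line. The device path, caps and downstream decoder here are assumptions for illustration, not taken from the actual patches:

```shell
# Hypothetical example: request H.264 directly from a webcam that can
# produce encoded frames, then parse and decode it in the pipeline.
# Device path and resolution are assumptions.
PIPELINE="v4l2src device=/dev/video0 \
  ! video/x-h264,width=1920,height=1080 \
  ! h264parse ! avdec_h264 ! videoconvert ! autovideosink"

# Uncomment to run against real hardware:
# gst-launch-1.0 $PIPELINE
echo "$PIPELINE"
```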

Flatpak SDK maintenance

A lot of effort went into maintaining the Flatpak SDK for WebKit. It is a set of runtimes that makes it possible to have a reproducible build of WebKit, independent of the Linux distribution in use. Nowadays the Flatpak SDK is used in WebKit’s EWS and by many developers.

Among all the features added during the year, we can highlight Rust support, a full integrity check before upgrading, and a way to override dependencies as local projects.

MSE/EME enhancements

As every year, massive work was done in the GStreamer-based WebKit ports on Media Source Extensions and Encrypted Media Extensions, improving the user experience with different streaming services on the Web, such as Odysee, Amazon, DAZN, etc.

In the case of encrypted media, GStreamer-based WebKit ports provide the stubs to communicate with an external Content Decryption Module (CDM). If you want to support this on your platform, you can reach out to us.

We also worked on a video demo showing how MSE/EME work on a Raspberry Pi 3 using WPE:

WebAudio demo

We also spent time recording video demos, such as this one, showing WebAudio using WPE on a desktop computer.

GStreamer

We managed to merge a lot of bug fixes into GStreamer, which in many cases can be harder to produce than new features, though the latter are more interesting to talk about, such as those related to making Rust a main development language for GStreamer besides C.

Rust bindings and GStreamer elements for Vonage Video API / OpenTok

OpenTok is the legacy name of the Vonage Video API, a PaaS (Platform as a Service) that eases the development and deployment of WebRTC services and applications.

We published our work on GitHub: Rust bindings for both the Client SDK for Linux and the Server SDK (REST API), along with a GStreamer plugin to publish and subscribe to video and audio streams.

GstWebRTCSrc

In the beginning there was webrtcbin, an element that implements the majority of the W3C RTCPeerConnection API. It’s so flexible and powerful that it’s rather hard to use for the most common cases. Then appeared webrtcsink, a wrapper of webrtcbin written in Rust, which receives GStreamer streams to be offered and streamed to web peers. Later on, we developed webrtcsrc, the webrtcsink counterpart: an element whose source pads push streams coming from web peers, such as another browser, forwarding those Web streams as GStreamer ones in a pipeline. Both webrtcsink and webrtcsrc are written in Rust.
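
A minimal sketch of how the pair fits together on the command line (the exact properties and the signalling setup are assumptions; consult the gst-plugins-rs documentation for the real ones):

```shell
# Producer: offer a test stream to web peers through webrtcsink.
SEND="videotestsrc is-live=true ! video/x-raw,width=640,height=480 ! webrtcsink"

# Consumer: pull streams from a web peer with webrtcsrc and display them.
RECV="webrtcsrc ! videoconvert ! autovideosink"

# Uncomment to run (requires gst-plugins-rs and a signalling server):
# gst-launch-1.0 $SEND
# gst-launch-1.0 $RECV
echo "$SEND"
echo "$RECV"
```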

Behavior-Driven Development test framework for GStreamer

Behavior-Driven Development is gaining relevance with tools like Cucumber for Java and its domain-specific language, Gherkin, to define software behaviors. Rustaceans have picked up these ideas and developed cucumber-rs. The logical consequence was obvious: why not GStreamer?

Last year we tinkered with GStreamer-Cucumber, a BDD framework to define behavior tests for GStreamer pipelines.

GstValidate Rust bindings

There has been some discussion about whether BDD is the best way to test GStreamer pipelines; there’s also GstValidate, and last year we added its Rust bindings.

GStreamer Editing Services

Not everything was Rust, though. We also worked hard on GStreamer’s nuts and bolts.

Last year, we gathered the team to hack on GStreamer Editing Services, particularly to explore adding OpenGL and DMABuf support, such as downloading or uploading a texture before processing, and selecting a proper filter to avoid those transfers.

GstVA and GStreamer-VAAPI

We helped with the maintenance of GStreamer-VAAPI and the development of its near replacement, GstVA, adding new elements such as the H.264 encoder, the compositor and the JPEG decoder. We also participated in the debate and code review around negotiating DMABuf streams in the pipeline.

Vulkan decoder and parser library for CTS

You might have heard that Vulkan has now integrated video decoding into its API, while encoding is still work in progress. We devoted time to helping Khronos with the Vulkan Video Conformance Tests (CTS), particularly with a parser based on GStreamer, and to developing an H.264 decoder in GStreamer using the Vulkan Video API.

You can check the presentation we gave at the last Vulkanised.

WPE Android Experiment

In a joint adventure with Igalia’s WebKit team, we did some experiments to port WPE to Android. This is just an internal proof of concept so far, but we are looking forward to seeing how this will evolve in the future, and what new possibilities it might open up.

If you have any questions about WebKit, GStreamer, Linux video stack, compilers, etc., please contact us.

GStreamer compilation with third party libraries

Suppose that you have to hack a GStreamer element which requires a library that is not (yet) packaged by your distribution, nor wrapped as a Meson subproject. What do you do?

In our case, we needed the latest versions of Vulkan-Headers, Vulkan-Loader and Vulkan-Tools, which are interrelated CMake projects.

For these cases, GStreamer’s uninstalled development scripts can use a special directory: gstreamer/prefix. As the README.md says:

NOTE: In the development environment, a fully usable prefix is also configured in gstreamer/prefix where you can install any extra dependency/project.

This means that the gstenv.py script (the one responsible for setting up the uninstalled development environment) will add:

  • gstreamer/prefix/bin in PATH for executable files.
  • gstreamer/prefix/lib and gstreamer/prefix/share/gstreamer-1.0 in GST_PLUGIN_PATH, for out-of-tree elements.
  • gstreamer/prefix/lib in GI_TYPELIB_PATH for GObject Introspection metadata.
  • gstreamer/prefix/lib/pkgconfig in PKG_CONFIG_PATH for third party dependencies (our case!)
  • gstreamer/prefix/etc/xdg for XDG_CONFIG_DIRS for XDG compliant configuration files.
  • gstreamer/prefix/lib and gstreamer/prefix/lib64 in LD_LIBRARY_PATH for third party libraries.
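
In other words, gstenv.py does roughly the equivalent of the following shell setup (the checkout location is just an example):

```shell
# Rough shell equivalent of what gstenv.py adds for gstreamer/prefix.
PREFIX="$HOME/gst/gstreamer/prefix"

export PATH="$PREFIX/bin:$PATH"
export GST_PLUGIN_PATH="$PREFIX/lib:$PREFIX/share/gstreamer-1.0:$GST_PLUGIN_PATH"
export GI_TYPELIB_PATH="$PREFIX/lib:$GI_TYPELIB_PATH"
export PKG_CONFIG_PATH="$PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"
export XDG_CONFIG_DIRS="$PREFIX/etc/xdg:$XDG_CONFIG_DIRS"
export LD_LIBRARY_PATH="$PREFIX/lib:$PREFIX/lib64:$LD_LIBRARY_PATH"
```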

Therefore, the general idea is to compile those third-party libraries with gstreamer/prefix as their installation prefix.

In our case, the Vulkan repositories are interrelated, so they need to be compiled in a certain order. Also, for self-containment, we decided to clone them in gstreamer/subprojects.

Vulkan-Headers

$ cd ~/gst/gstreamer/subprojects
$ git clone git@github.com:KhronosGroup/Vulkan-Headers.git
$ cd Vulkan-Headers
$ mkdir build
$ cd build
$ cmake -GNinja -DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/home/vjaquez/gst/gstreamer/prefix ..
$ cmake --build .
$ cmake --install .

Vulkan-Loader

$ cd ~/gst/gstreamer/subprojects
$ git clone git@github.com:KhronosGroup/Vulkan-Loader.git
$ cd Vulkan-Loader
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Debug -DVULKAN_HEADERS_INSTALL_DIR=/home/vjaquez/gst/gstreamer/prefix -DCMAKE_INSTALL_PREFIX=/home/vjaquez/gst/gstreamer/prefix ..
$ cmake --build .
$ cmake --install .

Vulkan-Tools

$ cd ~/gst/gstreamer/subprojects
$ git clone git@github.com:KhronosGroup/Vulkan-Tools.git
$ cd Vulkan-Tools
$ mkdir build
$ cd build
$ cmake -DCMAKE_BUILD_TYPE=Debug -DVULKAN_HEADERS_INSTALL_DIR=/home/vjaquez/gst/gstreamer/prefix -DCMAKE_INSTALL_PREFIX=/home/vjaquez/gst/gstreamer/prefix ..
$ cmake --build .
$ cmake --install .

Right now we have the Vulkan headers and the Vulkan loader pkg-config file in place. So we should be able to compile GStreamer, right?

Not exactly, because gstenv.py only sets the environment variables for the development environment, not for the GStreamer compilation itself. But the solution is simple, since we have everything installed in the proper place: just set PKG_CONFIG_PATH when executing meson setup:

$ PKG_CONFIG_PATH=/home/vjaquez/gst/gstreamer/prefix/lib/pkgconfig meson setup --buildtype=debug build

Video decoding in GStreamer with Vulkan Video extension (part 2)

It has been a while since I reported my tinkering with the Vulkan Video provisional extension. Now the specification will have its final release soonish, and there has also been more engagement within the open source communities, such as the work-in-progress FFmpeg implementation by Lynne (please, please, read that post), and the also work-in-progress Mesa 3D drivers, both for AMD and Intel, by Dave Airlie! Along with the well-known NVIDIA beta drivers for Vulkan.

From our side, we have been trying to provide an open source alternative to the video parser used by the Conformance Test Suite and the NVIDIA vk_video_samples, using GStreamer: GstVkVideoParser, which intends to be a drop-in replacement for the current proprietary parser library.

Along the way, we have sketched out Vulkan Video support in gfxreconstruct, for getting traces of the API usage. Sadly, it’s kind of bit-rotten right now, even more so because the specification has changed since then.

Regarding the H.264 decoder for GStreamer, we have just restarted hacking on it. The merge request was moved to the monorepo, but for the sake of the much-needed complete rewrite, we moved the work to this branch (vkh264dec). We needed to rewrite it because, besides the specification updates, we have learned many things along the journey, such as the out-of-band parameter updates, Vulkan’s recommendation to pre-allocate memory as much as possible, the DPB/reference handling, the debate about buffer vs. slice uploading, and other friction points that Lynne has spotted for future early adopters.

The way to compile it is to grab the branch and build GStreamer as usual with meson:

meson setup builddir -Dgst-plugins-bad:vulkan-video=enabled --buildtype=debug
ninja -C builddir

And run simple pipelines such as

gst-launch-1.0 filesrc location=INPUT ! parsebin ! vulkanh264dec ! fakesink -v

Our objective is to have a functional demo for the next Vulkanised, in February. We are very ambitious: we want it to work on Linux and Windows, and on as many GPUs as possible. Wish us luck. And happy December festivities!

GstVA H.264 encoder, compositor and JPEG decoder

There are, right now, three new GstVA elements merged in main: vah264enc, vacompositor and vajpegdec.

Just to recap, GstVA is a GStreamer plugin in gst-plugins-bad (yes, we agree it’s not a great name anymore), to differentiate it from gstreamer-vaapi. Both plugins use libva to access stateless video processing operations; the main difference is, precisely, how the stream’s state is handled: while GstVA uses GStreamer libraries shared with other hardware-accelerated plugins (such as d3d11 and v4l2codecs), gstreamer-vaapi uses an internal, tightly coupled and convoluted library.

Also, note that right now (release 1.20) GstVA elements are ranked NONE, while gstreamer-vaapi ones are mostly PRIMARY+1.

Back to the three new elements in GstVA, the most complex one is vah264enc, written almost completely by He Junyan, from Intel. For it, He had to write an H.264 bitwriter which is, to a certain extent, the opposite of the H.264 parser: it constructs the bitstream buffer from H.264 structures such as PPS, SPS, slice headers, etc. This API is part of libgstcodecparsers, ready to be reused by other plugins or applications. Currently vah264enc is fairly complete and functional, dealing with profiles and rate controls, among other parameters. It still has rough spots, but we’re working on them. And He Junyan is restless: he already has in the pipeline a common encoder class, along with HEVC and AV1 encoders.

The second element is vacompositor, written by Artie Eoff. It’s the replacement for vaapioverlay in gstreamer-vaapi. The compositor suffix is preferred to follow the name of the primary (software-based) video mixing element: compositor, successor of videomixer. See this discussion for further details. The purpose of this element is to compose a single video stream from multiple video streams. It works with Intel’s media-driver, supporting the alpha channel, and also with AMD Mesa Gallium, but without the alpha channel (in other words, without a custom degree of transparency).

The last, but not least, element is vajpegdec, which I worked on. The main issue was not the decoder itself, but jpegparse, which didn’t signal the image caps required by hardware-accelerated decoders. For instance, VA only decodes images with SOF marker 0 (Baseline DCT). This wasn’t needed before because the main and only consumer of the parser was jpegdec, which deals with any type of JPEG image. Long story short, we revamped jpegparse and now it signals the SOF marker, the color space (YUV, RGB, etc.) and the chroma subsampling (if it has a YUV color space), along with comments and EXIF-like metadata as pipeline tags. Thus vajpegdec exposes in its caps template the color spaces and chroma subsamplings supported by the driver. For example, Intel supports (more or less) the RGB color space, while AMD Mesa Gallium doesn’t.

And that’s all for now. Thanks.

From gst-build to local-projects

Two years ago I wrote a blog post about using gst-build inside the WebKit SDK flatpak. Well, all that has changed. That’s the true upstream spirit.

There were two main reasons for the change:

  1. Since the switch to the GStreamer mono repository, gst-build has been deprecated. The mechanism in WebKit was added, basically, to allow working on GStreamer upstream, so keeping a gst-build directory just polluted the conceptual framework.
  2. By using gst-build one could override almost any other package in the WebKit SDK. For example, for developing gamepad handling in WPE, I added libmanette as a GStreamer subproject, to link against a modified version of the library rather than the one in flatpak. But that approach added an unneeded conceptual depth to the tree.

In order to simplify these operations, by taking advantage of Meson’s subproject support directly, the gst-build handling was removed and a new mechanism was put in place: Local Dependencies. With local dependencies, you can add or override almost any dependency while flattening the tree layout, by placing GStreamer and any other library at the same level. Of course, in order to add dependencies, they must be built with Meson.

For example, to override libsoup and GStreamer, just clone both repositories below Tools/flatpak/local-projects/subprojects, and declare them in the WEBKIT_SDK_LOCAL_DEPS environment variable:


$ export WEBKIT_SDK_LOCAL_DEPS=libsoup,gstreamer-full
$ export WEBKIT_SDK_LOCAL_DEPS_OPTIONS="-Dgstreamer-full:introspection=disabled -Dgst-plugins-good:soup=disabled"
$ build-webkit --wpe

GstVA in GStreamer 1.20

It was a year and a half ago when I announced a new VA-API H.264 decoder element in gst-plugins-bad. It was bundled in the GStreamer 1.18 release a couple of months later. Since then, we have been working on adding more decoders and filters, fixing bugs, and enhancing its design. I wanted to publish this blog post as soon as release 1.20 was announced, but since the development window is now closed, meaning no more new features will be included, I’ll publish it now, to create buzz around the next GStreamer release.

Here’s the list of new GstVA decoders (of course, they are only available if your driver supports them):

  • vah265dec
  • vavp8dec
  • vavp9dec
  • vaav1dec
  • vampeg2dec

Also, there are a couple new features in vah264dec (common to all gstcodecs-based H.264 decoders):

  • Supports interlaced streams (vah265dec and vampeg2dec too).
  • Added a compliance property to tweak the specification conformance, for example to lower the latency, or to enable non-standard features.

But it’s not only decoders: there are also two new elements for post-processing:

  • vapostproc
  • vadeinterlace

vapostproc is similar to vaapipostproc, but without the deinterlacing operation, since that was moved to another element. The reason is that there are deinterlacing methods which require holding a list of reference frames; those methods are thus broken in vaapipostproc, and adding them would needlessly increase the complexity of the element. To keep things simple, it’s better to handle deinterlacing in a different element.

This is the list of filters and features supported by vapostproc:

  • Color conversion
  • Resizing
  • Cropping
  • Color balance (Intel only, so far)
  • Video direction (Intel only)
  • Skin tone enhancement (Intel only)
  • Denoise and Sharpen (Intel only)

And, I ought to say, HDR is in the pipeline, but it will be released after 1.20.

vadeinterlace, meanwhile, only does that: deinterlacing. But it supports all the methods currently available in the VA-API specification, using the new way to select the field to extract, since the old one (used by GStreamer-VAAPI and FFmpeg) is a bit more expensive.

Finally, both video filters are configured in passthrough mode if they cannot handle the incoming format.
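
For instance, a simple scaling-plus-conversion pipeline could look like this (the caps are just an example):

```shell
# Hypothetical example: let vapostproc do color conversion and resizing.
# If the driver cannot handle the incoming format, the element simply
# works in passthrough mode.
PIPELINE="videotestsrc num-buffers=100 \
  ! vapostproc \
  ! video/x-raw,format=NV12,width=1280,height=720 \
  ! fakesink"

# Uncomment to run with a VA-API capable driver:
# gst-launch-1.0 $PIPELINE
echo "$PIPELINE"
```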

But those are not the only new elements: there’s also a new library!

Since many other elements need to share a common VADisplay in the GStreamer pipeline, the new library exposes only the GstVaDisplay object for now. The new library must be thin and lean, exposing only what is requested by other elements, such as gst-msdk. We have pending, to merge after 1.20, the addition of GstContext helpers, and the plan is to expose the allocators and buffer pools later.

Another huge task is encoders. After the freeze, we’ll merge the first implementation of the H.264 encoder, and add more encoders in different iterations.

As I said in the previous blog post, all these elements are ranked as NONE, so they won’t be autoplugged, for example by playbin. To do so, users need to export the environment variable GST_PLUGIN_FEATURE_RANK as documented.

$ GST_PLUGIN_FEATURE_RANK=vah264dec:MAX,vah265dec:MAX,vampeg2dec:MAX,vavp8dec:MAX,vavp9dec:MAX gst-play-1.0 stream.mp4
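
With the whole set of new decoders, building that variable by hand gets tedious; a small hypothetical shell helper can assemble it:

```shell
# Hypothetical helper: promote every new GstVA decoder at once by
# assembling the GST_PLUGIN_FEATURE_RANK value in a loop.
VA_DECODERS="vah264dec vah265dec vampeg2dec vavp8dec vavp9dec vaav1dec"

RANKS=""
for dec in $VA_DECODERS; do
  RANKS="${RANKS:+$RANKS,}$dec:MAX"
done
export GST_PLUGIN_FEATURE_RANK="$RANKS"

echo "$GST_PLUGIN_FEATURE_RANK"
# gst-play-1.0 stream.mp4   # the VA decoders are now preferred
```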

Thanks a bunch to He Junyan, Seungha Yang and Nicolas Dufresne, for all the effort and care.


Still, the to-do list is large enough. Just to share what I have in my notes:

  • Add a new upload method in glupload to interop with VA surfaces — though this will hardly be merged, since it would create a circular dependency between -base and -bad.
  • vavc1dec — it might need a rewrite of vc1parse.
  • vajpegdec — it needs a rewrite of jpegparse.
  • vaalphacombine — decoding the alpha channel with VA within vp9alphadecodebin and vp8alphadecodebin
  • vamixer — similar to compositor, glmixer or vaapioverlay, to compose a single frame from different video streams.
  • And encoders (mainly H.264 and H.265).

As a final note, GStreamer-VAAPI has entered maintenance mode. The general plan, without any promises or dates, is to deprecate it once most of its use cases are covered by GstVA.

Video decoding in GStreamer with Vulkan

Warning: Vulkan video is still work in progress, from specification to available drivers and applications. Do not use it for production software just yet.

Introduction

Vulkan is a cross-platform Application Programming Interface (API), backed by the Khronos Group, aimed at graphics developers for a wide range of different tasks. The interface is described by a common specification, and it is implemented by different drivers, usually provided by GPU vendors and Mesa.

One way to visualize Vulkan, at first glance, is like a low-level OpenGL API, but better described and easier to extend. Even more, it is possible to implement OpenGL on top of Vulkan. And, as far as I am told by my peers in Igalia, Vulkan drivers are easier and cleaner to implement than OpenGL ones.

A couple of years ago, a technical specification group (TSG) inside the Vulkan Working Group proposed the integration of hardware-accelerated video compression and decompression into the Vulkan API. In April 2021 the formed Vulkan Video TSG published an introduction to the specification. Please, do not hesitate to read it. It’s quite good.

Matthew Waters worked on a GStreamer plugin using Vulkan, mainly for uploading, composing and rendering frames. Later, he developed a library mapping Vulkan objects to GStreamer. This work was key for what I am presenting here. In 2019, during the last GStreamer Conference, Matthew delivered a talk about his work. Make sure to watch it, it’s worth it.

Other key components for this effort were the base classes for decoders and the bitstream parsing libraries in GStreamer, jointly developed by Intel, Centricular, Collabora and Igalia. Both libraries allow using APIs for stateless video decoding and encoding within the GStreamer framework, such as Vulkan Video, VAAPI, D3D11, and so on.

When the graphics team in Igalia told us about the Vulkan Video TSG, we decided to explore the specification. Therefore, Igalia decided to sponsor part of my time to craft a GStreamer element to decode H.264 streams using these new Vulkan extensions.

Assumptions

As stated at the beginning of this text, this development has to be considered unstable and the APIs may change without further notice.

Right now, the only Vulkan driver that offers these extensions is the beta NVIDIA driver. You would need, at least, version 455.50.12 for Linux, but it would be better to grab the latest one. And, of course, I only tested this on Linux. I would like to thank NVIDIA for their Vk Video samples. Their test application drove my work.

Finally, this work assumes the use of the main development branch of GStreamer, because the base classes for decoders are quite recent. Naturally, you can use gst-build for an efficient upstream workflow.

Work done

This work basically consists of two new objects inside the GstVulkan code:

  • GstVulkanDeviceDecoder: a GStreamer object in GstVulkan library, inherited from GstVulkanDevice, which enables VK_KHR_video_queue and VK_KHR_video_decode_queue extensions. Its purpose is to handle codec-agnostic operations.

  • vulkanh264dec: a GStreamer element, inherited from GstH264Decoder, which instantiates a GstVulkanDeviceDecoder to compose with it, and is in charge of the codec-specific operations, such as matching the parsed structures. It outputs memory:VulkanImage featured frames, with NV12 color format, on its source pad.

So far this pipeline works without errors:

$ gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov ! parsebin ! vulkanh264dec ! fakesink

As you might see, the pipeline does not use vulkansink to render frames. This is because the Vulkan format output by the driver’s decoder device is VK_FORMAT_G8_B8R8_2PLANE_420_UNORM, which is NV12 crammed into a single image, while for GstVulkan an NV12 frame is a buffer with two images, one per component plane. So the current color conversion in GstVulkan does not support this Vulkan format. That is future work, among other things.

You can find the merge request for this work in GStreamer’s Gitlab.

Future work

As was mentioned before, it is required to fully support VK_FORMAT_G8_B8R8_2PLANE_420_UNORM format in GstVulkan. That requires thinking about how to keep backwards compatibility. Later, an implementation of the sampler to convert this format to RGB will be needed, so that decoded frames can be rendered by vulkansink.

Also, before implementing any new feature, the code and its abstractions will need to be cleaned up, since currently the division between codec-specific and codec-agnostic code is not strict, and it must be fixed.

Another important cleanup task is to enhance the way the Vulkan headers are handled. Since the required header files for the video extensions are beta, they are not expected to be available in the system, so temporarily I had to add those headers as part of the GstVulkan library.

Then it will be possible to implement the H.265 decoder, since the NVIDIA driver also supports it.

Later on, it will be nice to start thinking about encoders. But this requires extending support for stateless encoders in GStreamer, something I want to do for the new VAAPI plugin too.

Thanks for bearing with me, and thanks to Igalia for sponsoring this work.

Review of Igalia Multimedia activities (2020/H2)

As the first quarter of 2021 has already come to a close, we reckon it’s time to recap our achievements from the second half of 2020, and update you on the improvements we have been making to the multimedia experience on the Web and Linux in general.

Our previous reports:

WPE / WebKitGTK

We have closed ~100 issues related to multimedia in WebKitGTK/WPE, fixing seek issues during playback, plugging memory leaks, gardening tests, improving the Flatpak-based development workflow, enabling new codecs, etc. Overall, we somewhat improved the multimedia user experience on these WebKit engine ports.

To highlight a couple of tasks, we did some maintenance work on the WebAudio backends, and we upstreamed an internal audio mixer which combines all streams into a single connection to the audio server, such as PulseAudio, instead of one connection per audio resource.

Adaptive media streaming for the Web (MSE)

We have been working on a new MSE backend for a while, and along the way many related bugs have appeared and been squashed. Many code cleanups have also been carried out. Though it has felt like yak shaving, we are confident that we will reach the end of this long and winding road soonish.

DRM media playback for the Web (EME)

Regarding digitally protected media playback, we worked to upstream OpenCDM support, with Widevine, through RDK’s Thunder framework, while continuing the usual maintenance of the other key systems, such as Clear Key, Widevine and PlayReady.

For more details we published a blog post: Serious Encrypted Media Extensions on GStreamer based WebKit ports.

Realtime communications for the Web (WebRTC)

Just as with EME, WebRTC is not currently enabled by default in browsers such as Epiphany because of license problems, but it is available for custom adopters, and we are maintaining it. For example, we collaborated on upgrading LibWebRTC to M87 and fixed the expected regressions, besides the usual gardening.

Along the way we experimented a bit with the new GPUProcess for capture devices, but we decided to pause the experimentation while waiting for broader adoption of the process in WPE/WebKitGTK, for example in graphics rendering.

The GPUProcess work will be resumed at some point; it’s not currently a hard requirement, since we have already moved capture device handling from the UIProcess to the WebProcess, isolating all GStreamer operations in the latter.

GStreamer

GStreamer is one of our core multimedia technologies, and we contribute to it on a daily basis. We pushed ~400 commits, with a similar number of code reviews, during the second half of 2020. Among those contributions, let us highlight the following:

  • A lot of bug fixing aiming for release 1.18.
  • Reworked and enhanced decodebin3, the GstTranscoder API and encodebin.
  • Merged av1parse in video parsers plugin.
  • Merged qroverlay plugin.
  • Iterated on the mono-repo proposal, which requires consensus and coordination among the whole community.
  • The gstwpe element has been greatly improved, driven by new user requests.
  • Contributed to the new libgstcodecs library, which enables stateless video decoders across different platforms (for example, v4l2, d3d11, va, etc.).
  • Developed a new plugin for VA-API using this library, exposing H.264, H.265, VP9, VP8 and MPEG2 decoders and a full-featured postprocessor, with better performance, according to our measurements, than GStreamer-VAAPI.

Conferences

Although 2020 was not a year for in-person conferences, many of them went virtual. We attended one, the Mile High Video conference, and participated in its Slack workspace.

Thank you for reading this report, and stay tuned to our work.

Review of Igalia Multimedia activities (2020/H1)

This blog post is a review of the various activities the Igalia Multimedia team was involved in during the first half of 2020.

Our previous reports are:

Just before a new virus turned into a pandemic, we could enjoy our traditional FOSDEM. There, our colleague Phil gave a talk about many of the topics covered in this report.

GstWPE

GstWPE’s wpesrc element produces a video texture representing a web page rendered off-screen by WPE.

We have worked on a new iteration of the GstWPE demo, focusing on one-to-many, web-augmented overlays, broadcasting with WebRTC and Janus.

Also, since the merge of the gstwpe plugin into gst-plugins-bad (the staging area for new elements), new users have come along, spotting rough areas and improving the element along the way.

Video Editing

GStreamer Editing Services (GES) is a library that simplifies the creation of multimedia editing applications. It is based on the GStreamer multimedia framework and is heavily used by the Pitivi video editor.

Implemented frame accuracy in the GStreamer Editing Services (GES)

As required by the industry, it is now possible to reference all times by frame number, providing a precise mapping between frame number and play time. Many issues were fixed in GStreamer to reach enough precision to make this work, and intensive regression tests were added.

Implemented time effects support in GES

Important refactoring inside GStreamer Editing Services took place to allow cleanly and safely changing the playback speed of individual clips.

Implemented reverse playback in GES

Several issues have been fixed inside GStreamer core elements and base classes in order to support reverse playback. This allows us to implement reliable and frame accurate reverse playback for individual clips.

Implemented ImageSequence support in GStreamer and GES

Since OpenTimelineIO implemented ImageSequence support, many users in the community had said it was really needed. We reviewed and finished up the imagesequencesrc element, which had been awaiting review for years.

This feature is now also supported in the OpenTimelineIO GES adapter.

Optimized nested timelines preroll time by an order of magnitude

Caps negotiation, done while the pipeline transitions from the paused to the playing state to test the whole pipeline functionality, was the bottleneck for nested timelines, so pipelines were reworked to avoid useless negotiations. At the same time, other members of the GStreamer community improved caps negotiation performance in general.

Last but not least, our colleague Thibault gave a talk in The Pipeline Conference about The Motion Picture Industry and Open Source Software: GStreamer as an Alternative, explaining how and why GStreamer could be leveraged in the motion picture industry to allow faster innovation, and solve issues by reusing all the multi-platform infrastructure the community has to offer.

WebKit multimedia

There has been a lot of work on WebKit multimedia, particularly for the WebKitGTK and WPE ports, which use the GStreamer framework as their backend.

WebKit Flatpak SDK

But first of all we would like to draw readers' attention to the new WebKit Flatpak SDK. It was not a contribution from the multimedia team alone, but rather a joint effort among different teams in Igalia.

Before the WebKit Flatpak SDK, JHBuild was used for setting up a WebKitGTK/WPE environment for testing and development. Its purpose is to provide a common set of well-defined dependencies instead of relying on the ones available in the different Linux distributions, which might produce different results. Nonetheless, Flatpak offers a much more coherent environment for testing and development, isolated from the rest of the build host and approaching reproducible outputs.

Another great advantage of the WebKit Flatpak SDK, at least for the multimedia team, is the possibility of using gst-build to set up a custom GStreamer environment, with the latest master, for example.

Now, for the sake of brevity, let us sketch a non-exhaustive list of activities and achievements related to WebKit multimedia.

General multimedia

Media Source Extensions (MSE)

Encrypted Media Extensions (EME)

One of the major results of this first half is the upstreaming of ThunderCDM, an implementation of a Content Decryption Module providing Widevine decryption support. Recently, our colleague Xabier published a blog post in this regard.

It has also enabled client-side video rendering support, which ensures video frames remain protected in GPU memory so they can't be reached by third parties. This is a requirement for DRM/EME.

WebRTC

GStreamer

Though our GStreamer contributions normally come from the activities listed above, there are other tasks not related to WebKit. Among these we can enumerate the following:

GStreamer VAAPI

  • Reviewed a lot of patches.
  • Support for media-driver (iHD), the new VAAPI driver for Intel, mostly for Gen9 onwards. This driver brings a lot of new features.
  • A new vaapioverlay element.
  • Deep code cleanups. Among these we would like to mention:
    • Added a quirk mechanism for different backends.
    • Changed the base classes of most classes and buffer types to GstObject and GstMiniObject.
  • Enhanced caps negotiation given the current drivers' constraints.

Conclusions

The multimedia team in Igalia has kept working, along the first half of this strange year, on our three main areas: browsers (mainly WebKitGTK and WPE), video editing and the GStreamer framework.

We worked on adding and enhancing WebKitGTK and WPE multimedia features in order to offer a solid platform to media providers.

We have enhanced the Video Editing support in GStreamer.

And, along these tasks, we have contributed as much to the GStreamer framework, particularly in hardware-accelerated decoding and encoding and in VA-API.

New VA-API H.264 decoder in gst-plugins-bad

Recently, a new H.264 decoder, using VA-API, was merged in gst-plugins-bad.

Why another VA-based H.264 decoder if there is already gstreamer-vaapi?

As usual, an historical perspective may give some clues.

It started when Seungha Yang implemented the GStreamer decoders for Windows using DXVA2 and D3D11 APIs.

Perhaps we need to take one step back and explain what stateless decoders are.

Video decoders are magic and opaque boxes into which we push encoded frames, and from which we later pop fully decoded frames in raw format. This is how OpenMAX and V4L2 decoders work, for example.

Internally we can imagine that those magic and opaque boxes have two main operations:

  • Codec state handling
  • Signal processing like Fourier-related transformations (such as DCT), entropy coding, etc. (DSP, in general)

The codec state handling basically extracts, from the stream, the frame's parameters and its compressed data, so the DSP algorithms can decode the frames. Codec state handling can be done with generic CPUs, while DSP algorithms are massively improved through special-purpose processors.

These video decoders are known as stateful decoders, and they are usually distributed as closed binary blobs.

Soon, silicon vendors realized they could offload the burden of state handling to third-party user-space libraries, releasing what is known as stateless decoders. With them, your code not only has to push frames into the opaque box, but must also handle the codec specifics to provide all the parameters and references for each frame. VAAPI and DXVA2 are examples of those stateless decoders.
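A toy illustration of that split (all names are hypothetical, not any real API): with a stateless decoder, an application-side parser must track the codec state and hand the hardware every reference it needs, frame by frame:

```python
class TinyParser:
    """Stands in for the codec state handling a stateless API pushes
    onto user space: extracting per-frame parameters and references."""

    def __init__(self):
        self.refs = []  # reference frames the hardware must be given

    def parse(self, chunk):
        params = {"frame_num": chunk["n"], "refs": list(self.refs)}
        self.refs = [chunk["n"]]  # naive: last decoded frame is the ref
        return params  # this is what gets submitted to the accelerator

parser = TinyParser()
parsed = [parser.parse(c) for c in [{"n": 0}, {"n": 1}, {"n": 2}]]
# frame 2 is submitted together with its reference, frame 1
assert parsed[2]["refs"] == [1]
```

A stateful decoder hides all of this behind its push/pop interface; a stateless one makes it the caller's problem, which is exactly what libgstcodecs solves once for all GStreamer plugins.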

Returning to Seungha's implementation: in order to get their DXVA2/D3D11 decoders working, they also needed a state handling library for each codec. And Seungha wrote that library!

Initially they wanted to reuse the state handling in gstreamer-vaapi, which works pretty well, but its internal library is, from the GStreamer perspective, over-engineered: it is impossible to rip out only the state handling without importing all its data types. Which is kind of sad.

Later, Nicolas Dufresne realized that this library could be reused by other GStreamer plugins, because more stateless decoders are now available, particularly V4L2 stateless, in which he is interested. Nicolas moved Seungha's code into a library in gst-plugins-bad.

Currently, libgstcodecs provides state handling of H.264, H.265, VP8 and VP9.

Let’s return to our original question: Why another VA-based H.264 decoder if there is already one in gstreamer-vaapi?

The quick answer is «to pay my technical debt».

As we already mentioned, gstreamer-vaapi is big and over-engineered, though we have been simplifying its internal libraries; in particular, He Junyan has done a lot of work replacing the internal base class, GstVaapiObject, with GstObject or GstMiniObject. Also, this kind of project, with a lot of untouched code, carries a lot of cargo-cult decisions.

So I took the libgstcodecs opportunity to write a simple, thin and lean H.264 decoder, using new VA API calls (vaExportSurfaceHandle(), for example) and learning from other implementations, such as FFmpeg and ChromeOS. This exercise allowed me to identify the dusty spots in gstreamer-vaapi and how they should be fixed (and we have been doing it since then!).

Also, this opportunity led me to learn a bit more about the H.264 specification, since I implemented the reference picture list handling, and fixed a small bug in Chromium.
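For a flavor of what that spec reading involves, here is a heavily simplified sketch of reference picture list initialization for P slices (H.264 §8.2.4.2.1): short-term references sorted by descending PicNum, then long-term references by ascending LongTermPicNum. The real code also handles B slices, interlacing and list modification commands:

```python
def init_ref_pic_list_p(short_term, long_term):
    # each entry is a dict describing a decoded reference picture
    st = sorted(short_term, key=lambda p: p["pic_num"], reverse=True)
    lt = sorted(long_term, key=lambda p: p["long_term_pic_num"])
    return st + lt

refs = init_ref_pic_list_p(
    [{"pic_num": 3}, {"pic_num": 5}],
    [{"long_term_pic_num": 0}],
)
# most recent short-term refs first, long-term refs last
assert [list(r.values())[0] for r in refs] == [5, 3, 0]
```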

Now, let me be crystal clear: GStreamer VA-API is not going anywhere. It is, right now, one of the most feature-complete implementations using VA-API, even with its integration issues, and we are working on them, particularly, Intel folks are working hard on a new AV1 decoder, enhancing encoders and adding new video post-processing features.

But this new vah264dec is an experimental VA-API decoder, which aims for a tight integration with GStreamer, oriented to provide a good experience in most common use cases and to enhance the common libgstcodecs library shared with other stateless decoders, while avoiding Intel-specific nuances.

These are the main characteristics and plans of this new decoder:

  • It uses, by default, a DRM connection to the VA display, avoiding the trouble of choosing between X11 and Wayland.
    • It uses the first DRM device found as the VA display.
    • In the future, users will be able to provide their custom VA display through the pipeline's context.
  • It requires libva >= 1.6.
  • No multiview/stereo profiles, nor interlaced streams, because libgstcodecs doesn't handle them yet.
  • It is incompatible with gstreamer-vaapi: mixing elements might lead to problems.
  • Even if memory:VAMemory is exposed, it is not yet handled by any other element.
    • Users will get VASurfaces via mapping, as GstGL does with textures.
  • Caps templates are dynamically generated by querying VAAPI.
  • YV12 and I420 are added to the system memory caps because they seem to be supported by all the drivers when downloading frames onto main memory, and they are used by xvimagesink and others, avoiding color conversion.
  • Decoding surfaces aren't bound to the context, so they can grow beyond the DPB size, allowing smooth reverse playback.
  • There is no error handling and recovery yet.
  • The element is supposed to spawn per renderD node with VA-API driver support (like gstv4l2 does), but this hasn't been tested yet.

Now you may be asking: how do I use vah264dec?

Currently vah264dec has rank NONE, which means that it will never be autoplugged, but you can use the GST_PLUGIN_FEATURE_RANK environment variable trick to raise its rank above GST_RANK_PRIMARY (256):

$ GST_PLUGIN_FEATURE_RANK=vah264dec:259 gst-play-1.0 ~/video.mp4

And that’s it!

Thanks.