GStreamer-VAAPI 1.10 (now 1.10.1)

A lot of things have happened in the last month and I have neglected this report. Let me try to fix that.

First, the autumn GStreamer Hackfest and the GStreamer Conference 2016 in Berlin.

The hackfest's venue was the famous C-Base, where we talked with friends and colleagues. Most of the discussions leaned towards the 1.10 release. Still, Julien Isorce and I talked about our approaches for DMABuf sharing with downstream; we agreed to explore how to implement an allocator based on GstDmaBufAllocator. We talked with Josep Torra about the performance he was getting with the gstreamer-vaapi encoders (and we have already made good progress on this). We also talked with Nicolas Dufresne about v4l2 and kmssink. And many other informal chats along with beer and pizza.

GStreamer Hackfest at the C-Base (from @calvaris' tweet)

Afterwards, at the GStreamer Conference, I talked a bit about the current status of GStreamer-VAAPI and what we can expect from release 1.10. Here's the video that the folks from Ubicast recorded. You can see the slides here too.

I spent most of the conference with Scott, Sree and Josep, tracing the bottlenecks in frame uploading to the Intel VA driver, and trying to answer the questions of the folks who gently approached me. I hope my answers were helpful.

And finally, on the first of November, release 1.10 hit the streets!

As already mentioned in the release notes, these are the main new features you can find in GStreamer-VAAPI 1.10:

  • All decoders have been split, one plugin feature per codec. So far, the available ones, depending on the driver, are: vaapimpeg2dec, vaapih264dec, vaapih265dec, vaapivc1dec, vaapivp8dec, vaapivp9dec and vaapijpegdec (which was already split).
  • Improvements when mapping VA surfaces into memory, differentiating between negotiation caps and allocation caps, since the memory allocated for surfaces may be bigger than the one that is going to be mapped.
  • vaapih265enc got support for constant bitrate (CBR) mode.
  • Since several VA drivers are unmaintained, we decided to keep a white list of the VA drivers we actually test, which is mostly the Intel i965 driver and, to some degree, gallium from the mesa project. Exporting the environment variable GST_VAAPI_ALL_DRIVERS disables the white list (see the sketch after this list).
  • The plugin features are registered, at run-time, according to their support by the loaded VA driver. So only the decoders and encoders supported by the system are registered. Since the driver can change, some dependencies are tracked to invalidate the GStreamer registry and reload the plugin.
  • DMA-Buf importation from upstream has been improved, gaining performance.
  • vaapipostproc can now negotiate buffer transformations through caps.
  • Decoders can now do reverse playback! But they only show I-frames, because the surface pool is smaller than what the GOP requires to show all the frames.
  • The upload of frames onto native GL textures has also been optimized, keeping a cache of the internal structures for the textures offered by the sink, improving performance with glimagesink a lot.
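
Since the registered plugin features now depend on the loaded VA driver, a quick way to see what you actually got on a given box is gst-inspect-1.0. This is only a sketch of the kind of check I do; GST_VAAPI_ALL_DRIVERS is the variable mentioned in the list above:

   # List the plugin features registered by the vaapi plugin on this system;
   # only the codecs supported by the loaded VA driver will show up.
   $ gst-inspect-1.0 vaapi

   # Disable the driver white list to try an unlisted (untested) VA driver.
   $ GST_VAAPI_ALL_DRIVERS=1 gst-inspect-1.0 vaapi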

This is the short log summary since branch 1.8:

  1  Allen Zhang
 19  Hyunjun Ko
  1  Jagyum Koo
  2  Jan Schmidt
  7  Julien Isorce
  1  Matthew Waters
  1  Michael Olbrich
  1  Nicolas Dufresne
 13  Scott D Phillips
  9  Sebastian Dröge
 30  Sreerenj Balachandran
  1  Stefan Sauer
  1  Stephen
  1  Thiago Santos
  1  Tim-Philipp Müller
  1  Vineeth TM
154  Víctor Manuel Jáquez Leal

Thanks to Igalia, Intel Open Source Technology Center and many others for making GStreamer-VAAPI possible.

VA-API and DRM/KMS in MinnowBoard

In 2012 I started to work on a video renderer for GStreamer which uses the DRM/KMS kernel subsystem directly to display images. I even blogged about it, but I didn't finish it.

Nonetheless, in December last year, a customer asked us to finish the element, but this time focusing on i.MX6 rather than OMAP4. Consequently, early this year, kmssink got merged upstream.

The video sink has some nice features, such as DMA-buf importing through the PRIME kernel interface. This feature makes a zero-copy path possible when the video decoder delivers dmabuf-based frames. For this particular project, the kernel we used supports the CODA media subsystem, and through the GStreamer element v4l2videodec we could link pipelines negotiating dmabuf sharing, and thus we played videos very efficiently.
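
The pipeline looked roughly like the sketch below. The file name is illustrative and the exact name of the v4l2 decoder element depends on the kernel and GStreamer versions, so take it as an outline rather than the exact command:

   # On the i.MX6 box: the CODA v4l2 decoder exports dmabufs and kmssink
   # imports them through PRIME, so decoded frames are never copied.
   $ gst-launch-1.0 filesrc location=video.mp4 ! qtdemux ! h264parse ! \
         v4l2videodec ! kmssink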

During the last GStreamer Hackfest, Nicolas Dufresne got the MFC media subsystem (for Exynos) working with v4l2videodec and kmssink, but I don't remember if he also got the dmabuf sharing working.

All in all, kmssink had only been tested on a couple of ARM devices, so I wondered, as a gstreamer-vaapi co-maintainer, if it also works on x86 devices.

Some colleagues at Igalia use a MinnowBoard to test WebKit4Wayland, so I borrowed it, installed a minimal Fedora 23 on it, and, surprisingly, I found out that it has nice VA-API support:

   $ vainfo
   error: can't connect to X server!
   libva info: VA-API version 0.38.1
   libva info: va_getDriverName() returns 0
   libva info: Trying to open /usr/lib64/dri/i965_drv_video.so
   libva info: Found init function __vaDriverInit_0_38
   libva info: va_openDriver() returns 0
   vainfo: VA-API version: 0.38 (libva 1.6.2)
   vainfo: Driver version: Intel i965 driver for Intel(R) Bay Trail - 1.6.2
   vainfo: Supported profile and entrypoints
         VAProfileMPEG2Simple            : VAEntrypointVLD
         VAProfileMPEG2Simple            : VAEntrypointEncSlice
         VAProfileMPEG2Main              : VAEntrypointVLD
         VAProfileMPEG2Main              : VAEntrypointEncSlice
         VAProfileH264ConstrainedBaseline: VAEntrypointVLD
         VAProfileH264ConstrainedBaseline: VAEntrypointEncSlice
         VAProfileH264Main               : VAEntrypointVLD
         VAProfileH264Main               : VAEntrypointEncSlice
         VAProfileH264High               : VAEntrypointVLD
         VAProfileH264High               : VAEntrypointEncSlice
         VAProfileH264StereoHigh         : VAEntrypointVLD
         VAProfileVC1Simple              : VAEntrypointVLD
         VAProfileVC1Main                : VAEntrypointVLD
         VAProfileVC1Advanced            : VAEntrypointVLD
         VAProfileNone                   : VAEntrypointVideoProc
         VAProfileJPEGBaseline           : VAEntrypointVLD


Afterwards, I compiled GStreamer using gst-uninstalled in order to have kmssink and the master branch of gstreamer-vaapi. And I got it! I had hardware accelerated decoding with VA-API and video rendering using KMS/DRM. But, sadly, no zero-copy, since, in order to have dmabuf sharing, we have to finish and merge bug 755072 in gstreamer-vaapi.
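
The pipeline I ran was along these lines (a sketch: the file name is illustrative, and with a different container the demuxer and parser would change accordingly):

   # Hardware decoding with VA-API, rendering through DRM/KMS with kmssink.
   $ gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov ! \
         qtdemux ! h264parse ! vaapih264dec ! kmssink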

Running the typical Big Buck Bunny video (1080p, H.264), the playback consumes less than 50% of the CPU, compared with 100% CPU usage with software decoders (and a lot of dropped frames).
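
The software path I compared against was, roughly, the same pipeline with the decoder swapped; again a sketch, not the exact command (avdec_h264 comes from gst-libav, and a videoconvert may be needed so kmssink can negotiate a format it accepts):

   # Software decoding for comparison: much higher CPU usage, dropped frames.
   $ gst-launch-1.0 filesrc location=big_buck_bunny_1080p_h264.mov ! \
         qtdemux ! h264parse ! avdec_h264 ! videoconvert ! kmssink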

I recorded a small video with the experiment as a proof.

Update: Javier Martinez told me on Twitter that MFC can do dma-buf export, since it uses vb2, and Exynos DRM can import, but he couldn't get zero-copy to work due to missing format support.

GStreamer Hackfest 2016

Yes, it happened again: the GStreamer Spring Hackfest 2016! This time in the beautiful city of Thessaloniki. Thanks a lot, Vivia and Sebastian, for making it happen.

Thessaloniki
Thessaloniki’s promenade

My objective this time was to work on dma-buf support in gstreamer-vaapi. Though it is already supported, it needs a major clean-up, and its usage needs to be extended to downstream buffers (bugs 755072 and 765435).

Along the way I learned that, when handling dma-buf, we need to update our internal API (called `libgstvaapi`) to support multi-plane formats.

On the other hand, Nicolas Dufresne and I talked a bit about kmssink, libdrm and dma-buf. He managed to hack his Odroid U (Exynos3) to enable its V4L2 mem2mem video decoder and share buffers with kmssink. It was amazing. By the way, he promised to write a blog post with the instructions to replicate his feat.

GStreamer Hackfest
Photo by Luis de Bethencourt

Finally, we had a preview of Edward Hervey's decodebin3. His test, switching between the different audio streams in a media container (the different available dubbings) every second or so, was fun. It was truly multi-language audio!

In the meantime, we shared beers and meals, learning and laughing.

gstreamer-vaapi 1.8: the codec split

On March 23rd GStreamer 1.8 was released, along with all its bundled modules, and, of course, one of those modules is gstreamer-vaapi.

Let us talk about this gstreamer-vaapi release, since there are several sweets!

The first thing to notice is that the encoders have been renamed. Before, they followed the pattern vaapiencode_{codec}; now they follow the pattern vaapi{codec}enc. The purpose of this change is twofold: to fix the plugins' gtk-docs and to keep the usual element names in GStreamer. The conversion table is this (a small usage sketch follows it):

Old New
vaapiencode_h264 vaapih264enc
vaapiencode_h265 vaapih265enc
vaapiencode_mpeg2 vaapimpeg2enc
vaapiencode_jpeg vaapijpegenc
vaapiencode_vp8 vaapivp8enc
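
For example, a hypothetical recording script only needs the element name swapped (the rest of the pipeline is illustrative):

   # Before (1.6 era):
   $ gst-launch-1.0 videotestsrc num-buffers=300 ! vaapiencode_h264 ! \
         h264parse ! matroskamux ! filesink location=test.mkv

   # Now (1.8 onwards):
   $ gst-launch-1.0 videotestsrc num-buffers=300 ! vaapih264enc ! \
         h264parse ! matroskamux ! filesink location=test.mkv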

But those were not the only name changes: we have also split vaapidecode. Now we have vaapijpegdec, which only decodes JPEG images, while the old vaapidecode is kept for video decoding. Also, vaapijpegdec was demoted to a marginal rank, because there are some problems in the Intel VA driver (which is the only one that supports JPEG decoding right now).

Note that in future releases all the decoders will be split by codec, just as we have done with the JPEG decoder now; but first, we need to modify vaapidecodebin to choose a decoder at run-time based on the negotiated caps.

Please, update your scripts and code accordingly.

There are a ton of enhancements and optimizations too. Let me enumerate some of them: Vineeth TM fixed several memory leaks and some compilation issues; Tim enabled vaapisink to send unhandled keyboard or mouse events to the application, making the usage of apps like gst-play-1.0, or apps based on GstPlayer, more natural; Thiago fixed the h264/h265 parsers, meanwhile Sree fixed the vp9 and the h265 ones too; Scott also fixed the h265 parser; et cetera. As you may see, the H265/HEVC parser has been very active lately; it is the new thing!

I have to thank Sebastian Dröge: he did all the release work and also fixed a couple of compilation issues.

This is the short log summary since 1.6:

 2  Lim Siew Hoon
 1  Scott D Phillips
 8  Sebastian Dröge
 3  Sreerenj Balachandran
 5  Thiago Santos
 1  Tim-Philipp Müller
 8  Vineeth TM
16  Víctor Manuel Jáquez Leal

GStreamer VA-API under the umbrella of GStreamer

We have a new GStreamer VA-API release: 1.6.0!

"Wait a minute", you might say, "wasn't the last release 0.7?", and you would be correct; but something big has happened: GStreamer VA-API is now part of the official GStreamer project!

And this means a couple of changes.

First of all, the official repository of the project has been moved, now it’s co-hosted along with the rest of the GStreamer components, in freedesktop.org (fdo):

Anonymous Git repository: git://anongit.freedesktop.org/gstreamer/gstreamer-vaapi

Developer Git repository: ssh://username@git.freedesktop.org/git/gstreamer/gstreamer-vaapi

Web Git gateway: https://cgit.freedesktop.org/gstreamer/gstreamer-vaapi
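
For instance, to grab the code anonymously, a plain git clone against the repository above is enough:

   $ git clone git://anongit.freedesktop.org/gstreamer/gstreamer-vaapi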

Second, the bug tracking has also moved. Since GStreamer VA-API is now a component of GStreamer, new bugs must be filed here:

https://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer&component=gstreamer-vaapi

And the bug list is here:

https://bugzilla.gnome.org/buglist.cgi?component=gstreamer-vaapi&product=GStreamer

"What will happen with the old bugs?", you will ask. Well, we will move them as soon as we have reviewed them all.

The third change, as you have already noticed, is the version scheme. Now we are following the GStreamer version numbering to avoid confusion and to simplify our development. Hence, this release, 1.6.0, supports the current GStreamer stable version (1.6.3), and our current development branch is 1.7.x. Future releases will follow the GStreamer version scheme and dependencies.

"Sweet! But what's new in this release?". The answer is: not much. Really. Most of the changes are related to the coupling with the upstream processes (autoconf setup, documentation, etc.). Perhaps the most remarkable thing is the removal of the support libraries (libgstvaapi-*.so) used by the vaapi plugin: now they are compiled as one static library and linked into the GStreamer plugin. Also, the custom parsers were removed, and the plugin and element documentation got into better shape.

At the code level, we had to push a huge indentation commit in order to align with the GStreamer code style. This commit artificially kills the blame history, but it was our best option.

I ought to say that those were not the only changes at the code level: Michael Olbrich fixed a missing frame release in the Wayland backend, and Sree, as usual, fixed a bunch of hardcore stuff. But I especially want to thank Tim-Philipp Müller for helping us along the upstreaming process. And obviously the Intel Open Source Technology Center, for letting this happen.

Here's the git short log summary since the last 0.7.0 release:

 1  Joel Holdsworth
 1  Michael Olbrich
 9  Sreerenj Balachandran
 4  Tim-Philipp Müller
42  Víctor Manuel Jáquez Leal

By the way, Igalia is hiring!

GStreamer VA-API 0.7.0

GStreamer VA-API 0.7.0 is here! As is usually said, "grab it while it is fresh", especially the distributions.

In a previous blog post we explained in detail what GStreamer VA-API is. Also, last October, we talked about it at the GStreamer Conference 2015; you can glance at the slides here.

This release includes three major changes:

  • Added support for VP9 decoding.
  • Improved the HEVC decoder a lot.
  • Some fixes in the integration with OpenGL platforms.

Please note that VP9 decoding is only supported from Braswell and Skylake onwards, using Intel's hybrid driver for libva, which is a subset of a VA-API back-end that can be plugged into Intel's VA-API back-end (adding the --hybrid-codec parameter to the intel-driver's configure script).

HEVC (a.k.a. H265) decoding is more stable and has better performance. And, remember, it is only available in Skylake and Cherry View chipsets.

Now, OpenGL3 is handled correctly through GstGLTextureUploadMeta.

But there are a lot more fixes, improvements and enhancements, such as better handling of corrupted H264 streams, better GstContext handling, the enabling of vaapidecodebin (finally!), etc.

Here's the git short log summary since the last 0.6.0 release.

 5  Gwenole Beauchesne
 1  Jan Schmidt
 2  Lim Siew Hoon
 1  Mark Nauwelaerts
 1  Olivier Crete
61  Sreerenj Balachandran
72  Víctor Manuel Jáquez Leal

GStreamer Conference 2015

Last October 8th and 9th, the GStreamer Conference 2015 took place in Dublin. It was full of interesting talks and colleagues. I really enjoyed it.

I participated with our latest work in gstreamer-vaapi. You can glance at the slides of my talk.

GStreamer VA-API talk

By the way, if you are in Dublin, let me recommend you visit the Chester Beatty Library. There are a lot of antique, beautiful and rare books.

GStreamer VA-API: A new release!

A new release of gstreamer-vaapi is now available! Since we have been working on it for the last months, I would like to talk to you about it.

gstreamer-vaapi is a set of GStreamer plugins and libraries for hardware accelerated video processing using VA-API.

VA-API stands for “Video Acceleration – Application Programming Interface”, and is, simultaneously, a specification of an application programming interface, and its library implementation under the Open Source MIT license, whose purpose is to offer applications access to the GPU accelerated video processing capabilities, offloading those tasks from the CPU. Accelerated processing includes video decoding, sub-picture blending and rendering.

The library implementation (libva) is designed to be a front-end for many different back-ends. I am only aware of these back-ends:

As you can notice, only the Intel driver is well maintained, while the others haven't been updated in several years. And of course, this issue has had an impact on the bugs handled in gstreamer-vaapi (for example, bug #749554).

VA-API is designed to use various entry-points of the hardware accelerated processing in the GPU driver, such as VLD (also known as slice level acceleration), iDCT, among others.

The fact that VA-API uses slice level decoding is important, since it is different from software-based video decoders, where the decoding is done at frame level. As a consequence, more information is required from the current video parsers (bug #691712).

But let us get back on track. gstreamer-vaapi, as we already said, is a set of GStreamer elements (vaapidecode, vaapipostproc, vaapisink, and several encoders) and a GObject-friendly library named libgstvaapi. This library wraps libva under GObject/GStreamer semantics. I might talk about this library on another occasion.

vaapisink is a video sink with support for multiple display server protocols:

UPDATE (2015/07/20): As Sree commented, vaapisink does not use GLX/EGL for rendering; it uses VA-API. Rather, vaapidecode/vaapipostproc can deal with GLX/EGL contexts, so they can share buffers with other OpenGL-based elements.

This element is the most efficient way to render the video processed by other VA-API elements, such as decoders and post-processors, since no media transformations are required. If we want to render the stream with other video sinks, some extra operations might be necessary. The worst case scenario is when we need to download the whole image from GPU memory into CPU memory, and then the overall performance of the pipeline might be impacted.
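
As an illustration, here are two hypothetical playback pipelines (file name and container are illustrative): the first keeps everything in VA surfaces up to the rendering, while the second forces the worst case scenario just mentioned:

   # Everything stays in VA surfaces, from decoding to rendering.
   $ gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
         vaapidecode ! vaapisink

   # Worst case: frames are downloaded into system memory so a
   # software-based sink can draw them.
   $ gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
         vaapidecode ! videoconvert ! ximagesink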

vaapidecode is a single decoding element that handles several codecs, depending on the back-end and the hardware available. The comprehensive list of possibly enabled codecs is:

  • MPEG-2 (simple and main profiles)
  • MPEG-4 (simple, main and advanced-simple profiles) / DivX / Xvid
  • H263 (baseline profile)
  • H264 (baseline, constrained-baseline, main, high, multiview-high and stereo-high profiles)
  • H265 (main and main10)
  • WMV3 (simple and main profiles) / VC1
  • VP8 (version 0.3)
  • JPEG (baseline profile)

In the particular case of Intel chipsets, the next table shows the codec support for each generation:

CHIPSET MPEG2 H264 H265 VC1 VP8 JPEG
Skylake Y Y Y Y Y Y
Cherry View Y Y Y Y Y Y
Broadwell Y Y Y Y
Haswell Y Y Y Y
Ivybridge Y Y Y Y
Sandybridge Y Y Y
Ironlake Y Y
G4x Y

In the case of H264 there are multiple profiles, and every chipset offers a different profile set. But for now, I will not dig deeper into it.

In other back-ends the support is different. For example, I have a box with an NVidia GeForce GT GPU which supports MPEG2, MPEG4, H264 and VC1.

There is a branch for the G45 GPU where H264 decoding support is added. Nevertheless, it is not official, it no longer merges with current master, and the only supported output color space is NV12 (bug #745660).

An interesting thing to know is that, because of the slice level decoding, gstreamer-vaapi does its own parsing inside the decoder, even if a video parser was plugged upstream.

vaapipostproc is a video transform element which can do color space conversions, de-interlacing, sharpening and many other types of video filtering, such as:

  • Color conversion
  • Resize / Scale
  • Noise reduction
  • De-interlacing
  • Sharpening
  • Color Balance
  • Skin tone enhancement

Also, the availability of this element and its filters depends on the back-end and the hardware. For example, the VDPAU back-end doesn't provide any post-processing capability.
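
As a hypothetical example of its usage, the following sketch de-interlaces and downscales the decoded video on the GPU before rendering it (the property and caps values are illustrative; check gst-inspect-1.0 vaapipostproc on your setup):

   # Decode, de-interlace and downscale to 720p with the GPU, then render.
   $ gst-launch-1.0 filesrc location=interlaced.ts ! tsdemux ! h264parse ! \
         vaapidecode ! vaapipostproc deinterlace-mode=auto ! \
         "video/x-raw, width=1280, height=720" ! vaapisink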

In the case of Intel’s chip-sets these are the provided post-processing capabilities:

CHIPSET Format Noise Deinterlace Sharp C.B. STE
Skylake Y Y Y Y Y Y
Cherry View Y Y Y Y Y Y
Broadwell Y Y Y Y Y Y
Haswell Y Y Y Y Y Y
Ivybridge Y Y Y
Sandybridge Y Y Y
Ironlake Y
G4x

In the case of the video encoders, they are split into different elements. Those implemented in gstreamer-vaapi are:

  • vaapiencode_mpeg2
  • vaapiencode_h264
  • vaapiencode_h265
  • vaapiencode_vp8
  • vaapiencode_jpeg

As far as I know, the only VA-API back-end that provides encoders is the Intel one, and this is the table of encoders per chipset:

CHIPSET MPEG2 H264 H265 VP8 JPEG
Skylake Y Y Y Y Y
Cherry View Y Y Y
Broadwell Y Y
Haswell Y Y
Ivybridge Y Y
Sandybridge Y
Ironlake
G4x

vaapidecodebin is a new bin composed of vaapidecode, a queue and vaapipostproc. Its purpose is to bundle a complete decoding solution, with de-interlacing support. The purpose of the queue is to let the decoder run at full speed. The problem is that, since each back-end and chipset may or may not support vaapipostproc, we have to check for it at run-time (bug #749554).

(-------------------------------------------------------)
|                     vaapidecodebin                    |
|   (-------------)    (-------)    (---------------)   |
|-->| vaapidecode |--->| queue |--->| vaapipostproc |-->|
|   (-------------)    (-------)    (---------------)   |
|                                                       |
(-------------------------------------------------------)
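
Using it is just a matter of dropping it in where a decoder would go; a sketch (the file and demuxer are illustrative):

   # vaapidecodebin handles decoding and de-interlacing internally.
   $ gst-launch-1.0 filesrc location=sample.ts ! tsdemux ! h264parse ! \
         vaapidecodebin ! vaapisink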

In my opinion, this should just be a temporary workaround until we have auto-plugging support for de-interlacers (bug #687182).

So far we have explained what GStreamer VA-API is. Let us now talk about what is new and hot in this release.

This is the short log summary since the last release, 0.5.10:

 1  Adrian Cox
 2  Alban Browaeys
30  Gwenole Beauchesne
 1  Jacobo Aragunde Pérez
 2  Jan Schmidt
 1  Julien Isorce
 1  Lim Siew Hoon
 1  Martin Sherburn
 4  Michael Olbrich
12  Olivier Crete
 3  Simon Farnsworth
74  Sreerenj Balachandran
75  Víctor Manuel Jáquez Leal
 1  Wind Yuan

The major changes in this release include:

  • HEVC (H265) decoding and encoding support (available only in Skylake and Cherry View chip-sets)
  • VP8 encoder (Skylake)
  • JPEG encoder (Skylake and Cherry View)
  • vaapidecodebin element, which we have talked about above.
  • Support for EGL in vaapidecode, vaapipostproc and vaapisink.
  • Skin tone enhancement support in vaapipostproc
  • Support for H.264 Multiview High profile encoding with more than 2 views, which encompasses Jan Schmidt’s efforts for stereoscopic / multi-view support in GStreamer (bug #611157)
  • A lot of improvements and rounded rough corners all over the place. Suffice it to say that we closed more than 70 bug reports during this cycle.

Now, please test this new release, enjoy it, and if you find something ugly, let us know!

GStreamer Hackfest 2015

Last weekend was the GStreamer Hackfest in Staines, UK, at Samsung's premises, who also sponsored the dinners and lunches. Special thanks to Luis de Bethencourt, the almighty organizer!

My main purpose was to sip one or two pints with the GStreamer folks and, secondarily, to talk about gstreamer-vaapi, WebKitGTK+ and the new OpenGL/ES support in gst-plugins-bad.


About gstreamer-vaapi, there were a couple of questions about some problems shown downstream (stable releases in distributions), and I was happy to announce that they are mostly fixed upstream. On the other hand, Sebastian Dröge was worried about the existing support for GStreamer 0.10 and I answered him that its removal is already in the pipeline. He looked pleased.

Related to gstreamer-vaapi and the new GstGL, we tested and merged a patch for GLES2/EGL, so now it is possible to render VA-API decoded video through glimagesink with (nearly) zero-copy. Sadly, this is not currently possible using GLX. Along the way I found a silly bug that came from a previous patch of mine and fixed it; we also fixed another small bug in the gluploader.
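
In other words, a pipeline like this sketch should now work with (nearly) zero-copy on EGL/GLES2 setups (the file name is illustrative):

   # VA-API decoded frames uploaded to GL textures and rendered by glimagesink.
   $ gst-launch-1.0 filesrc location=sample.mp4 ! qtdemux ! h264parse ! \
         vaapidecode ! glimagesink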

In the WebKitGTK+ realm, I worked on a new functionality: sharing the OpenGL context and the display of the browser with the GStreamer pipeline. With it, we could add GL filters into the pipeline. But honour to whom honour is due: this patch is a split of a previous patch done by Philippe Normand. The ultimate goal is to ditch the custom video sink in WebKit and reuse glimagesink, with its new off-screen rendering feature.

Finally, on Sunday afternoon, I walked around Richmond, and it is beautiful.


Thanks to Igalia, Intel and all the sponsors that made the hackfest and my attendance possible.

Munich

I spent a week in Munich. I went there for two reasons: to attend the Web & TV workshop organized by the W3C, and to hack along with the gst-gang at the GStreamer Hackfest 2014. All this was sponsored by Igalia, my company.

I arrived in Munich on Tuesday evening, and when I reached the Marienplatz metro station, I ran into a crowd of Bayern Munich fans, chanting songs about the glory of their team, huddling and dancing. And a lot of police officers surrounding the tracks.

The workshop was organized by the W3C Web and TV Interest Group, and intended to spark discussions around how to integrate and standardize TV technologies and the Web.

The first slide of the workshop

On Wednesday morning the workshop began. People from Espial and Samsung talked about HbbTV, and Japanese broadcasters talked about their Hybridcast. Both technologies seek to enhance the television experience using Internet protocols, the former for Europe and the latter for Japan. Also, broadcasters from China showed their approach using ad-hoc technologies. I have to say that Hybridcast awed me.

 

Afterwards, most of the workshop revolved around the problem of the companion device. People showed their solutions and proposals, in particular about device discovery, and data sharing and control. All the solutions relied on WebSockets and WebRTC for the data sharing between devices.

 

During the panels, I very much enjoyed the participation of Jon Piesing, in particular his slide summarizing the specifications used by HbbTV V2. It's like juggling specs!

Specifications used by HbbTV V2

Broadcasters are very interested in Encrypted Media Extensions and Media APIs, for example the specification for a Tuner API. There is also a lot of expectation about metadata handling and defining common TV ontologies.

Finally, there were a couple of talks about miscellaneous technologies surrounding IPTV broadcasting.

The second stage of my visit to Bavaria's capital was the GStreamer Hackfest. It was at the Google offices, near Marienplatz.

Christian Schaller has written a very good summary of what happened during the hackfest. For my part, I worked with Nicolas Dufresne on the v4l2 video converter for the Exynos4, which is a piece required for hardware accelerated decoding on that platform using the v4l2 video decoder.