CSS 3D transformations & SVG

As mentioned in my first article, I have a long relationship with the WebKit project and its SVG implementation. In this post I will explain some exciting new developments and possible future advances, and present some demos of the current state of the art (if you cannot wait, go and watch them, and come back for the details). To understand why these developments are both important and achievable now, though, we first have to understand some history.

Note: In order to focus on the high-level overview, many technical details are omitted or shortened – the intention of this article is to give both developers and web authors an overview of the status of SVG and the technological advances around it.

Resolving the co-evolutionary branches of HTML, CSS and SVG - and why it matters

Many of the concepts that empower SVG were not present in HTML/CSS when the first SVG 1.0 specification was released back in 2001. SVG requires positioning and painting graphical elements in arbitrary user coordinate systems. What does that mean in practice? You can define an SVG document fragment in a coordinate system that spans only one pixel and scale it up by a factor of 100 using an affine transformation. In the early 2000s none of this was possible with HTML and CSS: CSS (2D) Transforms were not born yet, and HTML was laid out and painted in integer-based coordinate systems.
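For illustration, here is a minimal hand-written sketch: the viewBox establishes a user coordinate system spanning a single unit, and the viewport maps it onto 100×100 CSS pixels – an affine scale by a factor of 100.

    <!-- A 1x1 user coordinate system, scaled up to 100x100 CSS pixels. -->
    <svg width="100" height="100" viewBox="0 0 1 1">
      <!-- All coordinates inside the fragment are fractional. -->
      <circle cx="0.5" cy="0.5" r="0.25" fill="crimson"/>
    </svg>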

To support SVG in a web browser, the rendering pipeline needs to support fractional coordinate systems. Back in 2005, when ksvg2 was integrated into WebKit, a new rendering path was defined: a layout and painting model based on floating-point numbers, as opposed to the existing integer-based layout and painting model.

History of transformation support for HTML/CSS

In 2007, WebKit pioneered support for arbitrary affine transformations on CSS boxes through the -webkit-transform CSS property. This was the beginning of a success story: CSS Transforms were born, and later formalized in CSS Transforms Module Level 1, opening up many new possibilities for web page authors. Two years later, CSS 3D transformations were invented and prototyped in WebKit: a killer feature allowing many new possibilities and a whole new dimension of graphical effects for web page authors. At the same time, however, these new 2D/3D transform features made scripting web pages with JavaScript much harder and – back then – unpredictable.
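To give a flavour of what these properties look like in a stylesheet (a minimal sketch; the class names are made up), here is a 2D and a 3D transform applied to ordinary CSS boxes:

    /* 2D transform – originally written as -webkit-transform. */
    .card {
      transform: rotate(10deg) scale(1.25);
    }

    /* 3D transform – the element is rotated around the X axis in 3D space. */
    .panel {
      transform: perspective(600px) rotateX(45deg);
    }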

Why? Since the whole HTML/CSS rendering engine was still integer-based, simple questions such as asking for the size of a CSS box might yield a non-integer width/height, e.g. due to a rotation or a non-integral scale applied to the box. The obvious answer is: all sizes, positions, etc. need to become floating-point numbers. As is often the case, the straightforward solution is naive and has major drawbacks: floating-point numbers are a natural source of numerical instabilities, leading to rendering artefacts such as unwanted aliasing when objects are painted at non-integral positions, no longer aligned with device pixels.
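A small sketch of the problem (the element and its id are made up): a box with an integer width/height reports fractional geometry as soon as a rotation is applied and you query it from JavaScript:

    <div id="box" style="width: 100px; height: 50px; transform: rotate(10deg)"></div>
    <script>
      // The bounding box of the rotated element is no longer integer-sized.
      const rect = document.getElementById("box").getBoundingClientRect();
      console.log(rect.width, rect.height); // non-integral values, roughly 107.16 and 66.61
    </script>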

Introduction of sub-pixel rendering for HTML/CSS

To avoid switching to floating-point numbers, WebKit invented so-called “LayoutUnits”. LayoutUnit is an abstraction used to represent positions and sizes in fractions of a pixel. WebKit chose to divide each logical pixel into 64 chunks, thus representing all values as multiples of 1/64th of a CSS pixel. This allows the engine to continue using integer math and avoids floating-point imprecision, while still exposing sub-pixel precision to web authors.
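The idea behind this fixed-point representation can be sketched in a few lines of JavaScript (a simplified illustration of the concept, not WebKit's actual C++ LayoutUnit class):

    // Store lengths as integer multiples of 1/64th of a CSS pixel.
    const kFixedPointDenominator = 64;

    function toLayoutUnit(cssPixels) {
      // Round to the nearest representable 1/64th – the result is a plain integer.
      return Math.round(cssPixels * kFixedPointDenominator);
    }

    function toCSSPixels(layoutUnit) {
      return layoutUnit / kFixedPointDenominator;
    }

    // 10.3px is stored as 659 (i.e. 10.296875px) – all further math stays integer-based.
    console.log(toLayoutUnit(10.3), toCSSPixels(toLayoutUnit(10.3)));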

This work started roughly in 2012, driven by Google, as it was the only way forward to properly support CSS 2D/3D transforms throughout the engine: during layout, painting and hit-testing. As you can imagine, this was a multi-year effort to carefully transition all integer-based code to LayoutUnits. However, transitioning the engine from integers to LayoutUnits is only one side of the coin: all APIs exposed to the Web needed to be revised in order to support non-integer locations/sizes. An old article from the IE team highlights some of the difficulties that arose: be sure to read it to get a feeling for the complexity of the problem.

In 2013, the Apple WebKit ports enabled sub-pixel layout by default, and many regressions were fixed in the following months. At the same time, specification authors worked hard to address the problems that arose and to change the relevant specifications where needed.

From the HTML/CSS perspective, many problems were properly solved at this point: layout, painting and hit-testing work with sub-pixel precision, and many APIs (such as the CSS OM) now report non-integral results that can be queried from JavaScript. Job done, goal achieved for HTML/CSS! But what about SVG?

Hardware accelerated web page rendering in WebKit

Before we examine how all of this relates to SVG rendering, there is one more thing to keep in mind. In the early 2000s there was no support for animations in HTML/CSS, other than scripting them with JavaScript. This changed with the introduction of CSS Animations/Transitions (e.g. at the end of 2007 in WebKit). These features allow web authors to animate many CSS properties using CSS alone, without involving JavaScript.
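A minimal sketch (the class names are made up): a property change animated purely by a CSS transition, with no JavaScript involved in the animation itself:

    /* Whenever the element's opacity changes (e.g. a class is toggled, or on
       :hover), the change is animated over 300ms – by CSS alone. */
    .fade {
      opacity: 1;
      transition: opacity 300ms ease-in-out;
    }
    .fade.hidden {
      opacity: 0;
    }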

One of the properties that can be animated is the transform (previously known as -webkit-transform) property. How would you implement animating a transform of a CSS box? In a naive implementation, you would simply re-layout the element at its new position for a given time t and re-render the whole web page. This would be very time consuming, since the layout operation is non-trivial and thus not cheap at all. Furthermore, if the animated CSS box is e.g. absolutely positioned and drawn on top of everything else, there is no need to re-render the whole web page “below” it (ignoring transparency in this simple example), since the non-animated content remains static.

To make a long story short: WebKit defines layers that are rendered into separate backing stores. In the given example, the whole web page would be rendered into one backing store (except the animated box), and the animated box into another. When rendering the web page to the screen, only the layer that has an animated transform needs to be re-drawn; the static content remains as-is. These two image buffers are then composited into one final image that is painted on the screen – all backed by image buffers residing on the GPU. This is efficient, and we call it hardware accelerated compositing.
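Roughly like this (a hand-written sketch; whether an element actually gets its own layer is an engine-internal decision, but an animated transform on an absolutely positioned box is a typical trigger):

    <style>
      @keyframes spin {
        to { transform: rotate(360deg); }
      }
      /* The animated box typically gets its own compositing layer: only this
         layer is re-drawn per frame, the static content below it is not. */
      .spinner {
        position: absolute;
        top: 20px; left: 20px;
        width: 100px; height: 100px;
        background: steelblue;
        animation: spin 3s linear infinite;
      }
    </style>
    <div class="spinner"></div>
    <p>Static page content – rendered once into its own backing store.</p>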

So HTML/CSS is laid out on sub-pixel boundaries and supports CSS 2D/3D transformations and CSS animations/transitions – all hardware-accelerated.

“One platform”: Paving the way towards a unification of the HTML/CSS & SVG rendering engine

The SVG implementation in WebKit does not support hardware-accelerated compositing at all. All techniques mentioned before were only implemented in the HTML/CSS rendering code path and are not applicable to SVG. You might ask yourself why it is not possible to apply these techniques to SVG. Indeed it is possible, but nobody wanted to reinvent the wheel back in 2012. Back then I hoped that the upcoming CSS3/SVG2 specifications would formalize how to properly integrate SVG with all these CSS features such as 2D/3D transformations, animations/transitions, etc. Once that is done, we can try to re-use the existing WebKit implementation and make it work for SVG too, instead of re-implementing all the hardware acceleration code separately for SVG. Lesson learned from the last two decades: it is extremely difficult to maintain two different rendering code paths, and we should invest more time in unifying them. Furthermore, making SVG less special is an important goal to unleash its true power: less confusing for web authors, easier to implement for developers. Be sure to check out the post from my colleague Brian Kardell on that topic.

Just to show one example of why formalization is necessary first, take the SVG transform attribute: what happens if you specify both the SVG transform attribute and the CSS transform property at the same time on an SVG element – do both transformations take effect, or does one of them override the other? What about the SVG DOM? It exposes an API to query the transformations from e.g. JavaScript – how does it respond if a CSS transform property is set? The list of open questions is huge! It took many years of hard work by the SVG and CSS working groups to define and specify the rules for how all of these technologies work together.

Nowadays the answer is clear: the SVG transform attribute is mapped to the CSS transform property, as specified in CSS Transforms Module Level 1 (which is still only a W3C Candidate Recommendation!).
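In practice this means the attribute behaves like a presentation attribute: it participates in the CSS cascade, so a transform declared in an author stylesheet overrides it (a minimal sketch):

    <svg width="200" height="200" viewBox="0 0 200 200">
      <style>
        /* The CSS declaration wins over the presentation attribute below:
           the rectangle is rotated, not scaled. */
        rect { transform: rotate(45deg); }
      </style>
      <rect x="50" y="50" width="100" height="100" transform="scale(2)" fill="teal"/>
    </svg>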

So does WebKit have a fast, fluent, fully hardware-accelerated SVG animation engine in 2019? No. Why? Simply because no one attempted to unify the rendering code paths, even though, from a specification point of view, many of the open questions have been answered. It is doable now; we only need a few dedicated people to tackle it.

2020: Hardware-acceleration for SVG

My first task at Igalia is to demonstrate that we can re-implement the SVG rendering engine on top of HTML/CSS, while preserving SVG-specific needs such as arbitrary-precision coordinate systems (not limited to 1/64th of a CSS pixel). This will make SVG “less special”, and as a bonus we get all the nifty features, such as CSS 3D transformations for SVG, for free. Furthermore, many long-standing issues can be tackled, related to improper foreignObject support, repainting issues when embedding SVG in HTML (or vice versa), etc. Without going too much into the details, believe me that this is a huge step forward.
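To give an idea of what this unlocks, here is a hand-written sketch in the spirit of the demo (not the demo source itself): a CSS 3D transformation, animated from CSS, applied directly to SVG content:

    <svg width="400" height="400" viewBox="0 0 400 400">
      <style>
        @keyframes tilt {
          from { transform: perspective(800px) rotateX(0deg); }
          to   { transform: perspective(800px) rotateX(60deg); }
        }
        /* A CSS 3D transformation, animated, on an SVG group – the kind of
           effect shown in the Proof-Of-Concept demo. */
        #artwork {
          transform-origin: 200px 200px;
          animation: tilt 2s ease-in-out infinite alternate;
        }
      </style>
      <g id="artwork">
        <rect x="100" y="100" width="200" height="200" fill="darkorange"/>
      </g>
    </svg>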

The video below shows a screencast of a reveal.js presentation that I gave during the WebKit codecamp at the last Igalia meeting in A Coruña. You can see my Proof-Of-Concept implementation using the WebKitGTK port in action, demoing for the first time: CSS 3D transformations in SVG. I am excited about this and I hope you enjoy the video.

Demo using WPE WebKit port on low-power hardware

Igalia maintains the WPE port and deeply cares about embedded systems. My fellow Igalian, Brian Kardell, immediately asked me after the demo how it would perform on low-power hardware, such as a Raspberry Pi 3. A few days later my colleague Pablo Saavedra crafted an image that allowed us to compare the Proof-Of-Concept branch with vanilla WebKit – thanks Pablo and Brian!

The first 20 seconds of the video show a demo using vanilla WebKit, where the SVG tiger is slightly rotated in 2D space. In addition, a 3D perspective transformation is applied, whose effect is not visible since it is unsupported in vanilla WebKit. The animation appears jumpy and slow – not something you could deploy in your web application (no hardware-accelerated compositing!). In the next 15 seconds the same demo runs on the Proof-Of-Concept WebKit branch and appears smooth and fluent, as you would expect from a state-of-the-art SVG rendering engine!

Afterwards the SVG tiger is animated (skewed and rotated) using vanilla WebKit - the performance is lousy. Finally the same animation is demoed using the Proof-Of-Concept WebKit branch.

I hope you share my excitement about these developments. The code is not public yet, but will be shared soon – once I am convinced that the experiment works out, and the design of the new rendering engine is sane.

Thanks to Igalia for allowing me to work on this important aspect of SVG. Special thanks to Brian Kardell, who suggested writing about this topic, Pablo Saavedra for setting up the test hardware, and the whole WebKit graphics team at Igalia for their support!

Thanks for reading the article – spread the news about these exciting SVG developments. Stay tuned!
