Introducing the Chromium-based web runtime for the AGL platform

Igalia has been working with AGL (Automotive Grade Linux) to provide a web application runtime for their platform, based on Chromium. We delivered the first phase of this project back in January, and it’s been available since the Flounder 6.0.5 and Guppy 7.0.0 releases. This is a summary of how it came to be and of what comes next.

Igalia stand showing the AGL web runtime


The Chromium startup process

I’ve been investigating the process of Chromium startup, the classes involved and the calls exchanged between them. This is a summary of my findings!

There are several implementations of a browser living inside the Chromium source code, known as “shells”. Chrome is the main one, of course, but there are other implementations like the content_shell, a minimal browser designed to exercise the content API; the app_shell, a minimal container for Chrome Apps; and several others.

To investigate the differences between these shells, we can start by checking the binary entry point and following how execution evolves from there. This is a sequence diagram that starts from the content_shell main() function:

content_shell and app_shell sequence diagram

It creates two objects, ShellMainDelegate and ContentMainParams, then hands control to ContentMain() as implemented in the content module.
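
In code, the entry point is tiny. This is a simplified sketch of what content_shell’s main() does on Linux (headers, error handling and platform-specific branches omitted):

int main(int argc, const char** argv) {
  // The delegate is where content_shell customizes the content layer.
  content::ShellMainDelegate delegate;
  content::ContentMainParams params(&delegate);
  params.argc = argc;
  params.argv = argv;
  // From here on, generic code in the content module takes over.
  return content::ContentMain(params);
}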

Chrome’s main() is very similar: it also creates a couple of objects and then hands control to ContentMain(), following exactly the same code path from that point onward:

Chrome init sequence diagram

If we took a look at app_shell, we would find it very similar, and it’s probably the same for other shells. So where’s the magic? What makes the many shells in Chromium different? The key is the first object created in the main() function:

ContentMainDelegate class diagram

Those *MainDelegate objects created in main() are implementations of ContentMainDelegate. The delegate gets control at key moments of the initialization process, so each shell can customize what happens. Two important events are the calls to CreateContentBrowserClient and CreateContentRendererClient, which let the shells customize the behavior of the Browser and Renderer processes.
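
As a rough sketch, a shell’s delegate looks like the code below. The overridden methods are the real ContentMainDelegate hooks; the member names and bodies are illustrative, and the two client classes are the shell’s own implementations:

class ShellMainDelegate : public content::ContentMainDelegate {
 public:
  // Called by the content layer when the browser process starts up.
  content::ContentBrowserClient* CreateContentBrowserClient() override {
    browser_client_ = std::make_unique<ShellContentBrowserClient>();
    return browser_client_.get();
  }
  // Called by the content layer inside each renderer process.
  content::ContentRendererClient* CreateContentRendererClient() override {
    renderer_client_ = std::make_unique<ShellContentRendererClient>();
    return renderer_client_.get();
  }

 private:
  std::unique_ptr<ShellContentBrowserClient> browser_client_;
  std::unique_ptr<ShellContentRendererClient> renderer_client_;
};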

ContentBrowserClient class diagram

The diagram above shows how the ContentMainDelegate implementations provided by the different shells each instantiate their own implementation of ContentBrowserClient. This class runs in the UI thread and can customize the browser logic: its API can enable or disable certain behaviors (e.g. AllowGpuLaunchRetryOnIOThread), provide delegates for certain objects (e.g. GetWebContentsViewDelegate), etc. A notable responsibility of ContentBrowserClient is providing an implementation of BrowserMainParts, which runs code at specific stages of the initialization.
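
To make this more concrete, here is a hedged sketch loosely modeled on content_shell, using the hooks mentioned above. Exact signatures vary across Chromium versions, and ShellBrowserMainParts and the view-delegate factory are the shell’s own pieces, so treat this as illustrative:

class ShellContentBrowserClient : public content::ContentBrowserClient {
 public:
  // BrowserMainParts lets the shell run code at well-defined stages of
  // browser startup (pre/post message loop, etc.).
  content::BrowserMainParts* CreateBrowserMainParts(
      const content::MainFunctionParams& parameters) override {
    return new ShellBrowserMainParts(parameters);
  }

  // Example of a simple parameter hook.
  bool AllowGpuLaunchRetryOnIOThread() override { return true; }

  // Example of a delegate hook: lets the shell customize the WebContents
  // view. The factory function is the shell's own helper.
  content::WebContentsViewDelegate* GetWebContentsViewDelegate(
      content::WebContents* web_contents) override {
    return CreateShellWebContentsViewDelegate(web_contents);
  }
};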

There is a parallel hierarchy of ContentRendererClient classes, which works analogously to what we’ve just seen for ContentBrowserClient:

ContentRendererClient class diagram

The specific case of extensions::ShellContentRendererClient is interesting because it contains the details to set up the extensions API:

ShellContentRendererClient class diagram

The purpose of both ExtensionsClient and ExtensionsRendererClient is to set up the extensions system. The difference is that ExtensionsRendererClient knows about the renderer process and its concepts: only methods that make use of that knowledge should live there, while everything else belongs in ExtensionsClient, which already has a much bigger API.
The specific implementation of ShellExtensionsRendererClient is very simple, but it owns an instance of extensions::Dispatcher; this is an important class that sets up extension features on demand whenever necessary.
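
As a reference for how these pieces are wired together, this is a heavily simplified sketch of what app_shell’s renderer client does when the render thread starts, based on the real extensions::ShellContentRendererClient (details and extra members omitted):

class ShellContentRendererClient : public content::ContentRendererClient {
 public:
  void RenderThreadStarted() override {
    // Register the app_shell-specific implementations with the extensions
    // system; ShellExtensionsRendererClient, in turn, owns the
    // extensions::Dispatcher that sets up extension features on demand.
    extensions_client_ = std::make_unique<extensions::ShellExtensionsClient>();
    extensions::ExtensionsClient::Set(extensions_client_.get());
    extensions_renderer_client_ =
        std::make_unique<extensions::ShellExtensionsRendererClient>();
    extensions::ExtensionsRendererClient::Set(
        extensions_renderer_client_.get());
  }

 private:
  std::unique_ptr<extensions::ShellExtensionsClient> extensions_client_;
  std::unique_ptr<extensions::ShellExtensionsRendererClient>
      extensions_renderer_client_;
};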

The investigation may continue in different directions, and I’ll try to share more reports like this one. Finally, here are the source files for the diagrams and a shared document containing the same information as this report, where any comments, corrections and updates are welcome!

Chromium official/release builds and icecc

You may already be using icecc to compile your Chromium, either by following instructions like the ones published by my colleague Gyuyoung or by using the popular icecc-chromium set of scripts. In either case, you will probably get into some trouble if you try to generate an official build with that configuration.

First, let me recap what an “official build” means in Chromium. You may know that build optimization in Chromium depends on two GN flags:

  • is_debug
    Debug build. Enabling official builds automatically sets is_debug to false.

  • is_official_build
    Set to enable the official build level of optimization. This has nothing
    to do with branding, but enables an additional level of optimization above
    release (!is_debug). This might be better expressed as a tri-state
    (debug, release, official) but for historical reasons there are two
    separate flags.

The GN documentation is pretty verbose about this. To sum up, to get full binary optimization you should enable is_official_build, which will also set is_debug to false behind the scenes. This is what other projects would call a release build.
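
In practice, that means the args.gn of a fully optimized build just needs:

# Full optimization; is_debug is automatically set to false.
is_official_build = true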

Back to the main topic, I was running an official build distributed via icecc and stumbled on some compilation problems:

clang: error: no such file or directory: /usr/lib/clang/7.0.0/share/cfi_blacklist.txt
clang: error: no such file or directory: ../../tools/cfi/blacklist.txt
clang: error: no such file or directory: /path/to/src/chrome/android/profiles/afdo.prof

These errors didn’t happen when the icecc build was disabled, so I was certain I had run into some limitations of the distributed compiler. The icecc-chromium set of scripts was already disabling a number of clang cleanup/sanitizer tools, so I decided to take the same approach. First, I checked the GN args that could be related to these errors and identified two:

  • is_cfi
    Current value (from the default) = true
    From //build/config/sanitizers/sanitizers.gni:53

    Compile with Control Flow Integrity to protect virtual calls and casts.
    See http://clang.llvm.org/docs/ControlFlowIntegrity.html

    TODO(pcc): Remove this flag if/when CFI is enabled in all official builds.

  • clang_use_default_sample_profile
    Current value (from the default) = true
    From //build/config/compiler/BUILD.gn:117

    Some configurations have default sample profiles. If this is true and
    clang_sample_profile_path is empty, we’ll fall back to the default.

    We currently only have default profiles for Chromium in-tree, so we disable
    this by default for all downstream projects, since these profiles are likely
    nonsensical for said projects.

These two args were enabled by default; I just disabled them and got rid of the compilation flags that were causing trouble: -fprofile-sample-use=/path/to/src/chrome/android/profiles/afdo.prof, -fsanitize=cfi-vcall and -fsanitize-blacklist=../../tools/cfi/blacklist.txt. I’ve learned that support for -fsanitize-blacklist is available in upstream icecc, but most distros don’t package it yet, so it’s safer to disable it.

To sum up, if you are using icecc and you want to run an official build, you have to add a couple more GN args:

clang_use_default_sample_profile = false
is_cfi = false
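
So the relevant part of args.gn for an icecc-distributed official build ends up looking like this (any icecc-specific wrapper settings from your setup come on top of it):

is_official_build = true
is_cfi = false
clang_use_default_sample_profile = false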

Updated Chromium on the GENIVI platform

I’ve devoted some of my time at Igalia to getting a newer version of Chromium running on the GENIVI Development Platform (GDP).

Since the last update, there have been some developments regarding Wayland support in Chromium. My colleagues Antonio, Maksim and Frédéric have worked on a new Wayland backend that follows the modern Chromium architecture. You can find more information in their own blogs and talks; I’m linking the most recent talk, from FOSDEM 2018.

Everyone can already try the new, Igalia-developed backend on their embedded devices using the meta-browser layer. I built it along with the GDP but discovered that it cannot run as-is, due to the lack of ivi-shell hooks in the new Chromium backend. This will be fixed in the mid-term, so I decided not to spend much time researching it and chose a different solution for the current GDP release.

The LG SVL team recently announced the release of an updated Ozone Wayland backend for Chromium, based on the legacy implementation provided by Intel, as a part of the webOS OSE project. This is an update on the backend we were already running on the GDP, so it looked like a good idea to reuse their work.

I added the meta-lgsvl-browser layer to the GDP, which provides recipes for several Chromium flavors: chromium-lge is the one that builds upon the legacy Wayland backend and currently provides Chromium version 64.

The chromium-lge browser worked out-of-the-box on the Raspberry Pi, but I ran into some trouble on the other supported platforms. On ARM64 platforms, we hit a “relocation overflow” problem. This is something my colleagues had already detected when trying the new Wayland backend on the R-Car gen. 3 platform, and it can be fixed by enabling the compiler optimization flags for binary size.

In the case of Intel platforms, compilation failed due to a build-system assertion: it looks like Clang’s Control Flow Integrity feature is enabled by default on x64 Linux builds, but we aren’t using the Clang compiler. The solution is simply to disable this feature, as the upstream meta-browser project was already doing.

The ongoing work is shared in this pull request. I hope to be able to make it for the next GDP release!

Finally, this week my colleague Xavi is taking part in the GENIVI All Member Meeting. If you are interested in browsers, make sure you attend his talk, “Wayland Support in Open Source Browsers”, and visit our booth during the Member Showcase! (EDIT: check out the slides on our SlideShare page!)