{	"version": "https://jsonfeed.org/version/1.1",
	"title": "José Dapena blog",
	"language": "en",
	"home_page_url": "https://blogs.igalia.com/dape/",
	"feed_url": "https://blogs.igalia.com/dape/feed/feed.json",
	"description": "José Dapena blog | Chromium, WebPerf &amp; Open Source",
	"author": {
		"name": "José Dapena Paz",
		"url": "https://blogs.igalia.com/dape"
	},
	"items": [
		{
			"id": "https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/",
			"url": "https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/",
			"title": "The implementation of Container Timing: aggregating paints in Blink",
			"content_html": "<p>Measuring paint performance is a balancing act: you need precision, but the measurement itself can’t slow things down.</p>\n<p>In my <a href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">previous post</a>, I introduced <strong>Container Timing</strong>, a new web API allowing developers to measure the rendering performance of DOM subtrees. Today, I will dive into the technical details of how I implemented this in <strong>Blink</strong>, the rendering engine used by Chromium.</p>\n<h2 id=\"the-architecture-hooking-into-paint\" tabindex=\"-1\">The Architecture: Hooking into Paint <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>In Blink, the rendering pipeline goes through several stages: Style, Layout, Paint, and Composite. The Container Timing implementation relies heavily on the <strong>Paint</strong> stage.</p>\n<p>The main idea was <strong>not reinventing the wheel</strong>. Blink already provides paint timing detection for the implementation of Large Contentful Paint (LCP) and Element Timing. However, this is targeted for <em>specific</em> nodes (an image, a text block). In Container Timing we care about <em>subtrees</em>.</p>\n<p>So, when a paint is detected, we need to quickly decide whether the paint  is relevant to Container Timing.</p>\n<h2 id=\"is-a-paint-interesting-for-container-timing\" tabindex=\"-1\">Is a paint interesting for Container Timing? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>As the DOM tree is built (on parsing, or because of a script), we check the value of the attribute <code>containertiming</code> for each <code>Element</code>. 
When found, we flag that element and all its descendants with the flag <code>SelfOrAncestorHasContainerTiming</code>.</p>\n<p>We also have the attribute <code>containertiming-ignore</code>. When found, we will stop the propagation.</p>\n<p>So, later, for any paint, we will immediately know if the paint should be tracked for Container Timing or not. This minimizes the impact when the element is not tracked.</p>\n<div class=\"markdown-alert markdown-alert-important\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-report mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M0 1.75C0 .784.784 0 1.75 0h12.5C15.216 0 16 .784 16 1.75v9.5A1.75 1.75 0 0 1 14.25 13H8.06l-2.573 2.573A1.458 1.458 0 0 1 3 14.543V13H1.75A1.75 1.75 0 0 1 0 11.25Zm1.75-.25a.25.25 0 0 0-.25.25v9.5c0 .138.112.25.25.25h2a.75.75 0 0 1 .75.75v2.19l2.72-2.72a.749.749 0 0 1 .53-.22h6.5a.25.25 0 0 0 .25-.25v-9.5a.25.25 0 0 0-.25-.25Zm7 2.25v2.5a.75.75 0 0 1-1.5 0v-2.5a.75.75 0 0 1 1.5 0ZM9 9a1 1 0 1 1-2 0 1 1 0 0 1 2 0Z\"></path></svg>What about DOM tree updates after parsing?</p><p>This is a pain point for performance. 
When a DOM element starts/stops having the <code>containertiming</code> or <code>containertiming-ignore</code> attribute after the DOM tree is created, we need to traverse the tree to update the flag.</p>\n</div>\n<h2 id=\"collecting-paint-updates\" tabindex=\"-1\">Collecting Paint Updates <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>When a paint is detected, we just reuse the existing implementation in the <code>ImagePaintTimingDetector</code> and <code>TextPaintTimingDetector</code>, which are also used by LCP and Element Timing for the relevant elements.</p>\n<div class=\"markdown-alert markdown-alert-note\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-info mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8Zm8-6.5a6.5 6.5 0 1 0 0 13 6.5 6.5 0 0 0 0-13ZM6.5 7.75A.75.75 0 0 1 7.25 7h1a.75.75 0 0 1 .75.75v2.75h.25a.75.75 0 0 1 0 1.5h-2a.75.75 0 0 1 0-1.5h.25v-2h-.25a.75.75 0 0 1-.75-.75ZM8 6a1 1 0 1 1 0-2 1 1 0 0 1 0 2Z\"></path></svg>Note</p><p>Only text and image paints are currently tracked. Video, canvas, and SVG are not yet supported.</p>\n</div>\n<p>We first determine if the paint should be recorded for Container Timing; this is fast thanks to the <code>SelfOrAncestorHasContainerTiming</code> flag.</p>\n<p>The timing detectors give us the area of the visual rectangle: the bounding box on screen that was painted.</p>\n<p>For Container Timing, we added a mechanism to walk up the DOM tree from the painted node. 
If we encounter an ancestor that is marked with the <code>containertiming</code> attribute (a <strong>container timing root</strong>), we report that paint event to it.</p>\n<p>This “bubbling up” of paint events is illustrated in the diagram below.</p>\n<p><img src=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/images/propagation-to-ancestor.png\" alt=\"Within the Blink rendering pipeline, paint events from individual text and image nodes are captured by the paint timing detectors and then &quot;bubble up&quot; to their ancestor container, allowing for subtree-level aggregation.\" class=\"dark-invert\"></p>\n<div class=\"markdown-alert markdown-alert-tip\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-light-bulb mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M8 1.5c-2.363 0-4 1.69-4 3.75 0 .984.424 1.625.984 2.304l.214.253c.223.264.47.556.673.848.284.411.537.896.621 1.49a.75.75 0 0 1-1.484.211c-.04-.282-.163-.547-.37-.847a8.456 8.456 0 0 0-.542-.68c-.084-.1-.173-.205-.268-.32C3.201 7.75 2.5 6.766 2.5 5.25 2.5 2.31 4.863 0 8 0s5.5 2.31 5.5 5.25c0 1.516-.701 2.5-1.328 3.259-.095.115-.184.22-.268.319-.207.245-.383.453-.541.681-.208.3-.33.565-.37.847a.751.751 0 0 1-1.485-.212c.084-.593.337-1.078.621-1.489.203-.292.45-.584.673-.848.075-.088.147-.173.213-.253.561-.679.985-1.32.985-2.304 0-2.06-1.637-3.75-4-3.75ZM5.75 12h4.5a.75.75 0 0 1 0 1.5h-4.5a.75.75 0 0 1 0-1.5ZM6 15.25a.75.75 0 0 1 .75-.75h2.5a.75.75 0 0 1 0 1.5h-2.5a.75.75 0 0 1-.75-.75Z\"></path></svg>Is this expensive?</p><p>It depends on the depth of the hierarchy from the node to the most remote ancestor. 
Further work will be needed to speed up or avoid these traversals.</p>\n</div>\n<h2 id=\"aggregating-regions\" tabindex=\"-1\">Aggregating Regions <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>One of the most interesting challenges was determining the <code>size</code> of the container. It is not just the size of the <em>container timing root</em>. It is the <strong>union of all painted content</strong>.</p>\n<p>Two reasons for this:</p>\n<ul>\n<li>Being able to incrementally determine the updated area, in a way that is inspired by Largest Contentful Paint.</li>\n<li>To reduce the number of performance events generated, we <strong>discard</strong> the paints that do not increase the area.</li>\n</ul>\n<p>We maintain a <code>PaintedRegion</code> for each container. This is a non-overlapping union of the rectangles that cover the updated area:</p>\n<ol>\n<li><strong>Initial Paint:</strong> When the first child paints, we initialize the region with its visual rectangle.</li>\n<li><strong>Subsequent Paints:</strong> As more images load or text renders, we perform a union operation: <code>CurrentRegion = Union(CurrentRegion, NewPaintRect)</code>.</li>\n</ol>\n<p>So, as paints are detected, each container will aggregate the parts of the screen that have been painted by all of its children.</p>\n<p>We use <a href=\"https://chromium.googlesource.com/chromium/src/+/HEAD/cc/base/region.h\"><code>cc::Region</code></a>, based on <a href=\"https://api.skia.org/classSkRegion.html\"><code>SkRegion</code></a> from the Skia graphics library, to handle these unions efficiently.</p>\n<p>The following diagram shows this process in action over three frames.</p>\n<p><img src=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/images/painted-regions.png\" alt=\"The painted region of a container is the union of the painted areas of its 
children. As new content paints, the region grows to encompass all visible parts of the container's subtree.\" class=\"dark-invert\"></p>\n<h2 id=\"buffering-and-reporting\" tabindex=\"-1\">Buffering and Reporting <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>Because a container paints over multiple frames (e.g., text renders first, then a background image, then a lazy-loaded icon), we cannot just emit one entry. We generate <strong>candidates</strong>.</p>\n<p>For each container, when a paint that increases the painted region is detected, we schedule a new event. Right at the end of the frame presentation, we package the current state into a new performance timeline entry: a <code>PerformanceContainerTiming</code> object.</p>\n<p>This object contains:</p>\n<ul>\n<li><code>startTime</code>: The presentation time of the paint. In the Chromium implementation, this is set to the moment the frame was presented to the user, and matches <code>presentationTime</code> from <code>PaintTimingMixin</code>.</li>\n<li><code>firstRenderTime</code>: the time of the first paint we detected in the container. Useful for getting a hint of how long a component has been showing updates to the user.</li>\n<li>The container element, in two ways. The <code>identifier</code> is the value of the <code>containertiming</code> attribute. 
<code>rootElement</code> is the actual element.</li>\n<li><code>size</code>: The total area of the aggregated <code>PaintedRegion</code>.</li>\n<li><code>lastPaintedElement</code>: the last element that triggered a paint — handy for debugging which child caused the latest candidate.</li>\n</ul>\n<div class=\"markdown-alert markdown-alert-note\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-info mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M0 8a8 8 0 1 1 16 0A8 8 0 0 1 0 8Zm8-6.5a6.5 6.5 0 1 0 0 13 6.5 6.5 0 0 0 0-13ZM6.5 7.75A.75.75 0 0 1 7.25 7h1a.75.75 0 0 1 .75.75v2.75h.25a.75.75 0 0 1 0 1.5h-2a.75.75 0 0 1 0-1.5h.25v-2h-.25a.75.75 0 0 1-.75-.75ZM8 6a1 1 0 1 1 0-2 1 1 0 0 1 0 2Z\"></path></svg>Note</p><p>We support the <a href=\"https://www.w3.org/TR/paint-timing/#the-paint-timing-mixin\"><code>PaintTimingMixin</code></a>, which adds <code>paintTime</code> (when the paint was committed to the compositor) and <code>presentationTime</code> (when the frame was presented to the user). In Chromium, <code>startTime</code> is set to <code>presentationTime</code>.</p>\n</div>\n<p>This design means the observer might receive multiple entries for the same container. This is intentional: it lets developers pick the milestone that matters to them, typically the point where <code>size</code> stops growing.</p>\n<h2 id=\"handling-ignore\" tabindex=\"-1\">Handling “Ignore” <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>We also implemented the <code>containertiming-ignore</code> attribute. 
When a node has this attribute, it stops the <code>SelfOrAncestorHasContainerTiming</code> flag from propagating further down its subtree, so paints within it are not walked up to the container timing root, and never contribute to that container’s <code>PaintedRegion</code>.</p>\n<p><strong>Ignoring</strong> is useful in a number of situations:</p>\n<ul>\n<li>Debug overlays and instrumentation widgets, which should not inflate the measured painted area.</li>\n<li>Visually independent nested components: child dialogs or overlays that paint independently from the container and would affect the size metric if included.</li>\n</ul>\n<div class=\"markdown-alert markdown-alert-tip\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-light-bulb mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M8 1.5c-2.363 0-4 1.69-4 3.75 0 .984.424 1.625.984 2.304l.214.253c.223.264.47.556.673.848.284.411.537.896.621 1.49a.75.75 0 0 1-1.484.211c-.04-.282-.163-.547-.37-.847a8.456 8.456 0 0 0-.542-.68c-.084-.1-.173-.205-.268-.32C3.201 7.75 2.5 6.766 2.5 5.25 2.5 2.31 4.863 0 8 0s5.5 2.31 5.5 5.25c0 1.516-.701 2.5-1.328 3.259-.095.115-.184.22-.268.319-.207.245-.383.453-.541.681-.208.3-.33.565-.37.847a.751.751 0 0 1-1.485-.212c.084-.593.337-1.078.621-1.489.203-.292.45-.584.673-.848.075-.088.147-.173.213-.253.561-.679.985-1.32.985-2.304 0-2.06-1.637-3.75-4-3.75ZM5.75 12h4.5a.75.75 0 0 1 0 1.5h-4.5a.75.75 0 0 1 0-1.5ZM6 15.25a.75.75 0 0 1 .75-.75h2.5a.75.75 0 0 1 0 1.5h-2.5a.75.75 0 0 1-.75-.75Z\"></path></svg>Tip</p><p><code>containertiming-ignore</code> on large untracked subtrees also reduces traversal depth, helping with the cost mentioned above.</p>\n</div>\n<h2 id=\"how-to-test\" tabindex=\"-1\">How to test <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>With flag propagation, region aggregation, candidate buffering, 
and selective ignoring all in place, the implementation is complete.</p>\n<p><strong>Container Timing</strong> is <a href=\"https://groups.google.com/a/chromium.org/g/blink-dev/c/FnM3lweVssM/m/eVhhCtG5AQAJ\">ready for testing</a> in Chromium. Just use the Blink feature flag <code>ContainerTiming</code>:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">chrome --enable-blink-features<span class=\"token operator\">=</span>ContainerTiming</code></pre>\n<h2 id=\"what-s-next\" tabindex=\"-1\">What’s next? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<ul>\n<li>We are preparing an Origin Trial in Chromium, a new step towards enabling Container Timing by default. Stay tuned!</li>\n<li>Traversal optimizations. We have some ideas for avoiding a full tree traversal to find the container timing root when a paint is detected.</li>\n<li>Support for detecting paints in other parts of the tree. Shadow DOM is especially interesting here due to its importance in web components.</li>\n</ul>\n<h2 id=\"wrapping-up\" tabindex=\"-1\">Wrapping up <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>Building this native implementation was a great exercise in reusing Blink’s existing performance infrastructure while extending it to support subtree-level aggregation.</p>\n<p>The key insight: subtree-level metrics didn’t require a new paint tracking system, only a way to aggregate and bubble up what Blink was already measuring.</p>\n<p>The result is a native, low-overhead API for measuring the rendering performance of entire components.</p>\n<h2 id=\"thanks\" tabindex=\"-1\">Thanks! 
<a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<p>This  has been done as part of the collaboration between <a href=\"https://techatbloomberg.com\">Bloomberg</a> and <a href=\"https://www.igalia.com\">Igalia</a>. Thanks!</p>\n<p><a href=\"https://www.igalia.com\"><picture>\n<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://blogs.igalia.com/dape/img/igalia-logo-white-text.svg\">\n<img src=\"https://blogs.igalia.com/dape/img/igalia_-_500px_-_RGB_-_Feb23-580x210.png\" alt=\"Igalia\">\n</picture></a> <a href=\"https://techatbloomberg.com\"><img src=\"https://blogs.igalia.com/dape/img/Bloomberg-logo-580x117.png\" alt=\"Bloomberg\" class=\"dark-invert\"></a></p>\n<h2 id=\"references\" tabindex=\"-1\">References <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/03/26/the-implementation-of-container-timing-aggregating-paints-in-blink/\">#</a></h2>\n<ul>\n<li>Implementation of SelfOrAncestorHasContainerTiming: <a href=\"https://chromium.googlesource.com/chromium/src/+/ccebd1fb24b76dc2594e66b6fbad6c1192107405/third_party/blink/renderer/core/dom/node.h#1153\">1</a>, <a href=\"https://chromium.googlesource.com/chromium/src/+/ccebd1fb24b76dc2594e66b6fbad6c1192107405/third_party/blink/renderer/core/html/html_element.cc#3777\">2</a>.</li>\n<li><a href=\"https://chromium.googlesource.com/chromium/src/+/ccebd1fb24b76dc2594e66b6fbad6c1192107405/third_party/blink/renderer/core/paint/timing/container_timing.cc\">ContainerTiming aggregation at container_timing.cc</a>.</li>\n<li>Paint detection in <a href=\"https://chromium.googlesource.com/chromium/src/+/ccebd1fb24b76dc2594e66b6fbad6c1192107405/third_party/blink/renderer/core/paint/timing/text_paint_timing_detector.cc\">text</a> and <a 
href=\"https://chromium.googlesource.com/chromium/src/+/ccebd1fb24b76dc2594e66b6fbad6c1192107405/third_party/blink/renderer/core/paint/timing/image_paint_timing_detector.cc\">image</a> paint timing detectors.</li>\n<li><a href=\"https://wicg.github.io/container-timing/\">Specification draft @ WICG</a>.</li>\n<li>Related specifications: <a href=\"https://w3c.github.io/element-timing/\">Element Timing</a>, <a href=\"https://www.w3.org/TR/largest-contentful-paint/\">Largest Contentful Paint</a>, <a href=\"https://www.w3.org/TR/paint-timing/#the-paint-timing-mixin\">Paint Timing Mixin</a>.</li>\n<li><a href=\"https://github.com/WICG/container-timing\">Explainer</a>.</li>\n<li><a href=\"https://github.com/WICG/container-timing/issues/14\">Shadow DOM Handling discussion</a>.</li>\n<li><a href=\"https://issues.chromium.org/489959278\">Optimizing hierarchy traversals</a>.</li>\n<li><a href=\"http://bit.ly/lifeofapixel\">Life of a Pixel</a> presentation about the Blink rendering pipeline by Steve Kobes.</li>\n</ul>\n",
			"date_published": "2026-03-26T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/",
			"url": "https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/",
			"title": "Container Timing: measuring web components performance",
			"content_html": "<p>Over the last year, as part of the collaboration between <a href=\"https://www.igalia.com\">Igalia</a> and <a href=\"https://www.techatbloomberg.com/\">Bloomberg</a> to improve web performance observability, I worked on a new web performance API: <strong>Container Timing</strong>. This standard aims to make component-level performance measurement as easy as page-level metrics like LCP and FCP.</p>\n<p>My focus has been writing the native implementation in Chromium, which is now available behind a feature flag.</p>\n<p>In this post, I will explain why this API is needed, how it works, and how you can experiment with it today. In a follow-up post, I will dive deep into the implementation details within the Blink rendering engine.</p>\n<h2 id=\"the-problem-measuring-component-performance\" tabindex=\"-1\">The problem: measuring component performance <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<p>We currently use <a href=\"https://web.dev/articles/lcp\">Largest Contentful Paint (LCP)</a> and <a href=\"https://web.dev/articles/fcp\">First Contentful Paint (FCP)</a> to measure web page loading performance. Both metrics are page-scoped, meaning they evaluate the user perceived load speed for full page.</p>\n<p>The <a href=\"https://w3c.github.io/element-timing/\">Element Timing API</a> shifts the focus to individual DOM elements. By targetting specific elements, like hero images or a headers, we can measure their specific rendering performance independent of the rest of the page.</p>\n<p>However, modern web development is component-based. Developers build complex widgets (as grids, charts, feeds or panels) that are made of many elements. 
It is not trivial to understand the performance of those components:</p>\n<ul>\n<li>LCP may not be useful, as another large image painting could delay it.</li>\n<li>Measuring a web component with Element Timing may require instrumenting all the significant elements one by one.</li>\n</ul>\n<p><img src=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/images/container_timing_problem.png\" alt=\"A representation of a news web page, where the scope of LCP is the full web page, and Element Timing is a specific element, but we want to measure the latest news feed widget.\" class=\"dark-invert\"></p>\n<h2 id=\"the-solution-container-timing\" tabindex=\"-1\">The solution: Container Timing <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<p>This is where <strong>Container Timing</strong> comes in! With the new specification, a web developer can mark subtrees of the DOM as “containers”. 
Then, the browser provides performance entries aggregating the paint times of that subtree.</p>\n<p><img src=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/images/container_timing_solution.png\" alt=\"A representation of a news web page, where aggregating the paints of the children of the news feed widget lets us know when its painting has finished.\" class=\"dark-invert\"></p>\n<p>This way, we can answer: “when did a specific component finish painting its content?”.</p>\n<p>Some examples:</p>\n<ul>\n<li><strong>Breaking down the contributors to the initial page load</strong>: with <strong>Container Timing</strong> we can focus on the components that are most relevant to the user experience.</li>\n<li><strong>Single page application navigation</strong>: when a soft navigation shows a new component on the screen, we can obtain painting information for it.</li>\n<li><strong>Lazy-loaded components</strong>: Tracking when a widget that loads below the fold is fully visible.</li>\n<li><strong>Third-party content</strong>: Monitoring the performance of ads or embedded widgets.</li>\n</ul>\n<p>You just need to add the new attribute <code>containertiming</code> to the top element of the subtree. When you add it to an HTML element, the browser will track all the painting updates of that element and its descendants.</p>\n<p>What happens under the hood? The browser will start monitoring the rendering pipeline for paints that contribute to representing the subtree. When a new frame is painted, if it covers new areas of that subtree, the browser reports a performance entry showing the increase in painted area. It is similar to LCP, but for a specific subtree!</p>\n<h2 id=\"how-to-use-container-timing\" tabindex=\"-1\">How to use Container Timing? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<p>Using the API is straightforward. 
First, mark the containers you want to track in HTML:</p>\n<pre class=\"language-html\" tabindex=\"0\"><code class=\"language-html\"><span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;</span>div</span> <span class=\"token attr-name\">id</span><span class=\"token attr-value\"><span class=\"token punctuation attr-equals\">=</span><span class=\"token punctuation\">\"</span>my-widget<span class=\"token punctuation\">\"</span></span> <span class=\"token attr-name\">containertiming</span><span class=\"token attr-value\"><span class=\"token punctuation attr-equals\">=</span><span class=\"token punctuation\">\"</span>widget-load<span class=\"token punctuation\">\"</span></span><span class=\"token punctuation\">></span></span><br>  <span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;</span>img</span> <span class=\"token attr-name\">src</span><span class=\"token attr-value\"><span class=\"token punctuation attr-equals\">=</span><span class=\"token punctuation\">\"</span>graph.png<span class=\"token punctuation\">\"</span></span> <span class=\"token punctuation\">/></span></span><br>  <span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;</span>p</span><span class=\"token punctuation\">></span></span>Loading data...<span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;/</span>p</span><span class=\"token punctuation\">></span></span><br><span class=\"token tag\"><span class=\"token tag\"><span class=\"token punctuation\">&lt;/</span>div</span><span class=\"token punctuation\">></span></span></code></pre>\n<p>Then, use a <code>PerformanceObserver</code> to listen for container entries:</p>\n<pre class=\"language-javascript\" tabindex=\"0\"><code class=\"language-javascript\"><span class=\"token keyword\">const</span> observer <span class=\"token operator\">=</span> <span class=\"token keyword\">new</span> <span class=\"token 
class-name\">PerformanceObserver</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token parameter\">list</span><span class=\"token punctuation\">)</span> <span class=\"token operator\">=></span> <span class=\"token punctuation\">{</span><br>  list<span class=\"token punctuation\">.</span><span class=\"token function\">getEntries</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">.</span><span class=\"token function\">forEach</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token parameter\">entry</span><span class=\"token punctuation\">)</span> <span class=\"token operator\">=></span> <span class=\"token punctuation\">{</span><br>    console<span class=\"token punctuation\">.</span><span class=\"token function\">log</span><span class=\"token punctuation\">(</span><span class=\"token template-string\"><span class=\"token template-punctuation string\">`</span><span class=\"token string\">Container '</span><span class=\"token interpolation\"><span class=\"token interpolation-punctuation punctuation\">${</span>entry<span class=\"token punctuation\">.</span>identifier<span class=\"token interpolation-punctuation punctuation\">}</span></span><span class=\"token string\">' painted.</span><span class=\"token template-punctuation string\">`</span></span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br>    console<span class=\"token punctuation\">.</span><span class=\"token function\">log</span><span class=\"token punctuation\">(</span><span class=\"token template-string\"><span class=\"token template-punctuation string\">`</span><span class=\"token string\">Time: </span><span class=\"token interpolation\"><span class=\"token interpolation-punctuation punctuation\">${</span>entry<span class=\"token punctuation\">.</span>startTime<span class=\"token 
interpolation-punctuation punctuation\">}</span></span><span class=\"token template-punctuation string\">`</span></span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br>    console<span class=\"token punctuation\">.</span><span class=\"token function\">log</span><span class=\"token punctuation\">(</span><span class=\"token template-string\"><span class=\"token template-punctuation string\">`</span><span class=\"token string\">Size: </span><span class=\"token interpolation\"><span class=\"token interpolation-punctuation punctuation\">${</span>entry<span class=\"token punctuation\">.</span>size<span class=\"token interpolation-punctuation punctuation\">}</span></span><span class=\"token template-punctuation string\">`</span></span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span> <span class=\"token comment\">// The area painted</span><br>  <span class=\"token punctuation\">}</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br><span class=\"token punctuation\">}</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br><br>observer<span class=\"token punctuation\">.</span><span class=\"token function\">observe</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">{</span> <span class=\"token literal-property property\">type</span><span class=\"token operator\">:</span> <span class=\"token string\">\"container\"</span><span class=\"token punctuation\">,</span> <span class=\"token literal-property property\">buffered</span><span class=\"token operator\">:</span> <span class=\"token boolean\">true</span> <span class=\"token punctuation\">}</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span></code></pre>\n<p>When the web contents load, new Performance entries will be emitted with the container updates.</p>\n<p>Which entry will be interesting? 
The API lets you choose what best fits your needs! Some ideas:</p>\n<ul>\n<li>The most important entry could be the last one: the one that increased the painted area for the last time. Something similar to LCP.</li>\n<li>Or maybe the last one that contributed a significant size increase?</li>\n<li>Or the last one before a user interaction?</li>\n</ul>\n<h2 id=\"a-native-implementation-for-chromium\" tabindex=\"-1\">A native implementation for Chromium <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<p>In the initial steps of the specification, Jason Williams wrote a <a href=\"https://github.com/bloomberg/container-timing/tree/main/polyfill\">polyfill</a> that worked on top of Element Timing. This was very useful for understanding and polishing the kind of information the specification could provide. However, it had its own performance impact.</p>\n<div class=\"markdown-alert markdown-alert-warning\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-alert mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M6.457 1.047c.659-1.234 2.427-1.234 3.086 0l6.082 11.378A1.75 1.75 0 0 1 14.082 15H1.918a1.75 1.75 0 0 1-1.543-2.575Zm1.763.707a.25.25 0 0 0-.44 0L1.698 13.132a.25.25 0 0 0 .22.368h12.164a.25.25 0 0 0 .22-.368Zm.53 3.996v2.5a.75.75 0 0 1-1.5 0v-2.5a.75.75 0 0 1 1.5 0ZM9 11a1 1 0 1 1-2 0 1 1 0 0 1 2 0Z\"></path></svg>Deprecation Notice:</p><p>The polyfill is now deprecated and no longer maintained, as the native API cannot be fully replicated using Element Timing. Please use the native implementation for accurate results.</p>\n</div>\n<p>So I started a native implementation in Chromium. The main idea was to work on top of the already existing implementation for Element Timing, and add the remaining bits.</p>\n<p>In my next blog post I will go through the implementation details. 
But, for this post, it is relevant to state that the goals of this native implementation were:</p>\n<ul>\n<li>Minimizing the overhead: it should be almost zero when elements are not interesting to <strong>Container Timing</strong>, and very fast and light when paints are relevant.</li>\n<li>Reusing as much as possible of the already existing logic for Element Timing.</li>\n</ul>\n<p>The native implementation has landed and is available in Chromium 144+, but still behind the <code>ContainerTiming</code> feature flag.</p>\n<p>You can experiment with this feature locally by passing the following flag to Chromium at startup:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">chrome --enable-blink-features<span class=\"token operator\">=</span>ContainerTiming</code></pre>\n<p>Or you can just enable the “Experimental Web Platform features” flag in <code>chrome://flags</code>.</p>\n<h2 id=\"upcoming-trials\" tabindex=\"-1\">Upcoming trials <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<p>Now it is time to collect feedback from actual web developers.</p>\n<p>We have already presented the specification at several conferences (such as <a href=\"https://www.igalia.com/downloads/slides/josedapenapaz-containertiming.pdf\">BlinkOn 20</a> or <a href=\"https://perfnow.nl/2024/\">Performance.now() 2024</a>), and discussions are ongoing in the <a href=\"https://www.w3.org/webperf/\">Web Performance Working Group</a>.</p>\n<p>We just <a href=\"https://groups.google.com/a/chromium.org/g/blink-dev/c/FnM3lweVssM/m/eVhhCtG5AQAJ\">announced the Dev Trial in the blink-dev mailing list</a>! The feature is now officially ready for testing.</p>\n<p>What’s next? 
We are also preparing an Origin Trial, which will allow developers to test the specification in production for a subset of their users.</p>\n<p>If you want to provide feedback, we are collecting it in the explainer <a href=\"https://github.com/bloomberg/container-timing/issues\">ticket tracker</a>.</p>\n<h2 id=\"wrapping-up\" tabindex=\"-1\">Wrapping up <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<p>With Container Timing, you will be able to measure paints at the web component level, filling a significant gap in the web performance monitoring landscape.</p>\n<p>If you have struggled to find out the ready time of your widgets, just try it! It is available, behind the <code>ContainerTiming</code> feature flag, in Chromium Stable today.</p>\n<p>And stay tuned! In a follow-up post, I will go through the native implementation details in Chromium.</p>\n<h2 id=\"thanks\" tabindex=\"-1\">Thanks! <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<p>This work has been done as part of the collaboration between <a href=\"https://techatbloomberg.com\">Bloomberg</a> and <a href=\"https://www.igalia.com\">Igalia</a>. 
Thanks!</p>\n<p><a href=\"https://www.igalia.com\"><picture>\n<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://blogs.igalia.com/dape/img/igalia-logo-white-text.svg\">\n<img src=\"https://blogs.igalia.com/dape/img/igalia_-_500px_-_RGB_-_Feb23-580x210.png\" alt=\"Igalia\">\n</picture></a> <a href=\"https://techatbloomberg.com\"><img src=\"https://blogs.igalia.com/dape/img/Bloomberg-logo-580x117.png\" alt=\"Bloomberg\" class=\"dark-invert\"></a></p>\n<h2 id=\"references\" tabindex=\"-1\">References <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2026/02/10/container-timing-measuring-web-components-performance/\">#</a></h2>\n<ul>\n<li><a href=\"https://github.com/bloomberg/container-timing\">Container Timing explainer</a></li>\n<li><a href=\"https://bloomberg.github.io/container-timing/\">Container Timing specification draft</a></li>\n<li><a href=\"https://github.com/bloomberg/container-timing/issues\">Ticket tracker for specification discussion</a></li>\n<li><a href=\"https://chromestatus.com/feature/5110962817073152\">Chrome status feature: Container Timing</a></li>\n<li><a href=\"https://groups.google.com/a/chromium.org/g/blink-dev/c/FnM3lweVssM/m/eVhhCtG5AQAJ\">Container Timing ready for testing announcement in blink-dev</a></li>\n<li><a href=\"https://issues.chromium.org/382422286\">Container Timing native implementation Chromium issue</a></li>\n</ul>\n",
			"date_published": "2026-02-10T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/",
			"url": "https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/",
			"title": "Maintaining Chromium downstream: how can upstream help?",
			"content_html": "<p>As I often write, maintaining a downstream of Chromium is not easy. A lot of effort falls on the shoulders of the teams embedding Chromium, or creating products on top of the upstream Chromium project.</p>\n<p>We covered this in the previous chapters of my <a href=\"https://blogs.igalia.com/dape/tags/downstream-maintenance/\">series of blog posts about maintaining Chromium downstreams</a>. Now, this post is going to be a bit different.</p>\n<p>I start with a question:</p>\n<blockquote>\n<p>What can upstream Chromium do to help downstreams?</p>\n</blockquote>\n<p>This very same question was discussed in the <a href=\"https://webengineshackfest.org/\">Web Engines Hackfest</a> breakout session that originated most of these posts. In this blog post, I will share some of the most interesting answers that came up in that session.</p>\n<h2 id=\"better-componentization\" tabindex=\"-1\">Better componentization <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/\">#</a></h2>\n<p>One of the ideas was to move code around more aggressively to make it easier to reuse. Specifically, refactoring to move more and more code from <code>//chrome</code> to <code>//components</code>.</p>\n<p>Chromium has gone a long way in that direction. Each of these changes allows downstreams to directly use only the components they need, instead of working on top of <code>//chrome</code>. But there is still room for improvement.</p>\n<p>Some parts of <code>//chrome</code> have still not been refactored and could be very useful, especially for downstreams shipping a browser. 
Some examples:</p>\n<ul>\n<li>Tabs implementation.</li>\n<li>Profiles.</li>\n<li>Synchronization.</li>\n</ul>\n<h2 id=\"improve-extensibility\" tabindex=\"-1\">Improve extensibility <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/\">#</a></h2>\n<p>In the same direction, it was considered important to support easier ways to provide alternative implementations and to add custom software components.</p>\n<p>Some examples:</p>\n<ul>\n<li>Making it easier to support Chrome extensions without using <code>//chrome</code> would allow implementing new browsers without bundling the Chromium UI.</li>\n<li>Going further in the direction of what has been done with <a href=\"https://chromium.googlesource.com/chromium/src/+/lkgr/docs/ozone_overview.md\">Ozone</a>: the Chromium platform abstraction layer that helps implement support for a variety of platforms (including Linux and X11). Similar steps could be taken at other levels to improve OS integration (system hardware encryption, accelerated video codecs, system IPC, and so on).</li>\n</ul>\n<h2 id=\"downstream-advocacy\" tabindex=\"-1\">Downstream advocacy <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/\">#</a></h2>\n<p>A very interesting proposal was to create the role of downstream advocates in the Chrome community.</p>\n<p>They would act as an entry point for downstream projects wanting to interact with the Chrome community and be an official communication channel for downstreams to report their needs.</p>\n<p>This would also increase awareness of the different ways Chromium is used by downstreams.</p>\n<p>Today there are two channels that are somewhat similar: the <a href=\"https://groups.google.com/a/chromium.org/g/embedder-dev\"><em>Chromium Embedders</em> mailing list</a> and the <code>#embedders</code> Slack channel.</p>\n<h2 id=\"a-two-way-problem\" 
tabindex=\"-1\">A two-way problem <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/\">#</a></h2>\n<p>So far, three different problems raised by downstreams have been covered, and they seem like fair requests to the Chromium community.</p>\n<p>But there is also work to do on the downstream side.</p>\n<p>Can downstreams contribute more of their work to upstream? Not only in code, but also in all the maintenance activities.</p>\n<p>There is also code written for very specific downstream needs that could land upstream, as long as it does not become a burden to the common project. That means ownership and enough work bandwidth need to be in place.</p>\n<h2 id=\"where-are-we-now\" tabindex=\"-1\">Where are we now? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/\">#</a></h2>\n<p>There has been a major change in the Chromium community: the creation of the <a href=\"https://www.linuxfoundation.org/supporters-of-chromium-based-browsers\">Supporters of Chromium Based Browsers</a>. What does it mean for embedders? Could it be a good way to channel requirements from the different downstream projects?</p>\n<p>Two years after the Web Engines Hackfest session, we can see some improvements. But the general question is still valid:</p>\n<blockquote>\n<p>What can upstream Chromium do to help downstreams?</p>\n</blockquote>\n<h2 id=\"the-last-post\" tabindex=\"-1\">The last post <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/\">#</a></h2>\n<p>The next post in this series will be the last one. 
It will cover some typical problems downstream projects are facing today.</p>\n<h2 id=\"references\" tabindex=\"-1\">References <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/12/11/maintaining-chromium-downstream-how-can-upstream-help/\">#</a></h2>\n<ul>\n<li><a href=\"https://webengineshackfest.org/2023/\">Web Engines Hackfest 2023</a> - <a href=\"https://github.com/Igalia/webengineshackfest/issues/9\">Maintaining Chromium downstream breakout session</a>.</li>\n<li><a href=\"https://www.linuxfoundation.org/supporters-of-chromium-based-browsers\">Supporters of Chromium-Based Browsers</a>.</li>\n</ul>\n",
			"date_published": "2025-12-11T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/",
			"url": "https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/",
			"title": "trace-chrome: easy remote tracing of Chromium",
			"content_html": "<p>As part of my performance analysis work for LGE webOS, I often had to capture Chrome traces from an embedded device. So, to make it convenient, I wrote a simple command line helper to obtain the traces remotely, named <a href=\"https://github.com/jdapena/trace-chrome\">trace-chrome</a>.</p>\n<p>In this blog post, I will explain why it is useful, and how to use it.</p>\n<h3 id=\"tl-dr\" tabindex=\"-1\">TL;DR <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h3>\n<p>If you want to read directly about the tool, jump to the <a href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">how to use section</a>.</p>\n<h2 id=\"tracing-chromium-remotely\" tabindex=\"-1\">Tracing Chromium remotely <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h2>\n<p>Chromium provides an infrastructure for capturing static tracing data, based on <a href=\"https://perfetto.dev/\">Perfetto</a>. In this blog post I will not go through its architecture or implementation, but focus on how to start and stop a trace capture, and how to then fetch the results.</p>\n<p>Chrome/Chromium provides user interfaces for capturing and analyzing traces locally. This can be done by opening a tab and pointing it to the <code>chrome://tracing</code> URL.</p>\n<p>The tracing capture UI is quite powerful, and completely implemented using web technologies. 
This has a downside, though: running the capture UI introduces a significant overhead on several resources (CPU, memory, GPU, …).</p>\n<p>This overhead may be even more significant when tracing Chromium or any other Chromium-based web runtime on an embedded device, where we have CPU, storage and memory constraints.</p>\n<p>Chromium does a great job of minimizing the overhead, by postponing the trace processing as much as possible, and providing a minimal UI while the capture is ongoing. But it may still be too much.</p>\n<p>How to avoid this problem?</p>\n<ul>\n<li>The capture UI should not run on the system we are tracing. We can run the UI on a different computer to capture the trace.</li>\n<li>The same goes for storage: we want it to happen on a different computer.</li>\n</ul>\n<p>The solution for both is tracing remotely. Both the user interface for controlling the recording, and the recording storage, happen on a different computer.</p>\n<h2 id=\"setting-up-remote-tracing-support\" tabindex=\"-1\">Setting up remote tracing support <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h2>\n<p>First, some nomenclature I will use:</p>\n<ul>\n<li><em>Target device</em>: the one that runs the Chromium web runtime instance we are going to trace.</li>\n<li><em>Host device</em>: the device that will run the tracing UI, to configure, start and stop the recording, and to explore the tracing results.</li>\n</ul>\n<p>OK, now we know we want to remotely trace the Chromium instance on the <em>target</em> device. How can we do that? First, we need to connect our tracing tools running on the <em>host</em> to the Chromium instance on the <em>target</em> device.</p>\n<p>This is done using the remote debugging port: a multi-purpose HTTP port provided by Chromium. 
This port is used not only for tracing; it also offers access to the Chrome Developer Tools.</p>\n<p>The Chromium remote debugging port is disabled by default, but it can be enabled using the command line switch <code>--remote-debugging-port=PORT</code> in the <em>target</em> Chromium instance. This will open an HTTP port on the <code>localhost</code> interface that can be used to connect.</p>\n<p>Why <code>localhost</code>? Because this interface does not provide any authentication or encryption, so it is unsafe. It is the user’s responsibility to provide some security (e.g. by setting up an SSH tunnel between the <em>host</em> and the <em>target</em> device to connect to the remote debugging port).</p>\n<h2 id=\"capturing-traces-with-chrome-inspect\" tabindex=\"-1\">Capturing traces with chrome://inspect <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h2>\n<p>The Chromium browser provides a solution for tracing remotely: just open the URL <code>chrome://inspect</code> on the <em>host</em> device. It provides this user interface:</p>\n<p><img src=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/images/chrome-inspect-1.png\" alt=\"\"></p>\n<p>First, the checkbox for <em>Discover network targets</em> needs to be set.</p>\n<p>Then press the <em>Configure…</em> button to set the list of IP addresses and ports where we expect <em>target</em> remote debugging ports to be.</p>\n<p>Do not forget to add to the list the endpoint that is accessible from the <em>host</em> device, i.e. 
in the case of an SSH tunnel from the <em>host</em> device to the <em>target</em> device port, it needs to be the <em>host</em> side of the tunnel.</p>\n<p>If we set up the <em>host</em> side of the tunnel at port <code>10101</code>, we will see this:</p>\n<p><img src=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/images/chrome-inspect-2.png\" alt=\"\"></p>\n<p>Then, just pressing the <em>trace</em> link will show the Chromium tracing UI, but connected to the <em>target</em> device Chromium instance.</p>\n<h2 id=\"capturing-traces-with-trace-chrome\" tabindex=\"-1\">Capturing traces with <code>trace-chrome</code> <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h2>\n<p>Over the last 8 years, I have been involved quite often in exploring the performance of Chromium on embedded devices, specifically for the LGE webOS web stack. In this problem space, Chromium tracing capabilities are handy, providing a developer-oriented view of different metrics, including the time spent running known operations in specific threads.</p>\n<p>At that time I did not know about <code>chrome://inspect</code>, so I did not have an easy way to collect Chromium traces from a different machine. This is important, as one performance analysis principle is that collecting the information should be as lightweight as possible. Running the tracing UI in the same Chromium instance that is being analyzed goes against that principle.</p>\n<p>The solution? 
I wrote a very simple NodeJS script that allows capturing a Chromium trace from the command line.</p>\n<p>This is convenient for several reasons:</p>\n<ul>\n<li>No need to launch the full tracing UI.</li>\n<li>As the UI is completely detached from the capture step, and the trace is recorded to a file without an additional step, we are not affected by the instability of the tracing UI handling the captured trace (not usually a problem, but it happens).</li>\n<li>Easier to repeat tests for specific tracing categories, instead of manually enabling them in the tracing UI.</li>\n</ul>\n<p>The script just provides an easy-to-use command line interface to the already existing <a href=\"https://www.npmjs.com/package/chrome-remote-interface\"><code>chrome-remote-interface</code></a> NodeJS module.</p>\n<p>The project is open source, and available at <a href=\"https://github.com/jdapena/trace-chrome\">github.com/jdapena/trace-chrome</a>.</p>\n<h3 id=\"how-to-use-trace-chrome\" tabindex=\"-1\">How to use <code>trace-chrome</code> <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h3>\n<p>Now, the instructions to use <code>trace-chrome</code>. The tool depends on having a working NodeJS environment on the <em>host</em>.</p>\n<h4 id=\"installation\" tabindex=\"-1\">Installation <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<p>First, clone the GitHub repository on the <em>host</em>:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">git</span> clone https://github.com/jdapena/trace-chrome</code></pre>\n<p>Then, install the dependencies. 
To do this, you need to have a working NodeJS environment.</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token builtin class-name\">cd</span> trace-chrome<br><span class=\"token function\">npm</span> <span class=\"token function\">install</span></code></pre>\n<h4 id=\"running\" tabindex=\"-1\">Running <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<p>Now it is possible to try the tool. To get the command line help just run:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">$ bin/trace-chrome <span class=\"token parameter variable\">--help</span><br>Usage: trace-chrome <span class=\"token punctuation\">[</span>options<span class=\"token punctuation\">]</span><br><br>Options:<br>  -H, <span class=\"token parameter variable\">--host</span> <span class=\"token operator\">&lt;</span>host<span class=\"token operator\">></span>                        Remote debugging protocol <span class=\"token function\">host</span> <span class=\"token punctuation\">(</span>default: <span class=\"token string\">\"localhost\"</span><span class=\"token punctuation\">)</span><br>  -p, <span class=\"token parameter variable\">--port</span> <span class=\"token operator\">&lt;</span>port<span class=\"token operator\">></span>                        Remote debugging protocool port <span class=\"token punctuation\">(</span>default: <span class=\"token string\">\"9876\"</span><span class=\"token punctuation\">)</span><br>  -s, <span class=\"token parameter variable\">--showcategories</span>                     Show categories<br>  -O, <span class=\"token parameter variable\">--output</span> <span class=\"token operator\">&lt;</span>path<span class=\"token operator\">></span>                      Output <span class=\"token function\">file</span> <span class=\"token punctuation\">(</span>default: <span class=\"token 
string\">\"\"</span><span class=\"token punctuation\">)</span><br>  -c, <span class=\"token parameter variable\">--categories</span> <span class=\"token operator\">&lt;</span>categories<span class=\"token operator\">></span>            Set categories <span class=\"token punctuation\">(</span>default: <span class=\"token string\">\"\"</span><span class=\"token punctuation\">)</span><br>  -e, <span class=\"token parameter variable\">--excludecategories</span> <span class=\"token operator\">&lt;</span>categories<span class=\"token operator\">></span>     Exclude categories <span class=\"token punctuation\">(</span>default: <span class=\"token string\">\"\"</span><span class=\"token punctuation\">)</span><br>  <span class=\"token parameter variable\">--systrace</span>                               Enable systrace<br>  <span class=\"token parameter variable\">--memory_dump_mode</span> <span class=\"token operator\">&lt;</span>mode<span class=\"token operator\">></span>                Memory dump mode <span class=\"token punctuation\">(</span>default: <span class=\"token string\">\"\"</span><span class=\"token punctuation\">)</span><br>  <span class=\"token parameter variable\">--memory_dump_interval</span> <span class=\"token operator\">&lt;</span>interval_in_ms<span class=\"token operator\">></span>  Memory dump interval <span class=\"token keyword\">in</span> ms <span class=\"token punctuation\">(</span>default: <span class=\"token number\">2000</span><span class=\"token punctuation\">)</span><br>  <span class=\"token parameter variable\">--dump_memory_at_stop</span><br>  -h, <span class=\"token parameter variable\">--help</span>                               display <span class=\"token builtin class-name\">help</span> <span class=\"token keyword\">for</span> <span class=\"token builtin class-name\">command</span></code></pre>\n<p>To connect to a running Chromium instance remote debugging port, the <code>--host</code> and <code>--port</code> parameters need to be 
used. In the examples I am going to use the port <code>9999</code> and the host <code>localhost</code>.</p>\n<div class=\"markdown-alert markdown-alert-warning\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-alert mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M6.457 1.047c.659-1.234 2.427-1.234 3.086 0l6.082 11.378A1.75 1.75 0 0 1 14.082 15H1.918a1.75 1.75 0 0 1-1.543-2.575Zm1.763.707a.25.25 0 0 0-.44 0L1.698 13.132a.25.25 0 0 0 .22.368h12.164a.25.25 0 0 0 .22-.368Zm.53 3.996v2.5a.75.75 0 0 1-1.5 0v-2.5a.75.75 0 0 1 1.5 0ZM9 11a1 1 0 1 1-2 0 1 1 0 0 1 2 0Z\"></path></svg>Warning</p><p>Note that, in this case, the parameter <code>--host</code> refers to the network address of the remote debugging port access point. It is not referring to the <em>host</em> machine where we run the script.</p>\n</div>\n<h4 id=\"getting-the-tracing-categories\" tabindex=\"-1\">Getting the tracing categories <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<p>First, to check which tracing categories are available, we can use the option <code>--showcategories</code>:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">bin/trace-chrome <span class=\"token parameter variable\">--host</span> localhost <span class=\"token parameter variable\">--port</span> <span class=\"token number\">9999</span> <span class=\"token parameter variable\">--showcategories</span></code></pre>\n<p>We will obtain a list like this:</p>\n<pre><code>AccountFetcherService\nBlob\nCacheStorage\nCalculators\nCameraStream\n...\n</code></pre>\n<h4 id=\"recording-a-session\" tabindex=\"-1\">Recording a session <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<p>Now, the most important step: recording a Chromium trace. 
To do this, we will provide a list of categories (parameter <code>--categories</code>), and a file path to record the trace (parameter <code>--output</code>):</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">bin/trace-chrome <span class=\"token parameter variable\">--host</span> localhost <span class=\"token parameter variable\">--port</span> <span class=\"token number\">9999</span> <span class=\"token punctuation\">\\</span><br>  <span class=\"token parameter variable\">--categories</span> <span class=\"token string\">\"blink,cc,gpu,renderer.scheduler,sequence_manager,v8,toplevel,viz\"</span> <span class=\"token punctuation\">\\</span><br>  <span class=\"token parameter variable\">--output</span> js_and_rendering.json</code></pre>\n<p>This will start recording. To stop recording, just press <code>&lt;Ctrl&gt;+C</code>, and the trace will be transferred and stored to the provided file path.</p>\n<div class=\"markdown-alert markdown-alert-tip\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-light-bulb mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M8 1.5c-2.363 0-4 1.69-4 3.75 0 .984.424 1.625.984 2.304l.214.253c.223.264.47.556.673.848.284.411.537.896.621 1.49a.75.75 0 0 1-1.484.211c-.04-.282-.163-.547-.37-.847a8.456 8.456 0 0 0-.542-.68c-.084-.1-.173-.205-.268-.32C3.201 7.75 2.5 6.766 2.5 5.25 2.5 2.31 4.863 0 8 0s5.5 2.31 5.5 5.25c0 1.516-.701 2.5-1.328 3.259-.095.115-.184.22-.268.319-.207.245-.383.453-.541.681-.208.3-.33.565-.37.847a.751.751 0 0 1-1.485-.212c.084-.593.337-1.078.621-1.489.203-.292.45-.584.673-.848.075-.088.147-.173.213-.253.561-.679.985-1.32.985-2.304 0-2.06-1.637-3.75-4-3.75ZM5.75 12h4.5a.75.75 0 0 1 0 1.5h-4.5a.75.75 0 0 1 0-1.5ZM6 15.25a.75.75 0 0 1 .75-.75h2.5a.75.75 0 0 1 0 1.5h-2.5a.75.75 0 0 1-.75-.75Z\"></path></svg>Tip</p><p>Which categories to use? Good presets for certain problem scopes can be obtained in Chrome. 
Just open <code>chrome://tracing</code>, press the <em>Record</em> button, and play with the predefined settings. At the bottom you will see the list of categories to pass for each of them.</p>\n</div>\n<h4 id=\"opening-the-trace-file\" tabindex=\"-1\">Opening the trace file <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<p>Now that the tracing file has been obtained, it can be opened from Chrome or Chromium running on the <em>host</em>: load the URL <code>chrome://tracing</code> in a tab and press the <strong>Load</strong> button.</p>\n<p><img src=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/images/chrome-tracing-load.png\" alt=\"\"></p>\n<div class=\"markdown-alert markdown-alert-tip\"><p class=\"markdown-alert-title\"><svg class=\"octicon octicon-light-bulb mr-2\" viewBox=\"0 0 16 16\" version=\"1.1\" width=\"16\" height=\"16\" aria-hidden=\"true\"><path d=\"M8 1.5c-2.363 0-4 1.69-4 3.75 0 .984.424 1.625.984 2.304l.214.253c.223.264.47.556.673.848.284.411.537.896.621 1.49a.75.75 0 0 1-1.484.211c-.04-.282-.163-.547-.37-.847a8.456 8.456 0 0 0-.542-.68c-.084-.1-.173-.205-.268-.32C3.201 7.75 2.5 6.766 2.5 5.25 2.5 2.31 4.863 0 8 0s5.5 2.31 5.5 5.25c0 1.516-.701 2.5-1.328 3.259-.095.115-.184.22-.268.319-.207.245-.383.453-.541.681-.208.3-.33.565-.37.847a.751.751 0 0 1-1.485-.212c.084-.593.337-1.078.621-1.489.203-.292.45-.584.673-.848.075-.088.147-.173.213-.253.561-.679.985-1.32.985-2.304 0-2.06-1.637-3.75-4-3.75ZM5.75 12h4.5a.75.75 0 0 1 0 1.5h-4.5a.75.75 0 0 1 0-1.5ZM6 15.25a.75.75 0 0 1 .75-.75h2.5a.75.75 0 0 1 0 1.5h-2.5a.75.75 0 0 1-.75-.75Z\"></path></svg>Tip</p><p>The traces are completely standalone, so they can be loaded on any other computer without any additional artifact. 
This is useful, as those traces can be shared among developers or uploaded to a ticket tracker.</p>\n<p>But, if you want to do that, do not forget to compress it first with <code>gzip</code> to make the trace smaller. <code>chrome://tracing</code> can open the compressed traces directly.</p>\n</div>\n<h4 id=\"capturing-memory-infra-dumps\" tabindex=\"-1\">Capturing memory infra dumps <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<p>The script also supports periodic recording of the <code>memory-infra</code> system. This captures periodic dumps of the state of memory, with specific instrumentation in several categories.</p>\n<p>To use it, add the category <code>disabled-by-default-memory-infra</code>, and pass the following parameters to configure the capture:</p>\n<ul>\n<li><code>--memory_dump_mode &lt;background|light|detailed&gt;</code>: level of detail. <code>background</code> is designed to have almost no impact on execution, running very fast. <code>light</code> mode shows a few entries, while <code>detailed</code> is unlimited, and provides the most complete information.</li>\n<li><code>--memory_dump_interval</code>: the interval in milliseconds between snapshots.</li>\n</ul>\n<h4 id=\"using-npx\" tabindex=\"-1\">Using npx <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<p>For convenience, it is also possible to use <code>trace-chrome</code> with <code>npx</code>. 
It will install the script and the dependencies in the NPM cache, and run from them:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">npx jdapena/trace-chrome <span class=\"token parameter variable\">--help</span></code></pre>\n<h4 id=\"examples\" tabindex=\"-1\">Examples <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h4>\n<ol>\n<li>Record a trace of the categories for the <em>Web Developer</em> mode in Chrome Tracing UI:</li>\n</ol>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">bin/trace-chrome <span class=\"token parameter variable\">--host</span> HOST <span class=\"token parameter variable\">--port</span> PORT <span class=\"token punctuation\">\\</span> <br> <span class=\"token parameter variable\">--categories</span> <span class=\"token string\">\"blink,cc,netlog,renderer.scheduler,sequence_manager,toplevel,v8\"</span> <span class=\"token punctuation\">\\</span><br> <span class=\"token parameter variable\">--output</span> web_developer.json</code></pre>\n<ol start=\"2\">\n<li>Record memory infrastructure snapshots every 10 seconds:</li>\n</ol>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">bin/trace-chrome <span class=\"token parameter variable\">--host</span> HOST <span class=\"token parameter variable\">--port</span> PORT <span class=\"token punctuation\">\\</span><br> <span class=\"token parameter variable\">--categories</span> <span class=\"token string\">\"disabled-by-default-memory-infra\"</span> <span class=\"token parameter variable\">--memory_dump_mode</span> detailed <span class=\"token punctuation\">\\</span><br> <span class=\"token parameter variable\">--memory_dump_interval</span> <span class=\"token number\">10000</span> <span class=\"token parameter variable\">--output</span> memory_infra.json</code></pre>\n<h2 id=\"wrapping-up\" tabindex=\"-1\">Wrapping up <a 
class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h2>\n<p><code>trace-chrome</code> is a very simple tool, just providing a convenient command line interface for interacting with remote Chromium instances. It is especially useful for tracing embedded devices.</p>\n<p>It has been useful for me for years, on a number of platforms, from Windows to Linux, from desktop to low-end devices.</p>\n<p>Try it!</p>\n<h2 id=\"references\" tabindex=\"-1\">References <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/03/25/trace-chrome-easy-remote-tracing-of-chromium/\">#</a></h2>\n<ul>\n<li><a href=\"https://github.com/jdapena/trace-chrome\">trace-chrome in GitHub</a>.</li>\n<li><a href=\"https://www.npmjs.com/package/@jdapena/trace-chrome\">trace-chrome in the NPM registry</a>.</li>\n<li><a href=\"https://www.chromium.org/developers/how-tos/trace-event-profiling-tool/\">The Trace Event Profiling Tool</a>.</li>\n<li><a href=\"https://perfetto.dev/\">Perfetto</a> tool web page.</li>\n<li><a href=\"https://perfetto.dev/docs/\">Perfetto documentation</a>.</li>\n</ul>\n",
			"date_published": "2025-03-25T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/",
			"url": "https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/",
			"title": "Maintaining Chromium downstream: keeping it small",
			"content_html": "<p>Maintaining a downstream of Chromium is hard, because of the speed at which upstream moves, and how hard it is to keep our downstream up to date.</p>\n<p>A critical aspect is the size of what we build on top of Chromium: in other words, the size of our downstream. In this blog post I will review how to measure it, and the impact it has on the costs of maintaining a downstream.</p>\n<h2 id=\"maintaining-chromium-downstream-series\" tabindex=\"-1\">Maintaining Chromium downstream series <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h2>\n<p>Last year, I started a series of blog posts about the challenges, the organization and the implementation details of maintaining a project that is a downstream of <a href=\"https://www.chromium.org/Home/\">Chromium</a>. This is the third blog post in the series.</p>\n<p>The previous posts were:</p>\n<ul>\n<li><a href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">Why downstream?</a>: why is it necessary to create downstream forks of Chromium? And why use Chromium in particular?</li>\n<li><a href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">Update strategies</a>: when to update? Is it better to merge or rebase? How can automation help?</li>\n</ul>\n<h2 id=\"measuring-the-size-of-a-downstream\" tabindex=\"-1\">Measuring the size of a downstream <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h2>\n<p>But, first… What do I mean by the <em>size</em> of the downstream? I am interested in a definition that can be used as a metric, something we can measure and track. 
A number that lets us know whether the downstream is growing or shrinking, and measure whether a change has an impact on it.</p>\n<p>The rough idea is: the bigger the downstream is, the more complex it is to maintain. I will provide a few metrics that can be used for this purpose.</p>\n<h4 id=\"delta\" tabindex=\"-1\">Delta <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h4>\n<p>The most obvious metric is the <em>delta</em>, the difference between upstream Chromium and the downstream. For this, and assuming the downstream uses Git, the definition I use is essentially the result of this command:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">git</span> <span class=\"token function\">diff</span> <span class=\"token parameter variable\">--shortstat</span> BASELINE<span class=\"token punctuation\">..</span>DOWNSTREAM</code></pre>\n<p><code>BASELINE</code> is a commit reference that represents the pure upstream repository status our downstream is based on (our <em>baseline</em>). <code>DOWNSTREAM</code> is the commit reference we want to compare the <em>baseline</em> to.</p>\n<p>As a recommendation, it is useful to maintain tags or branches in our downstream repository that strictly represent the baseline. This way we can use diff tools to compute our delta more easily.</p>\n<p>This command is going to return three values:</p>\n<ul>\n<li>The number of files that have changed.</li>\n<li>The number of lines that were added.</li>\n<li>The number of lines that were removed.</li>\n</ul>\n<p>We will be mostly interested in tracking the number of lines added and removed.</p>\n<p>This definition is interesting as it gives an idea of the number of lines of code that we need to maintain. It may not reflect the full amount, as some files are maintained outside the Chromium repository. 
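</p>
<p>As a toy sketch of the whole flow (the repository contents, file names, and commit messages below are made up for illustration), a scratch repository with a <code>baseline</code> tag is enough to see the metric in action:</p>

```shell
#!/bin/sh
# Sketch: measure the "delta" of a downstream against a tagged baseline.
# Everything here (paths, messages) is hypothetical.
set -e
REPO=$(mktemp -d)
cd "$REPO"
git init -q
GIT="git -c user.email=dev@example.com -c user.name=dev"

# Pretend this commit is the pure upstream snapshot.
echo "upstream line" > content.cc
$GIT add content.cc
$GIT commit -qm "upstream snapshot"
git tag baseline          # strictly represents the upstream status

# A downstream change on top of it.
echo "downstream line" >> content.cc
$GIT commit -qam "downstream change"

# The delta: files changed, lines added, lines removed.
DELTA=$(git diff --shortstat baseline..HEAD)
echo "$DELTA"
```

The same `git diff --shortstat` invocation works unchanged against a real downstream checkout, with `baseline..HEAD` replaced by the actual references.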
Aggregating these numbers with those of other repositories changed or added to the build could be useful.</p>\n<p>Another interesting thing about this approach is that we can measure the delta of specific paths in the repository. For instance, measuring the <em>delta</em> of the <code>content/</code> path is as easy as:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">git</span> <span class=\"token function\">diff</span> <span class=\"token parameter variable\">--shortstat</span> BASELINE<span class=\"token punctuation\">..</span>DOWNSTREAM content/</code></pre>\n<h4 id=\"modifying-delta\" tabindex=\"-1\">Modifying delta <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h4>\n<p>The regular <em>delta</em> definition we considered has a problem: all line changes have the same weight. But, when we update our baseline, a big part of the complexity comes from the conflicts found when rebasing or merging.</p>\n<p>So, I am introducing a new definition, the <em>modifying delta</em>: the changes between the baseline and the downstream that affect upstream lines. 
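</p>
<p>A quick way to see the difference between the two metrics is a scratch repository where the downstream both modifies an upstream file and adds a brand new one (all names below are made up): the plain delta counts both files, while the filtered variant counts only the modification.</p>

```shell
#!/bin/sh
# Sketch: plain delta vs. "modifying" delta in a toy repository.
set -e
REPO=$(mktemp -d)
cd "$REPO"
git init -q
GIT="git -c user.email=dev@example.com -c user.name=dev"

echo "upstream line" > upstream_file.cc
$GIT add upstream_file.cc
$GIT commit -qm "upstream snapshot"
git tag baseline

echo "patched line" > upstream_file.cc    # modifies an upstream file
echo "new code" > downstream_only.cc      # added only by the downstream
$GIT add .
$GIT commit -qm "downstream changes"

PLAIN=$(git diff --shortstat baseline..HEAD)
MODIFYING=$(git diff --diff-filter=CDMR --shortstat baseline..HEAD)
echo "plain:     $PLAIN"        # counts both files
echo "modifying: $MODIFYING"    # counts only the modified upstream file
```

<p>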
In this case, we completely ignore any file added only by the downstream, as that is not going to create conflicts.</p>\n<p>In Git, we can use <a href=\"https://git-scm.com/docs/git-diff#Documentation/git-diff.txt---diff-filterACDMRTUXB82308203\">filters</a> for that purpose:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">git</span> <span class=\"token function\">diff</span> --diff-filter<span class=\"token operator\">=</span>CDMR <span class=\"token parameter variable\">--shortstat</span> BASELINE<span class=\"token punctuation\">..</span>DOWNSTREAM</code></pre>\n<p>This will only count these changes:</p>\n<ul>\n<li><code>M</code>: changes affecting existing files.</li>\n<li><code>R</code>: files that were renamed.</li>\n<li><code>C</code>: files that were copied.</li>\n<li><code>D</code>: files that were deleted.</li>\n</ul>\n<p>So, these numbers are going to more accurately represent which parts of our <em>delta</em> can conflict with the changes coming from upstream when we rebase or merge.</p>\n<p>Tracking the modifying delta, and reorganizing the project to reduce it, is usually a good strategy for reducing maintenance costs.</p>\n<h4 id=\"diffstat\" tabindex=\"-1\">diffstat <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h4>\n<p>An issue we have with the Git diff stats is that they represent modified lines as a block of removed lines and another of added lines.</p>\n<p>Fortunately, we can use another tool, <a href=\"https://invisible-island.net/diffstat/\">Diffstat</a>, which will do a best effort to identify which lines are actually modified. It can be easily installed in your distribution of choice (e.g. 
the package <code>diffstat</code> in Debian/Ubuntu/Red Hat).</p>\n<p>This behavior is enabled with the parameter <code>-m</code>:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">git</span> <span class=\"token function\">diff</span> <span class=\"token punctuation\">..</span>.parameters<span class=\"token punctuation\">..</span>. <span class=\"token operator\">|</span> diffstat <span class=\"token parameter variable\">-m</span></code></pre>\n<p>This is the kind of output that is generated. On top of the typical <code>+</code> and <code>-</code>, we see <code>!</code> for the lines that were detected as modified.</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">$ <span class=\"token function\">git</span> show <span class=\"token operator\">|</span> diffstat <span class=\"token parameter variable\">-m</span><br> paint/timing/container_timing.cc        <span class=\"token operator\">|</span>    <span class=\"token number\">5</span> ++++<span class=\"token operator\">!</span><br> paint/timing/container_timing.h         <span class=\"token operator\">|</span>    <span class=\"token number\">1</span> +<br> timing/performance_container_timing.cc  <span class=\"token operator\">|</span>   <span class=\"token number\">20</span> ++++++++++++++++++<span class=\"token operator\">!</span><span class=\"token operator\">!</span><br> timing/performance_container_timing.h   <span class=\"token operator\">|</span>    <span class=\"token number\">5</span> +++++<br> timing/performance_container_timing.idl <span class=\"token operator\">|</span>    <span class=\"token number\">1</span> +<br> timing/window_performance.cc            <span class=\"token operator\">|</span>    <span class=\"token number\">4</span> ++<span class=\"token operator\">!</span><span class=\"token operator\">!</span><br> timing/window_performance.h             <span class=\"token operator\">|</span>    <span 
class=\"token number\">1</span> +<br> <span class=\"token number\">7</span> files changed, <span class=\"token number\">32</span> insertions<span class=\"token punctuation\">(</span>+<span class=\"token punctuation\">)</span>, <span class=\"token number\">5</span> modifications<span class=\"token punctuation\">(</span><span class=\"token operator\">!</span><span class=\"token punctuation\">)</span></code></pre>\n<p>Coloring is also available, with the parameter <code>-C</code>.</p>\n<p>Using <code>diffstat</code> gives a more accurate insight into both the <em>total delta</em> and the <em>modifying delta</em>.</p>\n<h4 id=\"tracking-deltas\" tabindex=\"-1\">Tracking deltas <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h4>\n<p>Now that we have the tools to produce numbers, we can track them over time to know whether our downstream is growing or shrinking.</p>\n<p>They can also be used to measure the impact of different strategies or changes on the downstream maintenance complexity.</p>\n<h4 id=\"other-metric-ideas\" tabindex=\"-1\">Other metric ideas <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h4>\n<p>But deltas are not the only tool to measure the complexity, especially regarding the effort of maintaining a downstream.</p>\n<p>I can enumerate just a few ideas that provide insight into different problems:</p>\n<ul>\n<li>Frequency of rebase/merge conflicts per path.</li>\n<li>Frequency of undetected build issues.</li>\n<li>Frequency and complexity of the regressions, weighed by the size of the patches fixing them.</li>\n</ul>\n<h2 id=\"relevant-changes-for-tracking-a-downstream\" tabindex=\"-1\">Relevant changes for tracking a downstream <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h2>\n<p>Let’s focus now on other 
factors, not always easily measurable, when we maintain a downstream project.</p>\n<h2 id=\"what-we-build-on-top-of-chromium\" tabindex=\"-1\">What we build on top of Chromium <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h2>\n<p>The complexity of a downstream, especially the part measured by the regular delta, is impacted heavily by what is built on top of Chromium.</p>\n<p>A full web browser is usually bigger, because it includes the required user experience, and many components that make up what we nowadays consider a browser: history, bookmarks, user profiles, secrets management…</p>\n<p>An application runtime for hybrid applications may just have minimal wrappers for integrating a web view, but then maybe a complex set of components for\neasing the integration with a native toolkit or a specific programming language.</p>\n<p>How much do you build on top of Chromium?</p>\n<ul>\n<li>Browsers are usually bigger pieces than runtimes.</li>\n<li>Hybrid application runtimes may have a big part related to the toolkit or other components.</li>\n</ul>\n<h3 id=\"what-we-depend-on\" tabindex=\"-1\">What we depend on <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h3>\n<p>For maintenance complexity, the set of boundaries and dependencies is as important as what we build on top:</p>\n<ul>\n<li>How many upstream components are we using?</li>\n<li>What kind of APIs are provided?</li>\n<li>Are they stable or changing often?</li>\n</ul>\n<p>These questions are especially relevant, as Chromium does not really provide any guarantee about the stability, or even availability, of existing components.</p>\n<p>That said, some layers provided by Chromium change less often than others. 
Some examples:</p>\n<ul>\n<li>The <em>Content API</em> provides the basics of the web platform and Chromium process model, so it is quite useful for hybrid application runtimes. It has been changing in recent years, in part because of the <a href=\"https://mariospr.org/category/blink-onion-soup/\">Onion Soup refactorings</a>. Fortunately, there are always examples of how to adapt to those changes in <code>//content/shell</code> and <code>//chrome</code>.</li>\n<li>Chromium provides a number of reusable components at <code>//components</code> that may be useful for different downstreams.</li>\n<li>Then, for building a full web browser, it may be tempting to directly use <code>//chrome</code>, and modify it for the specific downstream user experience. This means a higher modifying delta. But, as the upstream Chrome browser UI also changes often and heavily, the frequency of conflicts increases.</li>\n</ul>\n<h2 id=\"wrapping-up\" tabindex=\"-1\">Wrapping up <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h2>\n<p>In this post I reviewed different ways to measure the downstream size, and how what we build impacts the complexity of maintenance.</p>\n<p>Understanding and tracking our downstream allows us to implement strategies to keep things under control. 
It also allows us to better understand the cost of a specific feature or an implementation approach.</p>\n<p>In the next post in this series, I will write about how the upstream Chromium community helps the downstreams.</p>\n<h2 id=\"references\" tabindex=\"-1\">References <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2025/02/04/maintaining-chromium-downstream-keeping-it-small/\">#</a></h2>\n<ul>\n<li><a href=\"https://invisible-island.net/diffstat/\">Diffstat</a></li>\n<li><a href=\"https://git-scm.com/docs/git-diff#Documentation/git-diff.txt-code--diff-filterACDMRTUXBcode\">Git diff filters</a></li>\n<li><a href=\"https://chromium.googlesource.com/chromium/src/+/HEAD/content/public/README.md\">Content API</a></li>\n<li><a href=\"https://mariospr.org/category/blink-onion-soup/\">Chromium now migrated to the new C++ Mojo types</a>.</li>\n</ul>\n",
			"date_published": "2025-02-04T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/",
			"url": "https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/",
			"title": "Maintaining Chromium downstream: update strategies",
			"content_html": "<p>This is the second of a series of blog posts I am publishing to share some considerations about the challenges of maintaining a downstream of Chromium.</p>\n<p>The first part, <a href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">Maintaining downstreams of Chromium: why downstreams?</a>, provided initial definitions, and an analysis of why someone would need to maintain a downstream.</p>\n<p>In this post I will focus on the different possible strategies for tracking changes in the upstream <a href=\"https://chromium.org/\">Chromium</a> project, and integrating the downstream changes.</p>\n<h2 id=\"applying-changes\" tabindex=\"-1\">Applying changes <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h2>\n<p>The first problem is related to the fact that our downstream will include additional changes on top of upstream (otherwise we would not need a downstream, right?).</p>\n<p>There are two different approaches for this, based on the <a href=\"https://git-scm.com/\">Git</a> strategy used: rebase vs. 
merge.</p>\n<h4 id=\"merge-strategy\" tabindex=\"-1\">Merge strategy <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>This consists of maintaining an upstream branch and periodically merging its changes into the downstream branch.</p>\n<p>In the case of Chromium, the size of the repository and the volume and frequency of the changes are really big, so the chances that merging causes conflicts are higher than in other, smaller projects.</p>\n<h4 id=\"rebase-strategy\" tabindex=\"-1\">Rebase strategy <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>This consists of maintaining the downstream changes as a series of patches that are applied on top of an upstream baseline.</p>\n<p>Updating means changing the baseline and applying all the patches. When this is done, not all patches will cause a conflict. And, for the ones that do, the complexity is far smaller.</p>\n<h4 id=\"tip-keep-the-downstream-as-small-as-possible\" tabindex=\"-1\">Tip: keep the downstream as small as possible <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>Don’t hesitate to remove features that are more prone to cause conflicts. And, at the very least, think carefully, for each change, whether it can be done in a way that better matches the upstream codebase.</p>\n<p>And, if some of your changes are good for the commons, just contribute them upstream!</p>\n<h2 id=\"when-to-update\" tabindex=\"-1\">When to update? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h2>\n<p>A critical aspect of applying upstream changes is how often to do that.</p>\n<p>The size and structure of the team involved are highly dependent on this decision. 
And, of course, planning the resources.</p>\n<p>Usually this needs to be as predictable as possible, and bound to the upstream release cycle.</p>\n<p>Some examples used in downstream projects:</p>\n<ul>\n<li>\n<p>Weekly or daily tracking.</p>\n</li>\n<li>\n<p>Major releases.</p>\n</li>\n</ul>\n<h4 id=\"upstream-release-policy-and-rebases\" tabindex=\"-1\">Upstream release policy and rebases <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>It is not only important how often you track the upstream repository, but also what you track.</p>\n<p>Chromium, nowadays, follows this procedure for each major release (first number in the version):</p>\n<ul>\n<li>\n<p>Main development happens in the <code>main</code> branch. Several releases are tagged daily. And, mostly every week, one of them is released to the <code>dev</code> channel for users to try (with additional testing).</p>\n</li>\n<li>\n<p>At some point, a stabilization branch is created, entering the <code>beta</code> stage. The quality bar for what is landed in this branch is raised. Only stabilization work is done. More releases per day are tagged, and again, approximately once per week one is released to the <code>beta</code> channel.</p>\n</li>\n<li>\n<p>When planned, the branch becomes <code>stable</code>. This means it is ready for wide user adoption, so the <code>stable</code> channel will pick its releases from this branch.</p>\n</li>\n</ul>\n<p>That means <code>main</code> (the development branch) targets version <code>N</code>, then the <code>N-1</code> branch is <code>beta</code> (or the <em>stabilization</em> branch), and <code>N-2</code> is the <code>stable</code> branch. 
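</p>
<p>The version-to-channel mapping described above can be expressed as a tiny helper (purely illustrative; the value of <code>N</code> below is hypothetical):</p>

```shell
#!/bin/sh
# Illustrative only: given the major version in development on main (N),
# print which major version each channel roughly carries, per the scheme above.
main_version=140          # hypothetical N

echo "dev (main):  $main_version"
echo "beta:        $((main_version - 1))"
echo "stable:      $((main_version - 2))"
```

<p>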
Nowadays Chromium targets publishing a new major <code>stable</code> version every <strong>four</strong> weeks (and that also means a release spends 4 weeks in the <code>beta</code> channel).</p>\n<p>You can read the <a href=\"https://chromium.googlesource.com/chromium/src/+/master/docs/process/release_cycle.md\">official release cycle documentation</a> if you want to know more, including about the <code>extended stable</code> (or long term support) channel.</p>\n<p>Some downstreams will track the upstream <code>main</code> branch. Some others will just start applying the downstream changes when the first release lands in the <code>stable</code> channel. And some may just start when <code>main</code> is branched and stabilization work begins.</p>\n<h4 id=\"tip-update-often\" tabindex=\"-1\">Tip: update often <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>The more often you update, the smaller the change set you need to consider. And that means reducing the complexity of the changes.</p>\n<p>This is especially important when merging instead of rebasing, as applying the new changes is done in one step.</p>\n<p>This could be hard, though, depending on the size of the downstream. And, from time to time, some refactors upstream will still imply non-trivial work to adapt the downstream changes.</p>\n<h4 id=\"tip-update-not-so-often\" tabindex=\"-1\">Tip: update NOT so often <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>OK, OK. I just said the opposite, right?</p>\n<p>Now, think of a complex change upstream, one that implies many intermediate steps, with a moving architecture that evolves as it is developed (something not infrequent in Chromium). 
Updating often means you will need to adapt your downstream to all the intermediate steps.</p>\n<p>That definitely could mean more work.</p>\n<h4 id=\"a-nice-strategy-for-matching-upstream-stable-releases\" tabindex=\"-1\">A nice strategy for matching upstream stable releases <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>If you want to publish a <code>stable</code> release almost the same day or week as the official upstream <code>stable</code> release, and your downstream changes are not trivial, then a good moment to start applying changes is when the stabilization branch is created.</p>\n<p>But in the first days there is a lot of stabilization work happening, as the quality bar is raised. So… waiting a few days after the <code>beta</code> branch is created could be a good idea.</p>\n<p>An idea: just wait for the second version published in the <code>beta</code> channel (the first one happens right after branching). That should still give three full weeks before the version is promoted to the <code>stable</code> channel.</p>\n<h4 id=\"tracking-main-branch-automate-everything\" tabindex=\"-1\">Tracking main branch: automate everything! <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h4>\n<p>In case you want to follow the upstream <code>main</code> branch on a daily basis (or even per commit), it is just not practical to do that manually.</p>\n<p>The solution for that is automation, at different levels:</p>\n<ul>\n<li>\n<p>Applying the changes.</p>\n</li>\n<li>\n<p>Resolving the conflicts.</p>\n</li>\n<li>\n<p>And, most importantly, testing that the result works.</p>\n</li>\n</ul>\n<p>Good automation is, though, expensive. It requires computing power for building Chromium often, and running tests. 
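</p>
<p>As a minimal sketch of the first level (applying the changes), a nightly job can try to rebase the downstream patches onto the latest upstream and report whether manual conflict resolution is needed. The repository below is a scratch simulation; in real life the branch names and the upstream remote would be your own:</p>

```shell
#!/bin/sh
# Sketch: automated rebase attempt of a downstream branch onto upstream,
# simulated here with a throwaway repository.
set -e
REPO=$(mktemp -d)
cd "$REPO"
git init -q
GIT="git -c user.email=bot@example.com -c user.name=bot"
UPSTREAM=$(git symbolic-ref --short HEAD)   # default branch name

echo "base" > file.txt
$GIT add file.txt
$GIT commit -qm "upstream base"
git branch downstream

# Upstream moves forward...
echo "upstream work" >> file.txt
$GIT commit -qam "upstream moves forward"

# ...while the downstream carries its own patch.
git checkout -q downstream
echo "downstream feature" > feature.txt
$GIT add feature.txt
$GIT commit -qm "downstream patch"

# Try the update; on conflict, abort and flag for manual resolution.
if $GIT rebase "$UPSTREAM" >/dev/null 2>&1; then
  STATUS=clean
else
  git rebase --abort 2>/dev/null || true
  STATUS=conflict
fi
echo "rebase status: $STATUS"
```

A real job would additionally fetch the upstream remote, trigger a build, and run the test suites before publishing the rebased branch.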
But, increasing test coverage will detect issues earlier, and give an extra margin for resolving them.</p>\n<p>In any case, no matter what your update strategy is, automation will always be helpful.</p>\n<h2 id=\"next\" tabindex=\"-1\">Next <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h2>\n<p>This is the end of the post. So, what comes next?</p>\n<p>In the next post in this series, I will talk about the downstream size problem, and different approaches for keeping it under control.</p>\n<h2 id=\"references\" tabindex=\"-1\">References <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/09/13/maintaining-chromium-downstream-update-strategies/\">#</a></h2>\n<ul>\n<li>\n<p><a href=\"https://chromium.googlesource.com/chromium/src/+/master/docs/process/release_cycle.md\">Chromium release cycle</a></p>\n</li>\n<li>\n<p><a href=\"https://git-scm.com/book/en/v2/Git-Branching-Basic-Branching-and-Merging\">Git book: Git Branching - Basic Branching and Merging</a></p>\n</li>\n<li>\n<p><a href=\"https://git-scm.com/book/en/v2/Git-Branching-Rebasing\">Git book: Git Branching - Rebasing</a></p>\n</li>\n</ul>\n",
			"date_published": "2024-09-13T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/",
			"url": "https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/",
			"title": "Maintaining downstreams of Chromium: why downstream?",
			"content_html": "<p><a href=\"https://www.chromium.org/Home/\">Chromium</a>, the web browser open source project <a href=\"https://www.google.com/intl/en_us/chrome/\">Google Chrome</a> is based on, can nowadays be considered the reference implementation of the web platform. As such, it is the first choice when implementing the web platform in a software platform or product.</p>\n<p>Why is it like this? In this blog post I am going to introduce the topic, and then review the different reasons why a downstream Chromium is used in different projects.</p>\n<h2 id=\"a-series-of-blog-posts\" tabindex=\"-1\">A series of blog posts <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h2>\n<p>This is the first of a series of blog posts, where I will go through several aspects and challenges of maintaining a downstream project using Chromium as its upstream project.</p>\n<p>They will be mostly based on the discussions in two events. First, on <a href=\"https://github.com/Igalia/webengineshackfest/issues/9\">The Web Engines Hackfest 2023 break out session</a> with the same title, which I chaired in A Coruña. Then, on my <a href=\"https://youtu.be/N47g8V9y7pc\">BlinkOn 18 talk</a> in November 2023, at Sunnyvale.</p>\n<h2 id=\"some-definitions\" tabindex=\"-1\">Some definitions <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h2>\n<p>Before starting the discussion of the different aspects, let’s clarify how I will use several terms.</p>\n<h4 id=\"repository-vs-project\" tabindex=\"-1\">Repository vs. project <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h4>\n<p>I am going to refer to a repository strictly as version-controlled storage of code. 
Then, a project (specifically, a software project) is the community of people that share goals and some kind of organization, to maintain one or several software products.</p>\n<p>So, a project may use several repositories for its goals.</p>\n<p>In this discussion I will talk about Chromium, an open source project that targets the implementation of the web platform user agent, a web browser, for different platforms. As such, it uses a number of repositories (<code>src</code>, <code>v8</code> and more).</p>\n<h4 id=\"downstream-and-upstream\" tabindex=\"-1\">Downstream and upstream <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h4>\n<p>I will use the terms downstream and upstream to refer to the relationship between different software projects’ version control repositories.</p>\n<p>If there is a software project repository (typically open source), and a new repository is created that contains all or part of the original repository, then:</p>\n<ul>\n<li>The upstream project is the original repository.</li>\n<li>The downstream project is the new repository.</li>\n</ul>\n<p>It is important to highlight that different things can happen to the downstream repository:</p>\n<ul>\n<li>This copy could be a one-time event, so the downstream repository becomes an independent fork, and there may be no interest in tracking the upstream evolution. This happens often for abandoned repositories, where a different set of people start an independent project. But there could be other reasons.</li>\n<li>There is the intent to track the upstream repository changes. So the downstream repository evolves as the upstream repository does, but with some specific differences maintained on top of the original repository.</li>\n</ul>\n<h2 id=\"why-using-chromium\" tabindex=\"-1\">Why using Chromium? 
<a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h2>\n<p>Nowadays, the web platform is a solid alternative for providing content to users. It allows modern user interfaces based on well-known standards, and integrates well with local and remote services. The gap between native applications and web content has been reduced, so it is quite often a good alternative.</p>\n<p>But, when integrating web content, product integrators need an implementation of the web platform. It is no surprise that Chromium is the most used, for a number of reasons:</p>\n<ul>\n<li>It is open source, with a license that allows adapting it to new product needs.</li>\n<li>It is well maintained and up to date, even pushing through standardization to improve it continuously.</li>\n<li>It is secure, both from the architecture and the maintenance model point of view.</li>\n<li>It provides integration points to tailor the implementation to one’s needs.</li>\n<li>It supports the most popular software platforms (Windows, Android, Linux, …) for integrating new products.</li>\n<li>On top of the web platform itself, it provides an implementation for many of the components required to build a modern web browser.</li>\n</ul>\n<p>Still, there are other good alternative choices for integrating the web, such as <a href=\"https://webkit.org/\">WebKit</a> (especially <a href=\"https://wpewebkit.org/\">WPE</a> for embedded use cases), or using the system-provided web components (Android or iOS web view, …).</p>\n<p>Though, in this blog post I will focus on the Chromium case.</p>\n<h2 id=\"why-downstreaming-chromium\" tabindex=\"-1\">Why downstreaming Chromium? 
<a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h2>\n<p>But why do different projects need to use downstream Chromium repositories?</p>\n<p>The simplest reason a project needs a downstream repository is to add changes that are not upstream. This can be for a variety of reasons:</p>\n<ul>\n<li>Downstream changes that are not allowed by upstream, e.g. because they would make the upstream project harder to maintain, or because they would not be tested often.</li>\n<li>Downstream changes that the downstream project does not want to add to upstream.</li>\n</ul>\n<p>Let’s see some examples of changes of both types.</p>\n<h3 id=\"hardware-and-os-adaptation\" tabindex=\"-1\">Hardware and OS adaptation <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h3>\n<p>This is when downstream adds support for a hardware target or OS that is not originally supported by the upstream Chromium project.</p>\n<p>Chromium upstream provides an abstraction layer for that purpose, named <a href=\"https://chromium.googlesource.com/chromium/src/+/HEAD/docs/ozone_overview.md\">Ozone</a>, that allows adapting it to the OS, desktop environment, and system graphics compositor. But there are other abstraction layers for media acceleration, accessibility or input methods.</p>\n<p>The <a href=\"https://chromium.googlesource.com/chromium/src.git/+/refs/heads/main/ui/ozone/platform/wayland/\">Wayland protocol adaptation</a> started as a downstream effort, as upstream Chromium did not intend to support Wayland at that time. Eventually it evolved into an official upstream Ozone backend.</p>\n<p>An example? 
<a href=\"https://www.webosose.org/\">LGE webOS</a> Chromium port.</p>\n<h3 id=\"differentiation\" tabindex=\"-1\">Differentiation <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h3>\n<p>The previous case mostly forces having a downstream project or repository. But there are also cases where this is intended: there is the will to have some features in the downstream repository and not in upstream, an intended differentiation.</p>\n<p>Why would anybody want that? Some typical examples:</p>\n<ul>\n<li>A set of features that the downstream project owners consider to make the project better in some way, and that they want to keep downstream. This can happen when a new browser is shipped, and it contains features that make the product offering different, and in some ways better, than upstream Chrome. That can be a different user experience, some security features, better privacy…</li>\n<li>Adaptation to a different product brand. 
Each browser or browser-based product will want to have its specific brand instead of the upstream Chromium brand.</li>\n</ul>\n<p>Examples of this:</p>\n<ul>\n<li><a href=\"https://brave.com/\">Brave browser</a>, with completely different privacy and security choices.</li>\n<li><a href=\"https://arc.net/\">ARC browser</a>, with an innovative user experience.</li>\n<li><a href=\"https://www.microsoft.com/en-us/edge\">Microsoft Edge</a>, with tight Windows OS integration and corporate features.</li>\n</ul>\n<h3 id=\"hybrid-application-runtimes\" tabindex=\"-1\">Hybrid application runtimes <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h3>\n<p>And one last interesting case: integrating the web platform for developing hybrid applications, those that mix parts of the user interface implemented in a native toolkit with parts implemented using the web platform.</p>\n<p>Though Chromium includes official support for hybrid applications on Android, with the Android WebView, other toolkits also provide web application support, and in Chromium’s case the integration of those belongs to downstream projects.</p>\n<p>Examples?</p>\n<ul>\n<li><a href=\"https://doc.qt.io/qt-6/qtwebengine-overview.html\">Qt Web Engine</a>.</li>\n<li><a href=\"https://bitbucket.org/chromiumembedded/cef/\">CEF</a>.</li>\n</ul>\n<h2 id=\"what-s-next\" tabindex=\"-1\">What’s next? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2024/03/05/maintaining-downstreams-of-chromium-why-downstream/\">#</a></h2>\n<p>In this blog post I presented different reasons why projects end up maintaining a downstream fork of Chromium.</p>\n<p>In the next blog post I will present one of the main challenges when maintaining a downstream of Chromium: the different rebase and upgrade strategies.</p>\n",
			"date_published": "2024-03-05T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/",
			"url": "https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/",
			"title": "V8 profiling instrumentation overhead",
			"content_html": "<p>In the last 2 years, I have been contributing to improve the performance of V8 profiling instrumentation in different scenarios, both for Linux (using <a href=\"https://perf.wiki.kernel.org/index.php/Main_Page\">Perf</a>) and for Windows (using <a href=\"https://learn.microsoft.com/en-us/windows-hardware/test/wpt/\">Windows Performance Toolkit</a>).</p>\n<p>My work has been mostly improving the stack walk instrumentation that allows Javascript generated code to show in the stack traces, referring to the original code, in the same way native compiled code in C or C++ can provide that information.</p>\n<p>In this blog post I run different benchmarks to determine how much overhead this instrumentation introduces.</p>\n<h2 id=\"how-can-you-enable-profiling-instrumentation\" tabindex=\"-1\">How can you enable profiling instrumentation? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>V8 implements system native profile instrumentation for the Javascript code, Both for Linux Perf and for Windows Performance Toolkit. Though, those are disabled by default.</p>\n<p>For Windows, the command line switch <code>--enable-etw-stack-walking</code> instruments the generated Javascript code, to emit the source code information for the compiled functions using ETW (<a href=\"https://learn.microsoft.com/en-us/windows-hardware/test/wpt/event-tracing-for-windows\">Event Tracing for Windows</a>). This gives a better insight of time spent in different functions, specially when stack profiling is enabled in Windows Performance Toolkit.</p>\n<p>For Linux Perf, there are several command line switches: - <code>--perf-basic-prof</code>: this emits, for any Javascript generated code, its memory location and a descriptive name (that includes the source location of the function). 
- <code>--perf-basic-prof-only-functions</code>: same as the previous one, but only emitting functions, excluding other stubs as regular expressions. - <code>--perf-prof</code>: this is a different way to provide the information for Linux Perf. It generates a more complex format specified <a href=\"https://github.com/torvalds/linux/blob/master/tools/perf/Documentation/jitdump-specification.txt\">here</a>. On top of the addresses of the functions, it also includes source code information, even with details about the exact line of code inside the function that is executed in a sample. - <code>--perf-prof-annotate-wasm</code>: this one extends <code>--perf-prof</code>, to add debugging information to WASM code. - <code>--perf-prof-unwinding-info</code>: This last one also extends <code>--perf-prof</code>, but providing experimental support for unwinding info.</p>\n<p>And then, we have interpreted code support. V8 does not generate JIT code for everything, and in many cases it runs interpreted code that calls common builtin code. So, in a stack trace, instead of seeing the Javascript method that eventually runs those methods through the interpreter, we see those common methods. The solution? Using <code>--interpreted-frames-native-stack</code>, that adds additional information that identifies which Javascript method in the stack is actually interpreted. This is basic to understand the attribution of running code, especially if it is not considered hot enough to be compiled.</p>\n<h2 id=\"they-are-disabled-by-default-is-it-a-problem\" tabindex=\"-1\">They are disabled by default, is it a problem? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>We would ideally want to have all of this profiling support always enabled when we are profiling code. 
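</p>\n<p>For reference, this is roughly what the manual workflow looks like today on Linux (a sketch, assuming Node.js as the V8 embedder; <code>app.js</code> is a placeholder for your workload):</p>\n<pre class=\"language-shell\" tabindex=\"0\"><code class=\"language-shell\"># Emit jitdump files for the generated code, and record with the<br># monotonic clock so perf inject can correlate timestamps.<br>node --perf-prof app.js &amp;<br>perf record -k mono -g -p $! -- sleep 10<br><br># Merge the jit-&lt;pid&gt;.dump information into the recording.<br>perf inject --jit -i perf.data -o perf.jit.data<br>perf report -i perf.jit.data</code></pre>\n<p>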
For that, we would want to avoid having to pass any command line switch, so we can profile Javascript workloads without any additional configuration.</p>\n<p>But all these command line switches are disabled by default. Why? There are several reasons.</p>\n<h3 id=\"what-happens-on-linux\" tabindex=\"-1\">What happens on Linux? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h3>\n<p>On Linux, the Perf profiling instrumentation will unconditionally generate additional files, as it cannot know whether Perf is running or not. The <code>--perf-basic-prof</code> backend writes the generated information to <code>.map</code> files, and the <code>--perf-prof</code> backend writes additional information to <code>jit-*.dump</code> files that will be used later with <code>perf inject -j</code>.</p>\n<p>Additionally, the instrumentation requires code compaction to be disabled. This is because code compaction will change the memory location of the compiled code. V8 generates <code>CODE_MOVE</code> internal events for profiler instrumentation, but the ETW and Linux Perf backends do not handle those events, for a variety of reasons. This has an impact on memory, as there is no way to compact the space allocated for code without moving it.</p>\n<p>So, when we enable any of the Linux Perf command line switches, we both generate extra files and use more memory.</p>\n<h3 id=\"and-what-about-windows\" tabindex=\"-1\">And what about Windows? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h3>\n<p>On Windows we have the same problem with code compaction. 
We do not support generating <code>CODE_MOVE</code> events, so, while profiling, we cannot enable code compaction.</p>\n<p>And the emitted ETW events with JIT code position information still take memory space and make the profile recordings bigger.</p>\n<p>But there is an important difference: the Windows ETW API allows applications to know when they are being profiled, and even to filter that. V8 only emits the code location information for ETW if the application is being profiled, and code compaction is only disabled while profiling is ongoing.</p>\n<p>That means the overhead of enabling <code>--enable-etw-stack-walking</code> is expected to be minimal.</p>\n<p>What about <code>--interpreted-frames-native-stack</code>? It has also been optimized to generate the extra stack information only when a profile is being recorded.</p>\n<h2 id=\"can-we-enable-them-by-default\" tabindex=\"-1\">Can we enable them by default? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>For Linux, the answer is a clear no. Any V8 workload would generate profiling information in files, and disabling code compaction makes memory usage higher. These problems happen whether you are recording a profile or not.</p>\n<p>But Windows ETW support adds overhead only when profiling! It looks like it could make sense to enable both <code>--interpreted-frames-native-stack</code> and <code>--enable-etw-stack-walking</code> for Windows.</p>\n<h2 id=\"are-we-there-yet-overhead-analysis\" tabindex=\"-1\">Are we there yet? Overhead analysis <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>So first, we need to verify the actual overhead, because we do not want to introduce a regression by enabling any of these options. I also want to confirm that the actual overhead on Linux matches the expectation. 
To do that, I am showing the results of running the CSuite benchmarks included as part of V8, capturing both CPU and memory usage.</p>\n<h3 id=\"linux-benchmarks\" tabindex=\"-1\">Linux benchmarks <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h3>\n<p>These are the results obtained on Linux.</p>\n<p>Legend:</p>\n<ul>\n<li>REC: recording a profile (Yes, No).</li>\n<li>No flag: running the test without any extra command line flag.</li>\n<li>BP: passing <code>--perf-basic-prof</code>.</li>\n<li>IFNS: passing <code>--interpreted-frames-native-stack</code>.</li>\n<li>BP+IFNS: passing both <code>--perf-basic-prof</code> and <code>--interpreted-frames-native-stack</code>.</li>\n</ul>\n<p>Linux results - score:</p>\n<table>\n<thead>\n<tr>\n<th>Test</th>\n<th>REC</th>\n<th>Better</th>\n<th>No flag</th>\n<th>BP</th>\n<th>IFNS</th>\n<th>BP+IFNS</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>Octane</td>\n<td>No</td>\n<td>Higher</td>\n<td>9101.2</td>\n<td>8953.3</td>\n<td>9077.1</td>\n<td>9097.3</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>&quot;</td>\n<td>9112.8</td>\n<td>9041.7</td>\n<td>9004.8</td>\n<td>9093.9</td>\n</tr>\n<tr>\n<td>Kraken</td>\n<td>No</td>\n<td>Lower</td>\n<td>4060.3</td>\n<td>4108.9</td>\n<td>4076.6</td>\n<td>4119.9</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>&quot;</td>\n<td>4078.1</td>\n<td>4141.7</td>\n<td>4083.2</td>\n<td>4131.3</td>\n</tr>\n<tr>\n<td>Sunspider</td>\n<td>No</td>\n<td>Lower</td>\n<td>595.7</td>\n<td>622.8</td>\n<td>595.4</td>\n<td>627.8</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>&quot;</td>\n<td>598.5</td>\n<td>626.6</td>\n<td>599.3</td>\n<td>633.3</td>\n</tr>\n</tbody>\n</table>\n<p>Linux results - memory (in kilobytes):</p>\n<table>\n<thead>\n<tr>\n<th>Test</th>\n<th>REC</th>\n<th>No 
flag</th>\n<th>BP</th>\n<th>IFNS</th>\n<th>BP+IFNS</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>Octane</td>\n<td>No</td>\n<td>244040.0</td>\n<td>249152.0</td>\n<td>243442.7</td>\n<td>253533.8</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>242234.7</td>\n<td>245010.7</td>\n<td>245632.9</td>\n<td>252108.0</td>\n</tr>\n<tr>\n<td>Kraken</td>\n<td>No</td>\n<td>46039.1</td>\n<td>46169.5</td>\n<td>46009.1</td>\n<td>46497.1</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>46002.1</td>\n<td>46187.0</td>\n<td>46024.4</td>\n<td>46520.5</td>\n</tr>\n<tr>\n<td>Sunspider</td>\n<td>No</td>\n<td>90267.0</td>\n<td>90857.2</td>\n<td>90214.6</td>\n<td>91100.4</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>90210.3</td>\n<td>90948.4</td>\n<td>90195.9</td>\n<td>91110.8</td>\n</tr>\n</tbody>\n</table>\n<p>What is seen in the results?</p>\n<ul>\n<li>There is apparently no CPU or memory overhead from <code>--interpreted-frames-native-stack</code> alone. Though there is an outlier in the Octane memory usage while recording, which could be an error in the sampling process. This is expected, as the additional information is only generated while any profiling instrumentation is enabled.</li>\n<li>There is no clear extra cost in memory for recording vs. not recording. This is expected, as the extra information only depends on having the switches enabled. It does not detect when recording is ongoing.</li>\n<li>There is, as expected, a CPU cost while Perf is recording, in most of the cases.</li>\n<li><code>--perf-basic-prof</code> has CPU and memory impact. And combined with <code>--interpreted-frames-native-stack</code>, the impact is even higher, as expected.</li>\n</ul>\n<p>This is on top of the fact that files are generated. The benchmarks support not enabling <code>--perf-basic-prof</code> by default. 
But as <code>--interpreted-frames-native-stack</code> only has overhead in combination with the profiling switches, it could make sense to consider enabling it on Linux.</p>\n<h3 id=\"windows-benchmarks\" tabindex=\"-1\">Windows benchmarks <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h3>\n<p>What about the Windows results?</p>\n<p>Legend:</p>\n<ul>\n<li>No flag: running the test without any extra command line flag.</li>\n<li>ETW: passing <code>--enable-etw-stack-walking</code>.</li>\n<li>IFNS: passing <code>--interpreted-frames-native-stack</code>.</li>\n<li>ETW+IFNS: passing both <code>--enable-etw-stack-walking</code> and <code>--interpreted-frames-native-stack</code>.</li>\n</ul>\n<p>Windows results - score:</p>\n<table>\n<thead>\n<tr>\n<th>Test</th>\n<th>REC</th>\n<th>Better</th>\n<th>No flag</th>\n<th>ETW</th>\n<th>IFNS</th>\n<th>ETW+IFNS</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>Octane</td>\n<td>No</td>\n<td>Higher</td>\n<td>8323.8</td>\n<td>8336.2</td>\n<td>8308.7</td>\n<td>8310.4</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>&quot;</td>\n<td>8050.9</td>\n<td>7991.3</td>\n<td>8068.8</td>\n<td>7900.6</td>\n</tr>\n<tr>\n<td>Kraken</td>\n<td>No</td>\n<td>Lower</td>\n<td>4273.0</td>\n<td>4294.4</td>\n<td>4303.7</td>\n<td>4279.7</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>&quot;</td>\n<td>4380.2</td>\n<td>4416.4</td>\n<td>4433.0</td>\n<td>4413.9</td>\n</tr>\n<tr>\n<td>Sunspider</td>\n<td>No</td>\n<td>Lower</td>\n<td>671.1</td>\n<td>670.8</td>\n<td>672.0</td>\n<td>671.5</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Yes</td>\n<td>&quot;</td>\n<td>693.2</td>\n<td>703.9</td>\n<td>690.9</td>\n<td>716.2</td>\n</tr>\n</tbody>\n</table>\n<p>Windows results - memory:</p>\n<table>\n<thead>\n<tr>\n<th>Test</th>\n<th>REC</th>\n<th>No flag</th>\n<th>ETW</th>\n<th>IFNS</th>\n<th>ETW+IFNS</th>\n</tr>\n</thead>\n<tbody>\n<tr>\n<td>Octane</td>\n<td>Not 
recording</td>\n<td>202535.1</td>\n<td>204944.9</td>\n<td>200700.4</td>\n<td>203557.8</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Recording</td>\n<td>205125.8</td>\n<td>204801.3</td>\n<td>206856.9</td>\n<td>209260.4</td>\n</tr>\n<tr>\n<td>Kraken</td>\n<td>Not recording</td>\n<td>76188.2</td>\n<td>76095.2</td>\n<td>76102.1</td>\n<td>76188.5</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Recording</td>\n<td>76031.6</td>\n<td>76265.0</td>\n<td>76215.6</td>\n<td>76662.0</td>\n</tr>\n<tr>\n<td>Sunspider</td>\n<td>Not recording</td>\n<td>31784.9</td>\n<td>31888.4</td>\n<td>31806.9</td>\n<td>31882.7</td>\n</tr>\n<tr>\n<td>&quot;</td>\n<td>Recording</td>\n<td>31848.9</td>\n<td>31882.1</td>\n<td>31802.7</td>\n<td>32339.0</td>\n</tr>\n</tbody>\n</table>\n<p>What is observed?</p>\n<ul>\n<li>No memory or CPU overhead is observed when recording is not ongoing, with <code>--enable-etw-stack-walking</code> and/or <code>--interpreted-frames-native-stack</code>.</li>\n<li>Even when recording, there is no clearly visible overhead from <code>--interpreted-frames-native-stack</code>.</li>\n<li>When recording, <code>--enable-etw-stack-walking</code> has an impact on CPU, but not on memory.</li>\n<li>When <code>--enable-etw-stack-walking</code> and <code>--interpreted-frames-native-stack</code> are combined, while recording, both memory and CPU overheads are observed.</li>\n</ul>\n<p>So, again, it should be possible to enable <code>--interpreted-frames-native-stack</code> on Windows by default, as it only has an impact while recording a profile with <code>--enable-etw-stack-walking</code>. And it should be possible to enable <code>--enable-etw-stack-walking</code> by default too as, again, the impact happens only when a profile is recorded.</p>\n<h2 id=\"windows-there-are-still-problems\" tabindex=\"-1\">Windows: there are still problems <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>Is it that simple? 
Is it OK to add overhead only when recording a profile?</p>\n<p>One problem with this is that there is an impact on system-wide recordings, even if the focus is not on Javascript execution.</p>\n<p>Also, the CPU impact is not linear. Most of it happens when a profile recording starts: V8 emits the code position information for all the already existing Javascript functions and builtins. Now, imagine a regular desktop with several V8-based browsers, such as Chrome and Edge, with all their tabs. Or a server with many NodeJS instances. All of them generate the information for all their methods at the same time.</p>\n<p>So the impact of the overhead is significant, even if it only happens while recording.</p>\n<h2 id=\"next-steps\" tabindex=\"-1\">Next steps <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>For <code>--interpreted-frames-native-stack</code>, it looks like it would be better to propose enabling it by default on all the studied platforms (Windows and Linux). It significantly improves the profiling instrumentation, and it has an impact only when actual instrumentation is used. And it still allows disabling it for the specific cases where recording the original builtins is preferred.</p>\n<p>For <code>--enable-etw-stack-walking</code>, it could make sense to also propose enabling it by default, even with the known overhead while recording profiles, and just make sure there are no surprise regressions.</p>\n<p>But, likely, the best option is trying to reduce that overhead. First, by allowing Windows Performance Recorder to better filter which processes generate the additional information. And then, also, by reducing the initial overhead as much as possible. E.g. a big part of it is emitting the builtins and the V8 snapshot compiled functions for each of the contexts again and again. 
Ideally we want to avoid that duplication if possible.</p>\n<h2 id=\"wrapping-up\" tabindex=\"-1\">Wrapping up <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>While on Linux enabling the profiling instrumentation with command line switches is not an option, on Windows the dynamic enabling of instrumentation makes it reasonable to consider enabling ETW support by default. But there are still optimization opportunities that could be addressed first.</p>\n<h2 id=\"thanks\" tabindex=\"-1\">Thanks! <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/16/v8-profiling-instrumentation-overhead/\">#</a></h2>\n<p>This report has been done as part of the collaboration between <a href=\"https://techatbloomberg.com\">Bloomberg</a> and <a href=\"https://www.igalia.com\">Igalia</a>. Thanks!</p>\n<p><a href=\"https://www.igalia.com\"><picture>\n<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://blogs.igalia.com/dape/img/igalia-logo-white-text.svg\">\n<img src=\"https://blogs.igalia.com/dape/img/igalia_-_500px_-_RGB_-_Feb23-580x210.png\" alt=\"Igalia\">\n</picture></a> <a href=\"https://techatbloomberg.com\"><img src=\"https://blogs.igalia.com/dape/img/Bloomberg-logo-580x117.png\" alt=\"Bloomberg\" class=\"dark-invert\"></a></p>\n",
			"date_published": "2023-11-16T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/",
			"url": "https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/",
			"title": "Keep GCC running (2023 update)",
			"content_html": "<p>Last week I attended <a href=\"https://www.youtube.com/playlist?list=PL9ioqAuyl6UKYm7EYVa7FcKCR2kDCudII\">BlinkOn 18</a> in Sunnyvale Google offices. For the 5th time I presented the status of Chromium build using the GNU toolchain components: GCC compiler and libstdc++ standard library implementation.</p>\n<p>This blog post recaps the current status in 2023, as it was presented in the conference.</p>\n<h2 id=\"current-status\" tabindex=\"-1\">Current status <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h2>\n<p>First things first: GCC is still maintained, and working, on the official development releases! Though, as official build bots will not check that, fixes usually take a few extra days to land.</p>\n<p>This is the result of the work from contributors of several companies (<a href=\"https://www.igalia.com\">Igalia</a>, <a href=\"https://webos.developer.lge.com/\">LGE</a>, Vewd and others). But most important, from individual contributors (last 2 years the main contributor, with more than 50% of the commit has been Stephan Hartmann from Gentoo).</p>\n<h2 id=\"gcc-support\" tabindex=\"-1\">GCC support <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h2>\n<p>The work to support GCC is coordinated from the <a href=\"https://crbug.com/819294\">GCC Chromium meta bug</a>.</p>\n<p>Since I started tracking the GCC support we have been getting a quite stable number of contributions, 70-100 per year. Though in 2023 (even if we are counting only 8 months on this chart), we are way below 40.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/images/gcc_stats_2023-1.png\"><img src=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/images/gcc_stats_2023-1-580x290.png\" alt=\"\" class=\"dark-invert\"></a></p>\n<p>What happened? I am not really sure. 
Though, I have some ideas. First, simply, as we move to newer versions of GCC, the implementation of recent C++ standards has improved. But this is also the result of the great work that has been done recently in both GCC and LLVM, and in the standardization process, to get more interoperable implementations.</p>\n<h3 id=\"main-gcc-problems\" tabindex=\"-1\">Main GCC problems <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h3>\n<p>Now, I will focus on the most frequent causes of build breakage affecting GCC since January 2022.</p>\n<h4 id=\"variable-clashes\" tabindex=\"-1\">Variable clashes <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>The rules for visibility in C++ are slightly different in GCC. An example: if a class declares a getter with the same name as a type declared in the current namespace, Clang will mostly be OK with that. But GCC will fail with an error.</p>\n<p>Bad:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">using</span> Foo <span class=\"token operator\">=</span> <span class=\"token keyword\">int</span><span class=\"token punctuation\">;</span><br><br><span class=\"token keyword\">class</span> <span class=\"token class-name\">A</span> <span class=\"token punctuation\">{</span><br>    <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><br>    Foo <span class=\"token function\">Foo</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br>    <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><br><span class=\"token punctuation\">}</span><span class=\"token punctuation\">;</span></code></pre>\n<p>A possible fix is renaming the accessor method:</p>\n<pre 
class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\">Foo <span class=\"token function\">GetFoo</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span></code></pre>\n<p>Or using an explicit namespace for the type:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token double-colon punctuation\">::</span>Foo <span class=\"token function\">GetFoo</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span></code></pre>\n<h4 id=\"ambiguous-constructors\" tabindex=\"-1\">Ambiguous constructors <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>GCC may sometimes fail to resolve which constructor to use when there is an implicit type conversion. To avoid that, we can make the conversion explicit, or use braced initializers everywhere.</p>\n<h4 id=\"constexpr-is-more-strict-in-gcc\" tabindex=\"-1\"><code>constexpr</code> is more strict in GCC <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>In GCC, a <code>constexpr</code> method declared as <code>default</code> demands that all its parts are also declared <code>constexpr</code>:</p>\n<p>Bad:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">int</span> <span class=\"token function\">MethodA</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span> <span class=\"token punctuation\">{</span> <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span> <span class=\"token punctuation\">}</span><span class=\"token punctuation\">;</span><br><br><span class=\"token keyword\">constexpr</span> <span class=\"token keyword\">int</span> <span class=\"token function\">MethodB</span><span class=\"token punctuation\">(</span><span 
class=\"token punctuation\">)</span> <span class=\"token punctuation\">{</span> <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span> <span class=\"token function\">MethodA</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span> <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span> <span class=\"token punctuation\">}</span><span class=\"token punctuation\">;</span></code></pre>\n<p>Two possible fixes: either drop <code>constexpr</code> from the calling method, or add <code>constexpr</code> to all the used methods.</p>\n<h4 id=\"noexcept-strictness\" tabindex=\"-1\"><code>noexcept</code> strictness <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p><code>noexcept</code> strictness in GCC and Clang is the same when exceptions are enabled, requiring that all functions invoked from a <code>noexcept</code> function are also <code>noexcept</code>. Though, when exceptions are disabled using <code>-fno-exceptions</code>, Clang will ignore the errors. 
But GCC will still check the rules.</p>\n<p>This works in Clang and not in GCC if <code>-fno-exceptions</code> is set:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">class</span> <span class=\"token class-name\">A</span> <span class=\"token punctuation\">{</span><br>    <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><br>    <span class=\"token keyword\">virtual</span> <span class=\"token keyword\">int</span> <span class=\"token function\">Method</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br><span class=\"token punctuation\">}</span><span class=\"token punctuation\">;</span><br><br><span class=\"token keyword\">class</span> <span class=\"token class-name\">B</span> <span class=\"token operator\">:</span> <span class=\"token keyword\">public</span> A <span class=\"token punctuation\">{</span><br>    <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><br>    <span class=\"token keyword\">int</span> <span class=\"token function\">Method</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span> <span class=\"token keyword\">noexcept</span> <span class=\"token keyword\">override</span><span class=\"token punctuation\">;</span><br><span class=\"token punctuation\">}</span><span class=\"token punctuation\">;</span></code></pre>\n<h4 id=\"cpu-intrinsics-casts\" tabindex=\"-1\">CPU intrinsics casts <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>Implicit casts of CPU intrinsic types will fail in GCC, requiring an explicit conversion or cast.</p>\n<p>As an example:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">int8_t</span> input <span class=\"token function\">__attribute__</span><span class=\"token 
punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token function\">vector_size</span><span class=\"token punctuation\">(</span><span class=\"token number\">16</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br><span class=\"token keyword\">uint16_t</span> output <span class=\"token operator\">=</span> <span class=\"token function\">_mm_movemask_epi8</span><span class=\"token punctuation\">(</span>input<span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span></code></pre>\n<p>If we see the declaration of the intrinsic call:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">int</span> <span class=\"token function\">_mm_movemask_epi8</span> <span class=\"token punctuation\">(</span>__m128i a<span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span></code></pre>\n<p>GCC will not allow implicit casting from an <code>int8_t __attribute__((vector_size(16)))</code>, though the storage is the same. 
In GCC we require a <code>reinterpret_cast</code>:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">int8_t</span> input <span class=\"token function\">__attribute__</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">(</span><span class=\"token function\">vector_size</span><span class=\"token punctuation\">(</span><span class=\"token number\">16</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br><span class=\"token keyword\">uint16_t</span> output <span class=\"token operator\">=</span> <span class=\"token function\">_mm_movemask_epi8</span><span class=\"token punctuation\">(</span><span class=\"token generic-function\"><span class=\"token function\">reinterpret_cast</span><span class=\"token generic class-name\"><span class=\"token operator\">&lt;</span>__m128i<span class=\"token operator\">></span></span></span><span class=\"token punctuation\">(</span>input<span class=\"token punctuation\">)</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span></code></pre>\n<h4 id=\"template-specializations-need-to-be-in-a-namespace-scope\" tabindex=\"-1\">Template specializations need to be in a namespace scope <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>Template specializations are not allowed in GCC if they are not in a <code>namespace</code> scope. 
Usually this error materializes when developers add the template specialization inside the templated class scope.</p>\n<p>A failing example:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">namespace</span> a <span class=\"token punctuation\">{</span><br>    <span class=\"token keyword\">class</span> <span class=\"token class-name\">A</span> <span class=\"token punctuation\">{</span><br>        <span class=\"token keyword\">template</span><span class=\"token operator\">&lt;</span><span class=\"token keyword\">typename</span> <span class=\"token class-name\">T</span><span class=\"token operator\">></span><br>        T <span class=\"token function\">Foo</span><span class=\"token punctuation\">(</span><span class=\"token keyword\">const</span> T<span class=\"token operator\">&amp;</span> t<span class=\"token punctuation\">)</span> <span class=\"token punctuation\">{</span><br>            <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><br>        <span class=\"token punctuation\">}</span><br><br>        <span class=\"token keyword\">template</span><span class=\"token operator\">&lt;</span><span class=\"token operator\">></span><br>        size_t <span class=\"token function\">Foo</span><span class=\"token punctuation\">(</span><span class=\"token keyword\">const</span> size_t<span class=\"token operator\">&amp;</span> t<span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span><br>    <span class=\"token punctuation\">}</span><span class=\"token punctuation\">;</span><br><span class=\"token punctuation\">}</span></code></pre>\n<p>The fix is moving the template specialization to the namespace scope:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">namespace</span> a <span class=\"token punctuation\">{</span><br>    
<span class=\"token keyword\">class</span> <span class=\"token class-name\">A</span> <span class=\"token punctuation\">{</span><br>        <span class=\"token keyword\">template</span><span class=\"token operator\">&lt;</span><span class=\"token keyword\">typename</span> <span class=\"token class-name\">T</span><span class=\"token operator\">></span><br>        T <span class=\"token function\">Foo</span><span class=\"token punctuation\">(</span><span class=\"token keyword\">const</span> T<span class=\"token operator\">&amp;</span> t<span class=\"token punctuation\">)</span> <span class=\"token punctuation\">{</span><br>            <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><br>        <span class=\"token punctuation\">}</span><br>    <span class=\"token punctuation\">}</span><span class=\"token punctuation\">;</span><br><br>    <span class=\"token keyword\">template</span><span class=\"token operator\">&lt;</span><span class=\"token operator\">></span><br>    size_t <span class=\"token class-name\">A</span><span class=\"token double-colon punctuation\">::</span><span class=\"token function\">Foo</span><span class=\"token punctuation\">(</span><span class=\"token keyword\">const</span> size_t<span class=\"token operator\">&amp;</span> t<span class=\"token punctuation\">)</span> <span class=\"token punctuation\">{</span><br>        <span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><br>    <span class=\"token punctuation\">}</span><br><span class=\"token punctuation\">}</span></code></pre>\n<h2 id=\"libstdc-support\" tabindex=\"-1\">libstdc++ support <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h2>\n<p>Regarding the C++ standard library implementation from GNU 
project, the work is coordinated in the <a href=\"https://crbug.com/957519\">libstdc++ Chromium meta bug</a>.</p>\n<p>We have definitely observed a big increase in required fixes in 2022, and the projection for 2023 is similar.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/images/libstdcxx_stats_2023-1.png\"><img src=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/images/libstdcxx_stats_2023-1-580x290.png\" alt=\"\" class=\"dark-invert\"></a></p>\n<p>In this case there are two possible reasons: one is that we are more exhaustively tracking the work using the meta bug; the other is that the number of breakages has actually grown.</p>\n<h3 id=\"main-libstdc-problems\" tabindex=\"-1\">Main libstdc++ problems <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h3>\n<p>In the case of libstdc++, these are the most frequent causes of build breakage since January 2022.</p>\n<h4 id=\"missing-includes\" tabindex=\"-1\">Missing includes <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>The libc++ implementation is different from libstdc++. That implies some library headers that are indirectly included when building with libc++ are not indirectly included with libstdc++, and need to be included explicitly.</p>\n<h4 id=\"stl-containers-not-allowing-const-members\" tabindex=\"-1\">STL containers not allowing const members <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>Let’s see this code:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\">std<span class=\"token double-colon punctuation\">::</span>vector<span class=\"token operator\">&lt;</span><span class=\"token keyword\">const</span> <span class=\"token keyword\">int</span><span class=\"token operator\">></span> v<span class=\"token punctuation\">;</span></code></pre>\n<p>In Clang, this is allowed. 
In GCC, though, there is an explicit assert forbidding it: <code>std::vector must have a non-const, non-volatile value_type</code>.</p>\n<p>This is not specific to <code>std::vector</code>; it also applies to <code>std::unordered_*</code>, <code>std::list</code>, <code>std::set</code>, <code>std::deque</code>, <code>std::forward_list</code> or <code>std::multiset</code>.</p>\n<p>The solution? Just do not use const element types:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\">std<span class=\"token double-colon punctuation\">::</span>vector<span class=\"token operator\">&lt;</span><span class=\"token keyword\">int</span><span class=\"token operator\">></span> v<span class=\"token punctuation\">;</span></code></pre>\n<h4 id=\"destructor-of-unique-ptr-requires-declaration-of-contained-type\" tabindex=\"-1\">Destructor of unique_ptr requires declaration of contained type <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h4>\n<p>Assigning <code>nullptr</code> to a <code>std::unique_ptr</code> requires the destructor declaration of the contained type:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token keyword\">class</span> <span class=\"token class-name\">A</span><span class=\"token punctuation\">;</span><br><br>std<span class=\"token double-colon punctuation\">::</span>unique_ptr<span class=\"token operator\">&lt;</span>A<span class=\"token operator\">></span> <span class=\"token function\">GetValue</span><span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span> <span class=\"token punctuation\">{</span><br>    <span class=\"token keyword\">return</span> <span class=\"token keyword\">nullptr</span><span class=\"token punctuation\">;</span><br><span class=\"token punctuation\">}</span></code></pre>\n<p>Usually the fix is just including the full definition of the class, as the default destructor needs the complete type.</p>\n<h2 
id=\"yocto-meta-chromium-layer\" tabindex=\"-1\">Yocto meta-chromium layer <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h2>\n<p>In my case, to verify GCC and libstdc++ support, besides building Chromium on Ubuntu with both, I also maintain a Yocto layer that builds Chromium development releases. I usually try to verify the build in less than a week after each release.</p>\n<p>The layer is available at <a href=\"https://github.com/Igalia/meta-chromium\">github.com/Igalia/meta-chromium</a>.</p>\n<p>I am regularly testing the build using <code>core-image-weston</code> on both Raspberry Pi 4 (64-bit) and Intel x86-64 (using QEMU). There I try both the X11 and Wayland Ozone backends.</p>\n<h2 id=\"wrapping-up\" tabindex=\"-1\">Wrapping up <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/11/06/keep-gcc-running-2023-update/\">#</a></h2>\n<p>I would still recommend moving to the officially supported Chromium toolchain: Clang and libc++. With them, downstreams get a far more tested implementation, more security features, and better sanitizer integration. But, meanwhile, I expect things to keep working, as several downstreams and distributions still ship Chromium on top of the GNU toolchains.</p>\n<p>Even with the GNU toolchain not being officially supported in Chromium, the community has been successful in providing support for both GCC and libstdc++. Thanks to all the contributors!</p>\n",
			"date_published": "2023-11-06T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2023/07/28/speeding-up-v8-heap-snapshot/",
			"url": "https://blogs.igalia.com/dape/2023/07/28/speeding-up-v8-heap-snapshot/",
			"title": "Speeding up V8 heap snapshot",
			"content_html": "<p>My last post, <a href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">Javascript memory profiling with heap snapshot</a>, finished by announcing I would write a follow-up post about several optimizations I implemented that make heap snapshot faster.</p>\n<p>Good news! The post has been accepted on V8.dev! You can read it <a href=\"https://v8.dev/blog/speeding-up-v8-heap-snapshots\">here</a>.</p>\n",
			"date_published": "2023-07-28T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/",
			"url": "https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/",
			"title": "Javascript memory profiling with heap snapshot",
			"content_html": "<p>In both the web and NodeJS worlds, the main runtime for executing program logic is the Javascript runtime. Because of that, a huge number of applications and user interfaces are using it. Like any software component, Javascript code uses system resources, which are not unlimited. We should be careful when using CPU time, application storage, or <strong>memory</strong>.</p>\n<p>In this blog post we are going to focus on the latter.</p>\n<h2 id=\"where-s-my-memory\" tabindex=\"-1\">Where’s my memory! <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>Usually a web page does not allocate a lot of objects, so they do not eat a huge amount of memory on a modern and beefy computer. But we find problems like:</p>\n<ul>\n<li>Oh, but I don’t have a single web page loaded. I like those 40-80 tabs all open for some reason… Well, no, there’s no reason for that! But that’s another topic.</li>\n<li>Many users are not using beefy phones or computers. So using memory has an impact on what they can do.</li>\n</ul>\n<p>The user may not be happy with the web application developer’s implementation choices. And this developer may want to be… more efficient. Do something.</p>\n<h2 id=\"where-s-my-memory-the-cloud-strikes-back\" tabindex=\"-1\">Where’s my memory! The cloud strikes back <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>Now… Think about the cloud providers. And developers implementing software using NodeJS in the cloud. The contract with the provider may limit the available memory… Or charge money depending on the actual usage.</p>\n<p>So… An innocent script that takes 10MB, but is run thousands or millions of times for a few seconds. 
That <strong>is</strong> expensive!</p>\n<p>These developers will need to make their apps… again, more efficient.</p>\n<h2 id=\"a-new-hope\" tabindex=\"-1\">A new hope <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>In performance problems, we usually want to have reliable data about what is happening, and when. Memory problems are no different. We need some observability of the memory usage.</p>\n<p>Chromium and NodeJS share their Javascript runtime, V8, and it provides some tools to help with memory investigation.</p>\n<p>In this post we are going to focus on the family of tools around a V8 feature named <em>heap snapshot</em>, which allows capturing the memory usage at any time in a Javascript execution context.</p>\n<h2 id=\"about-the-heap\" tabindex=\"-1\">About the heap <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p><em>Note:</em> this is a fast recap of how the Javascript heap works; you can skip it if you want.</p>\n<p>In the V8 Javascript runtime, variables, no matter their scope, are allocated on a heap. No matter if it is a number, a string, an object or a function, all of them are stored there. Not only that, in V8 even the code is stored in the heap.</p>\n<p>But, in Javascript, memory is freed lazily, with garbage collection. This means that, when an object is not used anymore, its memory is not immediately disposed. The garbage collector will later explore which objects are disposable, and free them when it is convenient.</p>\n<p>How do we know if an object is still used? The idea is simple: objects are used if they can be accessed. To find out which ones, the runtime will take the root objects, and recursively explore all the object references. Any object that has not been found in that exploration can be discarded.</p>\n<p>OK, and what is a <em>root object</em>? 
In a script it can be the objects in the global context. But also Javascript objects referenced from native objects.</p>\n<p>More details of how the V8 garbage collector works are out of the scope of this post. If you want to learn more, this post should provide a good overview of the current implementation: <a href=\"https://v8.dev/blog/trash-talk\">Trash talk: the Orinoco garbage collector</a>.</p>\n<h2 id=\"heap-snapshot-how-does-it-work\" tabindex=\"-1\">Heap snapshot: how does it work? <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>OK, so we know all the Javascript memory allocation goes through the heap. And, as I said, <em>heap snapshot</em> is a tool to investigate memory problems.</p>\n<p>The name is quite explicit about how it works. <em>Heap snapshot</em> will stop the Javascript execution, traverse the whole heap, analyze it, and dump it in a meaningful format that can be investigated.</p>\n<p>What kind of information does it have?</p>\n<ul>\n<li>Which objects are in the heap, and their types.</li>\n<li>How much memory each object takes.</li>\n<li>The references between them, so we can understand which object is keeping another one from being disposed.</li>\n<li>In some of the tools, it can also store the stack trace of the code that allocated that memory.</li>\n</ul>\n<p>Those snapshots use a JSON format, and they can be opened from the Chromium developer tools for analysis.</p>\n<h2 id=\"heap-snapshots-from-chromium\" tabindex=\"-1\">Heap snapshots from Chromium <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>In the Chromium browser, heap snapshots can be obtained from the Chrome developer tools, accessed through the <em>Inspect</em> right-click menu option.</p>\n<p>This is common to any browser based on Chromium exposing those developer tools locally or 
remotely.</p>\n<p>Once the developer tools are visible, there is the <em>Memory</em> tab:</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/images/Captura-de-pantalla-de-2023-05-12-14-35-17.png\"><img src=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/images/Captura-de-pantalla-de-2023-05-12-14-35-17-580x605.png\" alt=\"\"></a></p>\n<p>We can select three profiling types:</p>\n<ul>\n<li><em>Heap snapshot</em>: it just captures the heap at one specific moment.</li>\n<li><em>Allocation instrumentation on timeline</em>: this records all the allocations over time, in a session, allowing us to check the allocations that happened in a specific time range. This is quite expensive, and suitable only for short profiling sessions.</li>\n<li><em>Allocation sampling</em>: instead of capturing all allocations, this one records them with sampling. Not as accurate as allocation instrumentation, but very lightweight, giving a good approximation for a long profiling session.</li>\n</ul>\n<p>In all cases, we will get a profiling report that we can analyze later.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/images/Captura-de-pantalla-de-2023-05-12-14-36-12.png\"><img src=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/images/Captura-de-pantalla-de-2023-05-12-14-36-12-580x605.png\" alt=\"\"></a></p>\n<h2 id=\"heap-snapshots-from-nodejs\" tabindex=\"-1\">Heap snapshots from NodeJS <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<h3 id=\"using-chromium-dev-tools-ui\" tabindex=\"-1\">Using Chromium dev tools UI <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h3>\n<p>In NodeJS, we can attach the 
Chrome dev tools by passing <code>--inspect</code> through the command line or the <code>NODE_OPTIONS</code> environment variable. This will attach the inspector to NodeJS, but it does not stop execution. The variant <code>--inspect-brk</code> will break into the debugger at the start of the user script.</p>\n<p>How does it work? It will open a port at <code>localhost:9229</code>, which can then be accessed from the Chromium browser URL <code>chrome://inspect</code>. The UI allows users to select which hosts to listen to for Node sessions. The endpoint can be modified using <code>--inspect=[HOST:]PORT</code>, <code>--inspect-brk=[HOST:]PORT</code> or with the specific command line argument <code>--inspect-port=[HOST:]PORT</code>.</p>\n<p>Once you attach the dev tools inspector, you can access the <em>Memory</em> tab as in the case of Chromium.</p>\n<p>There is a problem, though, when we are using <code>NODE_OPTIONS</code>. All instances of NodeJS will take the same parameter, so they will try to attach to the same host and port. And only the first instance will get the port. So it is less useful than we would expect for a session running multiple NodeJS processes (as can happen just running NPM or Yarn).</p>\n<p>Oh, but there are some tricks:</p>\n<ul>\n<li>If you pass port <code>0</code> it will allocate a port (and report it through the console!). So you <strong>can</strong> inspect any arbitrary session (<a href=\"https://github.com/nodejs/node/issues/8080\">more details</a>).</li>\n<li>In POSIX systems such as Linux, the inspector will be enabled if the process receives <code>SIGUSR1</code>. 
This will use the default <code>localhost:9229</code> unless a different setting is specified with <code>--inspect-port=[HOST:]PORT</code> (<a href=\"https://nodejs.org/en/docs/guides/debugging-getting-started\">more details</a>).</li>\n</ul>\n<h3 id=\"using-command-line\" tabindex=\"-1\">Using command line <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h3>\n<p>Also, there are other ways to obtain heap snapshots directly, without using the developer tools UI. NodeJS allows passing different command line parameters to program heap snapshot capture/profiling:</p>\n<ul>\n<li><code>--heapsnapshot-near-heap-limit=N</code> will dump a heap snapshot when the V8 heap is close to its maximum size limit. The <code>N</code> parameter is the number of times it will dump a new snapshot. This is important because, when V8 is reaching the heap limit, it will take measures to free memory through garbage collection, so in a pattern of growing usage we will hit the limit several times.</li>\n<li><code>--heapsnapshot-signal=SIGNAL</code> will dump heap snapshots every time the NodeJS process gets the UNIX signal <code>SIGNAL</code>.</li>\n</ul>\n<p>We can also record a heap profiling session from the start of the process to the end (the same kind of profiling we obtain from Dev Tools using the <em>Allocation sampling</em> option) using the command line option <code>--heap-prof</code>. This will continuously sample the memory allocations, and can be tuned using different command line parameters as documented <a href=\"https://nodejs.org/api/cli.html#--heap-prof\">here</a>.</p>\n<h2 id=\"analysis-of-heap-snapshots\" tabindex=\"-1\">Analysis of heap snapshots <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>The scope of this post is about how to capture heap snapshots in different scenarios. 
But… once you have them… You will want to use that information to actually understand memory usage. Here are some good reads about how to use heap snapshots.</p>\n<p>First, from the Chrome DevTools documentation:</p>\n<ul>\n<li><a href=\"https://developer.chrome.com/docs/devtools/memory-problems/memory-101/\">Memory terminology</a>: it gives a great tour of how memory is allocated, and what heap snapshots try to represent.</li>\n<li><a href=\"https://developer.chrome.com/docs/devtools/memory-problems/\">Fix memory problems</a>: this one provides some examples of how to use different tools in Chromium to understand memory usage, including some heap snapshot and profiling examples.</li>\n<li><a href=\"https://developer.chrome.com/docs/devtools/memory-problems/heap-snapshots/#view_snapshots\">View snapshots</a>: a high-level view of the different heap snapshot and profiling tools.</li>\n<li><a href=\"https://developer.chrome.com/docs/devtools/memory-problems/allocation-profiler/\">How to Use the Allocation Profiler Tool</a>: this one is specific to the allocation profiler.</li>\n</ul>\n<p>And then, from NodeJS, you also have a couple of interesting things:</p>\n<ul>\n<li><a href=\"https://nodejs.org/en/docs/guides/diagnostics/memory/using-heap-snapshot\">Memory Diagnostics</a>: some of this has been covered in this post, but it still has an example of how to find a memory leak using <em>Comparison</em>.</li>\n<li><a href=\"https://github.com/naugtur/node-example-heapdump\">Heap snapshot exercise</a>: this is an exercise including a memory leak that you can hunt with heap snapshot.</li>\n</ul>\n<h2 id=\"recap\" tabindex=\"-1\">Recap <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<ul>\n<li>Memory is a valuable resource that Javascript (both web and NodeJS) application developers may want to profile.</li>\n<li>As usual, when there are resource allocation problems, we need reliable and 
accurate information about what is happening and when.</li>\n<li>V8 heap snapshots provide such information, integrated with Chromium and NodeJS.</li>\n</ul>\n<h2 id=\"next\" tabindex=\"-1\">Next <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>In a follow-up post, I will talk about several optimizations we worked on that make the V8 heap snapshot implementation faster. Stay tuned!</p>\n<h2 id=\"thanks\" tabindex=\"-1\">Thanks! <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/05/18/javascript-memory-profiling-with-heap-snapshot/\">#</a></h2>\n<p>This work was possible thanks to the sponsorship of Igalia and Bloomberg.</p>\n<p><a href=\"https://www.igalia.com\"><picture>\n<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://blogs.igalia.com/dape/img/igalia-logo-white-text.svg\">\n<img src=\"https://blogs.igalia.com/dape/img/igalia_-_500px_-_RGB_-_Feb23-580x210.png\" alt=\"Igalia\">\n</picture></a> <a href=\"https://techatbloomberg.com\"><img src=\"https://blogs.igalia.com/dape/img/Bloomberg-logo-580x117.png\" alt=\"Bloomberg\" class=\"dark-invert\"></a></p>\n",
			"date_published": "2023-05-18T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/",
			"url": "https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/",
			"title": "Stack walk profiling NodeJS in Windows",
			"content_html": "<p>Last year I wrote a series of blog posts (<a href=\"https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/\">1</a>, <a href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">2</a>, <a href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">3</a>) about stack walk profiling Chromium using Windows native tools around <a href=\"https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-\">ETW</a>.</p>\n<p>A fast recap: ETW support for stack walking in V8 allows showing V8 JIT-generated code in the Windows Performance Analyzer. This is a powerful tool to analyze workloads where Javascript execution time is significant.</p>\n<p>In this blog post, I will cover the usage of this very same tool, but to analyze NodeJS execution.</p>\n<h2 id=\"enabling-stack-walk-jit-information-in-nodejs\" tabindex=\"-1\">Enabling stack walk JIT information in NodeJS <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h2>\n<p>In an ideal situation, V8 engines would always generate stack walk information when Windows is profiling. This is something we will want to consider in the future, once we prove enabling it has no cost if we are not in a tracing session.</p>\n<p>Meanwhile, we need to set the V8 flag <code>--enable-etw-stack-walking</code> somehow. 
This will install hooks that, when a profiling session starts, will emit the JIT-generated code addresses, and the information about the source code associated with them.</p>\n<p>For a command line execution of the NodeJS runtime, it is as simple as passing the command line flag:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">node</span> --enable-etw-stack-walking</code></pre>\n<p>This works, enabling ETW stack walking for that specific NodeJS session… Good, but not very useful.</p>\n<h2 id=\"enabling-etw-stack-walking-for-a-session\" tabindex=\"-1\">Enabling ETW stack walking for a session <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h2>\n<p>What’s the problem here? Usually, NodeJS is invoked indirectly through other tools (whether based on NodeJS or not). Some examples are Yarn, NPM, or even some Windows scripts or link files.</p>\n<p>We could tune all the existing launching scripts to pass <code>--enable-etw-stack-walking</code> to the NodeJS runtime when it is called. But that is not very convenient.</p>\n<p>There is a better way though, just using the <code>NODE_OPTIONS</code> environment variable. This way, stack walking support can be enabled for all NodeJS calls in a shell session, or even system wide.</p>\n<h2 id=\"bad-news-and-good-news\" tabindex=\"-1\">Bad news and good news <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h2>\n<p>Some bad news: NodeJS was refusing <code>--enable-etw-stack-walking</code> in <code>NODE_OPTIONS</code>. There is a filter for which V8 options it accepts (mostly for security purposes), and ETW support was not considered.</p>\n<p>Good news? I implemented <a href=\"https://github.com/nodejs/node/pull/46203\">a fix adding the flag to the list accepted by NODE_OPTIONS</a>. 
It has been landed already, and it is available from NodeJS 19.6.0. Unfortunately, if you are using an older version, then you may need to backport the patch.</p>\n<h2 id=\"using-it-linting-typescript\" tabindex=\"-1\">Using it: linting TypeScript <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h2>\n<p>To explain how this can be used, I will analyse <a href=\"https://eslint.org/\">ESLint</a> on a known workload: <a href=\"https://www.typescriptlang.org/\">TypeScript</a>. For simplicity, we are using the <code>lint</code> task provided by TypeScript.</p>\n<p>This example assumes the usage of <em>Git Bash</em>.</p>\n<p>First, clone <em>TypeScript</em> from <em>GitHub</em>, and go to the cloned copy:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">git</span> clone https://github.com/microsoft/TypeScript.git<br><span class=\"token builtin class-name\">cd</span> TypeScript</code></pre>\n<p>Then, install <em>hereby</em> and the dependencies of <em>TypeScript</em>:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token function\">npm</span> <span class=\"token function\">install</span> <span class=\"token parameter variable\">-g</span> hereby<br><span class=\"token function\">npm</span> ci</code></pre>\n<p>Now, we are ready to profile the <code>lint</code> task. First, set <code>NODE_OPTIONS</code>:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\"><span class=\"token builtin class-name\">export</span> <span class=\"token assign-left variable\">NODE_OPTIONS</span><span class=\"token operator\">=</span><span class=\"token string\">\"--enable-etw-stack-walking\"</span></code></pre>\n<p>Then, launch <a href=\"https://github.com/google/UIforETW\">UIForETW</a>. This tool simplifies capturing traces, and will provide good defaults for Javascript ETW analysis. 
It provides a very useful keyboard shortcut, Ctrl+Win+R, to start and then stop a recording.</p>\n<p>Switch to the <em>Git Bash</em> terminal and follow this sequence:</p>\n<ul>\n<li>Type (<strong>without pressing Enter</strong>): <code>hereby lint</code></li>\n<li>Press Ctrl+Win+R to start recording. Wait 3-4 seconds, as recording does not start immediately.</li>\n<li>Press Enter. <em>ESLint</em> will traverse all the <em>TypeScript</em> code.</li>\n<li>Press Ctrl+Win+R again to stop recording.</li>\n</ul>\n<p>After a few seconds, <em>UIForETW</em> will automatically open the trace in Windows Performance Analyzer. Thanks to setting <code>NODE_OPTIONS</code>, all the child processes of the parent <code>node.exe</code> execution also have stack walk information.</p>\n<h3 id=\"randomascii-inclusive-stack-analysis\" tabindex=\"-1\">Randomascii inclusive (stack) analysis <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h3>\n<p>Focusing on <code>node.exe</code> instances, in the <em>Randomascii inclusive (stack)</em> view, we can see where time is spent for each of the <code>node.exe</code> processes. If I take the biggest one (the longest of the benchmarks I executed), I get some nice insights.</p>\n<p>The worker threads take 40% of the CPU processing. What is happening there? I basically see JIT compilation and garbage collection concurrent marking. V8 offloads that work, so it benefits from a multicore machine.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/worker-thread-compile-and-gc.png\"><img src=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/worker-thread-compile-and-gc-580x197.png\" alt=\"\"></a></p>\n<p>Most of the work happens in the main thread, as expected. 
And most of the time is spent parsing and applying the <em>lint</em> rules (roughly half each).</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/main-thread-rules-and-parse.png\"><img src=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/main-thread-rules-and-parse-580x187.png\" alt=\"\"></a></p>\n<p>If we go deeper into the rules processing, we can see which rules are more expensive.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/lint-rules-error-handler.png\"><img src=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/lint-rules-error-handler-580x284.png\" alt=\"\"></a></p>\n<h3 id=\"memory-allocation\" tabindex=\"-1\">Memory allocation <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h3>\n<p>In the total commit view, we can observe the memory usage pattern of the process running <em>ESLint</em>. For most of the workload, allocation grows steadily (to over 2 GB of RAM). Then there is a first garbage collection and, a bit later, the process finishes and all the memory is deallocated.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/memory-committed.png\"><img src=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/images/memory-committed-580x141.png\" alt=\"\"></a></p>\n<h3 id=\"more-findings\" tabindex=\"-1\">More findings <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h3>\n<p>At first sight, I observe that we keep creating the rule objects throughout the whole execution of <em>ESLint</em>. What does that mean? Could we run faster by reusing them? 
I can also observe that a big part of the main thread time ends up in leaf functions doing garbage collection.</p>\n<p>This is a good start! You can see how ETW can give you insights into what is happening and how much time it takes, and even correlate that with memory usage, file I/O, etc.</p>\n<h2 id=\"builtins-fix\" tabindex=\"-1\">Builtins fix <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h2>\n<p>Using NodeJS as it is today will still show many missing frames in the stack. I could run these tests, and do a useful analysis, because I applied a very recent patch I landed in V8.</p>\n<p>Before the fix, we would have this sequence:</p>\n<ul>\n<li>Enable ETW recording.</li>\n<li>Run several NodeJS tests.</li>\n<li>Each of the tests creates one or more JS contexts.</li>\n<li>Each context then sends to ETW the information of any code compiled by the JIT.</li>\n</ul>\n<p><strong>But</strong> there was a problem: any JS context already has a lot of pre-compiled code associated with it: builtins and V8 snapshot code. Those were missing from the captured ETW traces.</p>\n<p><a href=\"https://chromium-review.googlesource.com/c/v8/v8/+/4241160\">The fix</a>, as said, has already landed in V8, and will hopefully be available soon in future NodeJS releases.</p>\n<h2 id=\"wrapping-up\" tabindex=\"-1\">Wrapping up <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2023/03/14/stack-walk-profiling-nodejs-in-windows/\">#</a></h2>\n<p>There is more work to do:</p>\n<ul>\n<li>WASM is still not supported.</li>\n<li>Ideally, we would want to have <code>--enable-etw-stack-walking</code> set by default, as the impact while not tracing is minimal.</li>\n</ul>\n<p>In any case, after these new fixes, capturing ETW stack walks of code executed by the NodeJS runtime is a bit easier. I hope this brings some joy to your performance research.</p>\n<p>One last thing! 
My work on these fixes was possible thanks to the sponsorship of <a href=\"https://www.igalia.com\">Igalia</a> and <a href=\"https://www.bloomberg.com/company/values/tech-at-bloomberg/\">Bloomberg</a>.</p>\n",
			"date_published": "2023-03-14T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/",
			"url": "https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/",
			"title": "Native call stack profiling (3/3): 2022 work in V8",
			"content_html": "<p>This is the last blog post of the series. In the <a href=\"https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/\">first post</a> I presented some concepts of call stack profiling, and why it is useful. In the <a href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">second post</a> I reviewed Event Tracing for Windows, the native tool for this purpose, and how it can be used to trace Chromium.</p>\n<p>This last post will review the work done in 2022 to improve V8 support for call stack profiling in Windows.</p>\n<p>I worked on several of the fixes this year. This work has been sponsored by <a href=\"https://www.bloomberg.com/company/values/tech-at-bloomberg/\">Bloomberg</a> and <a href=\"https://www.igalia.com\">Igalia</a>.</p>\n<p>This work was presented as a <a href=\"https://youtu.be/39qvqvuJxT8\">lightning talk in BlinkOn 17</a>.</p>\n<h2 id=\"some-bad-news-to-start-and-a-fix\" tabindex=\"-1\">Some bad news to start… and a fix <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>In March I started working on a report that Windows event traces were not properly resolving JavaScript symbols.</p>\n<p>After some bisecting I found this was a regression introduced by <a href=\"https://chromium.googlesource.com/chromium/src/+/dcd255e349c35228be35f2f43232a2b1c5605e80\">this commit</a>, which moved the <code>--js-flags</code> handling to a later stage. 
This happened to be after V8 initialization, so the code that would enable instrumentation would not consider the flag.</p>\n<p><a href=\"https://chromium-review.googlesource.com/c/chromium/src/+/3529090\">The fix I implemented</a> moved flags processing to happen right before platform initialization, so instrumentation worked again.</p>\n<h2 id=\"simplified-method-names\" tabindex=\"-1\">Simplified method names <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>Another fix I worked on improved method name generation. Windows tracing would show a quite redundant description of each stack level, which made analysis more difficult.</p>\n<p>Before my work, the entries would look like this:</p>\n<pre><code>string-tagcloud.js!LazyCompile:~makeTagCloud- string-tagcloud.js:231-232:22 0x0\n</code></pre>\n<p>After <a href=\"https://chromium-review.googlesource.com/c/v8/v8/+/3763862\">my change</a>, they look like this:</p>\n<pre><code>string-tagcloud.js!makeTagCloud-231:22 0x0\n</code></pre>\n<p>The fix adds a specific implementation for ETW. Instead of reusing the method name generation that is also used for Perf, it provides an ETW-specific implementation that takes into account what the ETW backend already exports, to avoid redundancy. It also takes advantage of the existing method <code>DebugNameCStr</code> to retrieve inferred method names when no name is available.</p>\n<h2 id=\"problem-with-javascript-code-compiled-before-tracing\" tabindex=\"-1\">Problem with Javascript code compiled before tracing <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>The way V8 ETW support worked was that, when tracing was ongoing and a new function was compiled by the JIT, its information would be emitted to ETW.</p>\n<p>This implied a big problem. 
If a function was compiled by V8 before tracing started, ETW would not properly resolve the function names, so, when analyzing the traces, it would not be possible to know which function was being called in those samples.</p>\n<p><a href=\"https://chromium-review.googlesource.com/c/v8/v8/+/3705541\">The solution</a> is conceptually simple. When tracing starts, V8 traverses the live JavaScript contexts and emits all their symbols. This adds noise to the tracing, as it is an expensive process. But, as it happens at the start of the tracing, it is very easy to isolate in the captured trace.</p>\n<h2 id=\"and-a-performance-fix\" tabindex=\"-1\">And a performance fix <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>I also fixed a <a href=\"https://chromium-review.googlesource.com/c/v8/v8/+/3669698\">huge performance penalty when tracing code from snapshots</a>, caused by repeatedly calculating the end line numbers of the code instead of caching them.</p>\n<h2 id=\"initialization-improvements\" tabindex=\"-1\">Initialization improvements <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>Paolo Severini improved the initialization code, making ETW session initialization lighter and ensuring that tracing starts and stops correctly.</p>\n<h2 id=\"benchmarking-etw-overhead\" tabindex=\"-1\">Benchmarking ETW overhead <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>After all these changes I did some benchmarking with and without ETW. 
The goal was to know whether it would be reasonable to enable ETW support in V8 by default, without requiring any JS flag.</p>\n<p>With SunSpider in a Windows 64-bit build:</p>\n<p><a href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/images/etw-and-interpreted-frames-native-stack-overhead.svg\"><img src=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/images/etw-and-interpreted-frames-native-stack-overhead.svg\" alt=\"Image showing slight overhead with ETW and bigger one with interpreted frames.\"></a></p>\n<p>Other benchmarks I tried gave similar numbers.</p>\n<p>So far, on the 64-bit architecture I could not detect any overhead from enabling ETW support when recording is not happening, and the cost when it is enabled is very low.</p>\n<p>However, when combined with interpreted frames native stack, the overhead is close to 10%. This was expected, as explained <a href=\"https://v8.dev/docs/linux-perf#v8-linux-perf-flags\">here</a>.</p>\n<p>So, good news so far. We still need to benchmark the 32-bit architecture to see if the impact is similar.</p>\n<h2 id=\"try-it\" tabindex=\"-1\">Try it! <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>The work described in this post is available in V8 10.9.0. I hope you enjoy the improvements, and especially hope these tools help in the investigation of performance issues around JavaScript, in NodeJS, Google Chrome or Microsoft Edge.</p>\n<h2 id=\"what-next\" tabindex=\"-1\">What next? 
<a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>There are still a lot of things to do, and I hope I can continue working on improvements to V8 ETW support next year:</p>\n<ul>\n<li>First, finishing the benchmarks, and considering enabling ETW instrumentation by default in V8 and derivatives.</li>\n<li>Adding full support for WASM.</li>\n<li>Bugfixing, as we still see segments missing in certain benchmarks.</li>\n<li>Creating specific events for when the JIT information of already compiled symbols is sent to ETW, to make it easier to differentiate it from the code compiled while recording a trace.</li>\n</ul>\n<p>If you want to track the work, keep an eye on <a href=\"https://bugs.chromium.org/p/v8/issues/detail?id=11043\">V8 issue 11043</a>.</p>\n<h2 id=\"the-end\" tabindex=\"-1\">The end <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/12/21/native-call-stack-profiling-3-3-2022-work-in-v8/\">#</a></h2>\n<p>This is the last post in the series.</p>\n<p>Thanks to Bloomberg and Igalia for sponsoring my work in ETW Chromium integration improvements!</p>\n",
			"date_published": "2022-12-21T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/",
			"url": "https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/",
			"title": "Native call stack profiling (2/3): Event Tracing for Windows and Chromium",
			"content_html": "<p>In the <a href=\"https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/\">last blog post</a>, I introduced call stack profiling, why it is useful, and the benefits of system-wide support. This new blog post will talk about Windows native call stack tracing, and how it is integrated into Chromium.</p>\n<h2 id=\"event-tracing-for-windows-etw\" tabindex=\"-1\">Event Tracing for Windows (ETW) <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">#</a></h2>\n<p><a href=\"https://learn.microsoft.com/en-us/windows-hardware/drivers/devtest/event-tracing-for-windows--etw-\">Event Tracing for Windows</a>, usually known by the acronym ETW, is a Windows kernel-based tool for logging kernel and application events to a file.</p>\n<p>A good description of its architecture is available at <a href=\"https://learn.microsoft.com/en-us/windows/win32/etw/about-event-tracing\">Microsoft Learn: About Event Tracing</a>.</p>\n<p>Essentially, it is an efficient event capturing tool, in some ways similar to <a href=\"https://lttng.org/\">LTTng</a>. Its event recording stage is as lightweight as possible, so that processing the collected data impacts the results as little as possible, reducing the <a href=\"https://en.wikipedia.org/wiki/Observer_effect_(information_technology)\">observer effect</a>.</p>\n<p>The main participants are:</p>\n<ul>\n<li>Providers: kernel components (including device drivers) or applications that emit log events.</li>\n<li>Controllers: tools that control when a recording session starts and stops, which providers to record, and what each provider is expected to log. Controllers also decide where to dump the recorded data (typically a file).</li>\n<li>Consumers: tools that can read and analyze the recorded data, and combine it with system information (e.g. debugging information). 
Consumers will usually get the data from previously recorded files, but it is also possible to consume tracing information in real time.</li>\n</ul>\n<p>What about call stack profiling? ETW <a href=\"https://learn.microsoft.com/en-us/previous-versions/windows/desktop/xperf/stack-walking\">supports call stack sampling</a>, allowing call stacks to be captured when certain events happen, and associating each call stack with its event. <a href=\"https://randomascii.wordpress.com/\">Bruce Dawson</a> has written <a href=\"https://randomascii.wordpress.com/2012/05/08/the-lost-xperf-documentationcpu-sampling/\">a fantastic blog post</a> about the topic.</p>\n<h2 id=\"chromium-call-stack-profiling-in-windows\" tabindex=\"-1\">Chromium call stack profiling in Windows <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">#</a></h2>\n<p>Chromium provides support for call stack profiling. This is done at different levels of the stack:</p>\n<ul>\n<li>It allows building with frame pointers, so CPU profile samplers can properly capture the full call stack.</li>\n<li>V8 can generate symbol information for JIT-compiled code. This is supported for ETW (and also for Linux Perf).</li>\n</ul>\n<h3 id=\"compilation\" tabindex=\"-1\">Compilation <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">#</a></h3>\n<p>On any platform, compilation will usually benefit from the GN flag <code>enable_profiling=true</code>. This enables frame pointer support. 
On Windows, it will also enable the generation of function information for code generated by V8.</p>\n<p>Also, at least <code>symbol_level=1</code> should be added, so function names from compiled code are available.</p>\n<h3 id=\"chrome-startup\" tabindex=\"-1\">Chrome startup <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">#</a></h3>\n<p>To enable generation of V8 profiling information on Windows, these flags should be passed to Chrome on launch:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">chrome --js-flags<span class=\"token operator\">=</span><span class=\"token string\">\"--enable-etw-stack-walking --interpreted-frames-native-stack\"</span></code></pre>\n<p><code>--enable-etw-stack-walking</code> will emit information about the functions compiled by the V8 JIT engine, so they can be recorded while sampling the stack.</p>\n<p><code>--interpreted-frames-native-stack</code> will show the frames of interpreted code in the native stack, so external profilers such as ETW can properly show them in the profiling samples.</p>\n<h3 id=\"recording\" tabindex=\"-1\">Recording <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">#</a></h3>\n<p>Then, a session of the workload to analyze can be captured with <a href=\"https://learn.microsoft.com/en-us/windows-hardware/test/wpt/windows-performance-recorder\">Windows Performance Recorder</a>.</p>\n<p>An alternative tool with specific Chromium support, <a href=\"https://github.com/google/UIforETW\">UIForETW</a>, can be used too. Its main advantage is that it allows selecting specific <a href=\"https://www.chromium.org/developers/how-tos/trace-event-profiling-tool/\">Chromium tracer</a> categories, which will be emitted in the same trace. 
Its author, Bruce Dawson, has <a href=\"https://randomascii.wordpress.com/2015/04/14/uiforetw-windows-performance-made-easier/\">explained very well</a> how to use it.</p>\n<h3 id=\"analysis\" tabindex=\"-1\">Analysis <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">#</a></h3>\n<p>For analysis, the tool <a href=\"https://learn.microsoft.com/en-us/windows-hardware/test/wpt/windows-performance-analyzer\">Windows Performance Analyzer (WPA)</a> can be used. Both UIForETW and Windows Performance Recorder will offer to open the captured trace for analysis at the end of the capture.</p>\n<p>Before starting the analysis, in WPA, add the paths where the .PDB files with debugging information are available.</p>\n<p>Then, select <strong>Computation/CPU Usage (Sampled)</strong>:</p>\n<p><a href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/images/wpa_cpu_usage_sampled.png\"><img src=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/images/wpa_cpu_usage_sampled.png\" alt=\"\"></a></p>\n<p>From the available charts, we are interested in the ones providing stack walk information:</p>\n<p><a href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/images/wpa_cpu_usage_sampled_stack_examples.png\"><img src=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/images/wpa_cpu_usage_sampled_stack_examples.png\" alt=\"\"></a></p>\n<h3 id=\"next\" tabindex=\"-1\">Next <a class=\"header-anchor\" 
href=\"https://blogs.igalia.com/dape/2022/11/29/native-call-stack-profiling-2-3-event-tracing-for-windows-and-chromium/\">#</a></h3>\n<p>In the last post of this series, I will present the work done in 2022 to improve V8 support for Windows ETW.</p>\n",
			"date_published": "2022-11-29T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/",
			"url": "https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/",
			"title": "Native call stack profiling (1/3): introduction",
			"content_html": "<p>This week I presented a lightning talk at <a href=\"https://www.chromium.org/events/blinkon-17/\">BlinkOn 17</a>. There I talked about the work to improve native stack profiling support in Windows.</p>\n<p>This post starts a series where I will provide more context and details to the presentation.</p>\n<h2 id=\"why-callstack-profiling\" tabindex=\"-1\">Why callstack profiling <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/\">#</a></h2>\n<p>First, a definition:</p>\n<p><strong>Callstack profiling</strong>: a performance analysis technique that periodically samples the call stacks of all threads for a specific workload.</p>\n<p>Why is it useful? It provides a better understanding of performance problems, especially if they are caused by CPU-bound bottlenecks.</p>\n<p>As we sample the full stack for each thread, we capture a wealth of information:</p>\n<ul>\n<li>Which functions are using more CPU directly.</li>\n<li>As we capture the full stacktrace, we also know which functions account for more CPU usage, even if indirectly through the calls they make.</li>\n</ul>\n<p>But it is not only useful for CPU usage. It will also capture when a method is waiting for something (e.g. because of networking, or a semaphore).</p>\n<p>The provided information is useful for an initial analysis of the problem, as it gives a high-level view of where the application could be spending its time. But it will also be useful in later stages of the analysis, and even for comparing different implementations and considering possible changes.</p>\n<h2 id=\"how-does-it-work\" tabindex=\"-1\">How does it work? 
<a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/\">#</a></h2>\n<p>For call stack sampling, we need some infrastructure to properly capture and traverse the call stack of each thread.</p>\n<p>At the compilation stage, information is added for function names and frame pointers. This allows the actual names, and even the captured lines of code, to be resolved later for a specific stack.</p>\n<p>At the runtime stage, function information is also required for generated code: e.g., in a web browser, the JavaScript code that is compiled at runtime.</p>\n<p>Then, every sample will extract the call stack of all the threads of all the analysed processes. This happens periodically, at the rate established by the profiling tool.</p>\n<h2 id=\"system-wide-native-callstack-profiling\" tabindex=\"-1\">System wide native callstack profiling <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/\">#</a></h2>\n<p>When possible, sampling the call stacks of the full system can be beneficial for the analysis.</p>\n<p>First, we may want to include system libraries and other dependencies of our component in the analysis. Also, system analyzers can provide other metrics that give better context to the analysed workload (network or CPU load, memory usage, swappiness, …).</p>\n<p>In the end, many problems are not bound to a single component, so capturing the interaction with other components can be useful.</p>\n<h2 id=\"next\" tabindex=\"-1\">Next <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2022/11/16/native-call-stack-profiling-1-3-introduction/\">#</a></h2>\n<p>In the next blog posts in this series, I will present native stack profiling for Windows, and how it is integrated with Chromium.</p>\n",
			"date_published": "2022-11-16T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/",
			"url": "https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/",
			"title": "3 events in a month",
			"content_html": "<p>As part of my job at Igalia, I have been attending 2-3 events per year. My role, mostly as a Chromium stack engineer, does not usually demand many conference trips, but they are quite important as an opportunity to meet collaborators and project mates.</p>\n<p>This month has been a bit different, as I ended up visiting the LG Silicon Valley Lab in Santa Clara, California, the Igalia headquarters in A Coruña, and Dresden. It was mostly because I got involved in the discussions for the web runtime implementation being developed by Igalia for AGL.</p>\n<h1 id=\"agl-f2f-at-lgsvl\" tabindex=\"-1\">AGL f2f at LGSVL <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/\">#</a></h1>\n<p>It is always great to visit <a href=\"http://lgsvl.com\">LG Silicon Valley Lab</a> (Santa Clara, US), where my team is located. I have been participating for 6 years in the development of the <a href=\"http://webosose.org/\">webOS</a> web stack that you can most prominently enjoy in LG webOS smart TVs.</p>\n<p>One of the goals at AGL for the next months is providing an efficient web runtime. At LGSVL we have been developing and maintaining WAM, the webOS web runtime. And as it was released with an open source license in webOS Open Source Edition, it looked like a great match for AGL. So my team did a proof of concept in May, and it was successful. At the same time, Igalia has been working on porting the Chromium browser to AGL. 
So, after some discussions, AGL approved sponsoring my company, Igalia, to port the LG webOS web runtime to AGL.</p>\n<p>As LGSVL was hosting <a href=\"https://wiki.automotivelinux.org/agl-distro/sep2018-f2f\">the September 2018 AGL f2f meeting</a>, Igalia sponsored my trip to the event.</p>\n<figure>\n<a href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/img_4005.jpg\"><img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/img_4005-580x273.jpg\"></a>\n<figcaption>AGL f2f Santa Clara 2018, AGL wiki <a href=\"https://creativecommons.org/licenses/by/4.0/\">CC BY 4.0</a></figcaption>\n</figure>\n<p>So we took the opportunity to continue discussions and make progress in the development of the WAM AGL port. And, as we expected, it was quite beneficial for unblocking tasks like the AGL app framework security integration, and the support of AGL's latest official release, Funky Flounder. Julie Kim from Igalia attended the event too, and presented <a href=\"https://wiki.automotivelinux.org/_media/agl-distro/the_progress_of_the_chromium_wayland_project.pdf\">an update on the progress of the Ozone Wayland port</a>.</p>\n<p>The organization and the venue were great. Thanks to LGSVL!</p>\n<p><a href=\"http://lgsvl.com\"><img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/lgsvl_logo.png\" alt=\"\"></a></p>\n<h1 id=\"web-engines-hackfest-2018-at-igalia\" tabindex=\"-1\">Web Engines Hackfest 2018 at Igalia <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/\">#</a></h1>\n<p>The next trip was definitely closer. 
Just a 90-minute drive to our Igalia headquarters in A Coruña.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/20170802_113339-e1540197571674.jpg\"><img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/20170802_113339-e1540197571674-580x320.jpg\" alt=\"\"></a></p>\n<p>Igalia has been organizing this event since 2009. It is a cross-web-engine event, where engineers from Mozilla, Chromium and WebKit have been meeting yearly to do some hacking, and discuss the future of the web.</p>\n<p>This time my main interest was participating in the discussions about the effort by Igalia and Google to support Wayland natively in Chromium. I was pleased to learn that around 90% of the work had already landed in upstream Chromium. Great news, as it will smooth the integration of Chromium for embedders using Ozone Wayland, like webOS. It was also great to learn about the work to improve GPU performance by reducing the number of copies required for painting web contents.</p>\n<figure>\n<a href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/31222451768_9d85cd2760_o-e1540294341646.jpg\"><img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/31222451768_9d85cd2760_o-e1540294341646-580x419.jpg\"></a>\n<figcaption>Web Engines Hackfest 2018 <a href=\"https://creativecommons.org/licenses/by-sa/2.0/\">CC BY-SA 2.0</a></figcaption>\n</figure>\n<p>Other topics of my interest:</p>\n<ul>\n<li>We did a follow-up to the discussion at the last BlinkOn about the barriers for Chromium embedders, sharing the experiences of maintaining a downstream Chromium tree.</li>\n<li>I joined the discussions about the future of WebKitGTK, in particular the adaptation of the graphics pipeline to the upcoming GTK+ 4.</li>\n</ul>\n<p>As usual, the organization was great. We had 70 people at the event, and it was awesome to see all the activity in the office, and so many talented engineers in the same place. 
Thanks Igalia!</p>\n<figure>\n<p><a href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/30160192427_e277b24cdc_o.jpg\"><img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/30160192427_e277b24cdc_o-580x387.jpg\" alt=\"\"></a></p>\n<figcaption>\n<p>Web Engines Hackfest 2018 <a href=\"https://creativecommons.org/licenses/by-sa/2.0/\">CC BY-SA 2.0</a></p>\n</figcaption>\n</figure>\n<h1 id=\"agl-all-members-meeting-europe-2018-at-dresden\" tabindex=\"-1\">AGL All Members Meeting Europe 2018 at Dresden <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/\">#</a></h1>\n<p>The last event in barely a month was my first visit to the beautiful city of Dresden (Germany).</p>\n<p><a href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/20181019_150530.jpg\"><img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/20181019_150530-580x435.jpg\" alt=\"\"></a></p>\n<p>The goal was continuing the discussions for the projects Igalia is developing for the AGL platform: upstream Chromium native Wayland support, and the WAM web runtime port. We also had a booth showcasing that work, as well as our lightweight <a href=\"https://webkit.org/wpe/\">WebKit port WPE</a>, which was, as usual, attracting interest with its 60 fps video playback performance on a Raspberry Pi 2.</p>\n<p>I co-presented with Steve Lemke a talk about the automotive activities at LGSVL, taking the opportunity to give an update on the status of the WAM web runtime work for AGL (<a href=\"https://schd.ws/hosted_files/aglmmeu2018/35/AGL%20AMM%202018%20Dresden%20LG%20WebAppManager.pdf\">slides here</a>). 
The project is progressing and Igalia should soon be landing the first results of the work.</p>\n<figure>\n<p><a href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/20181017_094812-e1540198579290.jpg\"><img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/20181017_094812-e1540198579290-580x630.jpg\" alt=\"\"></a></p>\n<figcaption>Igalia booth at AGL AMM Europe 2018</figcaption>\n</figure>\n<p>It was great to meet all these people, and discuss in person the architecture proposal for the web runtime, unblocking several tasks and offering more detailed planning for the next months.</p>\n<p>Dresden was great, and I can’t help highlighting the reception and guided tour of the <a href=\"https://www.verkehrsmuseum-dresden.de/en/\">Dresden Transportation Museum</a>. Great choice by the organization. Thanks to the Linux Foundation and the AGL project community!</p>\n<p><a href=\"https://www.automotivelinux.org/\"><picture>\n<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://blogs.igalia.com/dape/img/logo_agl_horizontal_white.svg\">\n<img src=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/images/agl_logo-e1540295416182.png\" alt=\"AGL logo\">\n</picture></a></p>\n<h1 id=\"next-chrome-dev-summit-2018\" tabindex=\"-1\">Next: Chrome Dev Summit 2018 <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/10/25/3-events-in-a-month/\">#</a></h1>\n<p>So… what’s next? I will be visiting San Francisco in November for the <a href=\"https://developer.chrome.com/devsummit/\">Chrome Dev Summit</a>.</p>\n<p>I can only thank Igalia for sponsoring my attendance at these events. They are quite important for keeping things moving forward. And it is also really nice to meet friends and collaborators. 
Thanks Igalia!</p>\n<p><a href=\"https://www.igalia.com\"><picture>\n<source media=\"(prefers-color-scheme: dark)\" srcset=\"https://blogs.igalia.com/dape/img/igalia-logo-white-text.svg\">\n<img src=\"https://blogs.igalia.com/dape/img/igalia_-_500px_-_RGB_-_Feb23-580x210.png\" alt=\"Igalia\">\n</picture></a></p>\n",
			"date_published": "2018-10-25T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/",
			"url": "https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/",
			"title": "Updated Chromium Legacy Wayland Support",
			"content_html": "<h1 id=\"introduction\" tabindex=\"-1\">Introduction <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/\">#</a></h1>\n<p>The future Ozone Wayland backend is still not ready for shipping. So we are announcing the release of an updated Ozone Wayland backend for Chromium, based on the implementation provided by Intel. It is rebased on top of the latest stable Chromium release and you can find it in <a href=\"https://github.com/lgsvl/chromium-src\">my team’s GitHub</a>. Hope you will appreciate it.</p>\n<h1 id=\"official-chromium-on-linux-desktop-nowadays\" tabindex=\"-1\">Official Chromium on Linux desktop nowadays <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/\">#</a></h1>\n<p>The Linux desktop is progressively migrating to use Wayland as the display server. It is the default option in Fedora, Ubuntu <s>and, more importantly, the next Ubuntu Long Term Support release will ship Gnome Shell Wayland display server by default</s> (P.S. since this post was originally written, Ubuntu has delayed the Wayland adoption for LTS).</p>\n<p>As of today, Chromium support for the Linux desktop is based on X11. This means it will natively interact with an X server and with its XDG extensions for displaying the contents and receiving user events. But, as said, the next generation of the Linux desktop will be using Wayland display servers instead of X11. How does it work? Using the XWayland server, a full X11 server built on top of the Wayland protocol. Ok, but that has an impact on performance. Chromium needs to communicate and paint to X11-provided buffers, and then those buffers need to be shared with the Wayland display server. And user events need to be proxied from the Wayland display server through the XWayland server and the X11 protocol. It requires more resources: more memory, CPU, and GPU. 
And it adds more latency to the communication.</p>\n<h1 id=\"ozone\" tabindex=\"-1\">Ozone <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/\">#</a></h1>\n<p>Chromium officially supports several platforms (Windows, Android, Linux desktop, iOS). But it provides abstractions for porting it to other platforms.</p>\n<p>The set of abstractions is named Ozone (<a href=\"https://chromium.googlesource.com/chromium/src/+/lkcr/docs/ozone_overview.md\">more info here</a>). It allows implementing one or more platform components with the hooks for properly integrating with a platform that is not in the set of officially supported targets. Among other things it provides abstractions for:</p>\n<ul>\n<li>Obtaining accelerated surfaces.</li>\n<li>Creating and obtaining windows to paint the contents.</li>\n<li>Interacting with the desktop cursor.</li>\n<li>Receiving user events.</li>\n<li>Interacting with the window manager.</li>\n</ul>\n<h1 id=\"chromium-and-wayland-2014-2016\" tabindex=\"-1\">Chromium and Wayland (2014-2016) <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/\">#</a></h1>\n<p>Even if Wayland was not used on the Linux desktop, a bunch of embedded devices have been using Wayland for their display server for quite some time. LG has been shipping a full Wayland experience on the webOS TV products.</p>\n<p>In the last 4 years, Intel has been providing <a href=\"https://github.com/intel/ozone-wayland\">an implementation of the Ozone abstractions for Wayland</a>. It was amazing work that allowed running the Chromium browser on top of a Wayland compositor. 
This backend has been the de facto standard for running the Chromium browser on all these Wayland-enabled embedded devices.</p>\n<p>But the development of this implementation mostly stopped around Chromium 49 (though rebases on top of Chromium 51 and 53 have been provided).</p>\n<h1 id=\"chromium-and-wayland-2018\" tabindex=\"-1\">Chromium and Wayland (2018+) <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/\">#</a></h1>\n<p>Since the end of 2016, <a href=\"https://www.igalia.com/\">Igalia</a> has been involved in several initiatives to allow Chromium to run natively on Wayland. Even if this work is based on the original Ozone Wayland backend by Intel, it is mostly a rewrite and adaptation to the future graphics architecture in Chromium (Viz and Mus).</p>\n<p>This is being developed in the Igalia GitHub, downstream, though it is expected to land upstream progressively. Hopefully, at some point in 2018, this new backend will be fully ready for shipping products with it. But we are still not there. <s>Some major missing parts are Wayland TextInput protocol and content shell support</s> (P.S. 
since this was written, both TextInput and content shell support are working now!).</p>\n<p>More information on these posts from the authors:</p>\n<ul>\n<li><a href=\"https://blogs.igalia.com/tonikitoo/2016/06/14/understanding-chromiums-runtime-ozone-platform-selection/\">June 2016: Understanding Chromium’s runtime ozone platform selection (by Antonio Gomes)</a>.</li>\n<li><a href=\"http://frederic-wang.fr/analysis-of-ozone-wayland.html\">October 2016: Analysis of Ozone Wayland (by Frédéric Wang)</a>.</li>\n<li><a href=\"https://blogs.igalia.com/tonikitoo/2016/11/14/chromium-ozone-wayland-and-beyond/\">November 2016: Chromium, ozone, wayland and beyond (by Antonio Gomes)</a>.</li>\n<li><a href=\"http://frederic-wang.fr/chromium-on-r-car-m3.html\">December 2016: Chromium on R-Car M3 &amp; AGL/Wayland (by Frédéric Wang)</a>.</li>\n<li><a href=\"http://frederic-wang.fr/mus-window-system.html\">February 2017: Mus Window System (by Frédéric Wang)</a>.</li>\n<li><a href=\"https://blogs.igalia.com/tonikitoo/2017/05/17/chromium-musozone-update-h12017-wayland-x11/\">May 2017: Chromium Mus/Ozone update (H1/2017): wayland, x11 (by Antonio Gomes)</a>.</li>\n<li><a href=\"https://blogs.igalia.com/msisov/2017/06/09/running-chromium-m60-on-r-car-m3-board-aglwayland/\">June 2017: Running Chromium m60 on R-Car M3 board &amp; AGL/Wayland (by Maksim Sisov)</a>.</li>\n</ul>\n<h1 id=\"releasing-legacy-ozone-wayland-backend-2017-2018\" tabindex=\"-1\">Releasing legacy Ozone Wayland backend (2017-2018) <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2018/03/21/updated-chromium-legacy-wayland-support/\">#</a></h1>\n<p>Ok, so new Wayland backend is still not ready in some cases, and the old one is unmaintained. For that reason, <a href=\"http://www.lg.com/\">LG</a> is announcing the release of an updated legacy Ozone Wayland backend. It is essentially the original Intel backend, but ported to current Chromium stable.</p>\n<p>Why? 
Because we want to provide a migration path to the future Ozone Wayland backend. And because we want to share this effort with other developers willing to run Chromium on Wayland immediately, or who are still using the old backend and cannot immediately migrate to the new one.</p>\n<p><strong>WARNING</strong> <em>If you are starting development for a product that is going to ship in 1-2 years… Very likely your best option is already migrating now to the new Ozone Wayland backend (and helping with the missing bits). We will stop maintaining it ourselves once the new Ozone Wayland backend lands upstream and covers all our needs.</em></p>\n<p>What does this port include?</p>\n<ul>\n<li>Rebased on top of Chromium m60, m61, m62 and m63.</li>\n<li>Ported to GN.</li>\n<li>It already includes some changes to adapt to the new Ozone Wayland refactors.</li>\n</ul>\n<p>It is hosted at <a href=\"https://github.com/lgsvl/chromium-src\">https://github.com/lgsvl/chromium-src</a>.</p>\n<p>Enjoy it!</p>\n<blockquote>\n<p><strong>Originally published at <a href=\"http://webosose.org/blog/updated-chromium-lagacy-wayland-support/\">webOS Open Source Edition Blog</a>, and licensed under <a href=\"https://creativecommons.org/licenses/by/4.0/\">Creative Commons Attribution 4.0</a>.</strong></p>\n</blockquote>\n",
			"date_published": "2018-03-21T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2012/10/01/webkitgtk-accelerated-composition-on-wayland/",
			"url": "https://blogs.igalia.com/dape/2012/10/01/webkitgtk-accelerated-composition-on-wayland/",
			"title": "WebKitGTK+ accelerated composition on Wayland",
			"content_html": "<p>As part of my work at the <a href=\"http://www.igalia.com/webkit/\">Igalia browsers team</a>, I am working on making <a href=\"http://webkitgtk.org\">WebKitGTK+</a> and <a href=\"https://live.gnome.org/Epiphany\">Epiphany</a> work on <a href=\"http://wayland.freedesktop.org/\">Wayland</a>.</p>\n<p>Just running non-3D websites on Wayland did not involve too much work. But running the OpenGL accelerated code in WebKit was a bit more complicated. Still, I’ve got a first working version.</p>\n<figure>\n<p><a href=\"http://www.youtube.com/watch?v=Di9LxCBsYtY&amp;feature=plcp\"><img src=\"https://blogs.igalia.com/dape/2012/10/01/webkitgtk-accelerated-composition-on-wayland/images/hqdefault.jpg\" alt=\"Video: Epiphany on Wayland running WebGL and CSS-3D\"></a></p>\n<figcaption>Epiphany on Wayland running WebGL and CSS-3D</figcaption>\n</figure>\n<p>On WebKitGTK+, we enable the use of hardware acceleration with OpenGL for:</p>\n<ul>\n<li>WebGL: web pages with a canvas using WebGL are run using the available 3D hardware.</li>\n<li>Accelerated composition of layers. With stuff like CSS-3D transformations, 3D hardware acceleration is handy for compositing the layers of a webpage.</li>\n</ul>\n<p>You can read more about accelerated compositing in these posts from Martin Robinson: <a href=\"http://blog.abandonedwig.info/2011/12/webkitgtk-hackfest-wrapup-accelerated.html\">WebKitGTK+ hackfest wrapup</a>, and <a href=\"http://blog.abandonedwig.info/2012/07/accelerated-compositing-update.html\">Accelerated compositing update</a>.</p>\n<p>On X11, we use XComposite, sharing a Window among the GTK+ widget (WebKitWebView) and the GL contexts for WebGL and accelerated composition. We have a tree of layers, each one rendering to a texture. Then these textures are composited, rendering directly to the X11 window.</p>\n<p>On Wayland, things are a bit different. 
The Wayland protocol does not define a way to share a buffer among clients, nor a way to “insert” a window inside another window. My solution is just making the accelerated compositor render the layers to another texture. When the time comes for the WebKitWebView to be drawn (using Cairo), we render this texture too. If we build GTK+ to use EGL, then this process happens completely on the GPU.</p>\n<p>The next step will be adding support for accelerated composition in WebKit2GTK+. The main challenge here is that the WebKitWebView widget is in the UI process, while the WebGL contexts and layer rendering are in the Web process. So, if we want to avoid buffers going to/from the GPU, we need to share them between the two processes. DRM authentication through the <a href=\"http://www.khronos.org/registry/egl/extensions/MESA/EGL_MESA_drm_image.txt\">EGL_MESA_drm_image</a> extension could help here.</p>\n",
			"date_published": "2012-10-01T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2012/07/27/lightning-talk-web-apps-stores-in-epiphany/",
			"url": "https://blogs.igalia.com/dape/2012/07/27/lightning-talk-web-apps-stores-in-epiphany/",
			"title": "Lightning talk: web apps stores in Epiphany",
			"content_html": "<p>I’m on a train right now. Destination: GUADEC 2012!</p>\n<p>Today I’ll be presenting a lightning talk about my work at the <a href=\"http://www.igalia.com/nc/work/area/item/browsers/\">Igalia browsers team</a> on web application stores support in Epiphany. Take this talk as a fast follow-up on the work I posted about some months ago, “<a href=\"http://blogs.igalia.com/dape/2012/03/16/epiphany-meets-the-web-app-stores/\">Epiphany meets the web app stores</a>”. I will also outline the plan for the next months.</p>\n<p>See you at GUADEC!</p>\n<p><a href=\"http://2012.guadec.org\"><img src=\"https://blogs.igalia.com/dape/2012/07/27/lightning-talk-web-apps-stores-in-epiphany/images/going-to-guadec-2012-badge.png\" alt=\"\"></a></p>\n",
			"date_published": "2012-07-27T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2012/05/14/iwkmail-mixing-webkit-gtk-camel-and-jquery-mobile/",
			"url": "https://blogs.igalia.com/dape/2012/05/14/iwkmail-mixing-webkit-gtk-camel-and-jquery-mobile/",
			"title": "IwkMail, mixing WebKit Gtk+, Camel and JQuery Mobile",
			"content_html": "<p>In the last few weeks, as part of my work here at <a href=\"http://www.igalia.com\">Igalia</a>, I’ve been playing a bit with the concept of hybrid applications. In this case, I’ve created a basic prototype of a mail application, with its user interface completely written using JQuery Mobile, and with backend code in C and GObject. The result is <a href=\"https://github.com/jdapena/iwkmail\">iwkmail</a>.</p>\n<figure>\n<p><a href=\"http://www.youtube.com/watch?v=eCrBKKFN92s&amp;feature=youtu.be\"><img src=\"https://blogs.igalia.com/dape/2012/05/14/iwkmail-mixing-webkit-gtk-camel-and-jquery-mobile/images/0.jpg\" alt=\"Screencast of iwkmail in action\"></a></p>\n<figcaption>Screencast of iwkmail in action</figcaption>\n</figure>\n<p>Though it’s a simple experiment, I’ve added some basic mail functionality, so I could try to capture as many real requirements as possible for how we could improve the WebKit+GNOME developer experience of creating hybrid applications.</p>\n<p>My first conclusion is that it’s <strong>surprisingly easy and fast</strong> to develop such applications. Second, <strong>I could reuse tons of source code</strong> and modules from my old projects. This approach surely provides a way to create cool GNOME applications, using the most fashionable web client technologies.</p>\n<p>So, you’ll get:</p>\n<ul>\n<li>Browsing messages</li>\n<li>Read/unread flags</li>\n<li>Deleting messages</li>\n<li>Creating and deleting mail accounts.</li>\n<li>Storage protocols supported: IMAP and POP.</li>\n<li>For sending mails, we support SMTP. There’s support for an outbox holding the messages to be sent.</li>\n<li>A plain text composer that allows adding attachments.</li>\n</ul>\n<p>The UI is completely written in JavaScript + HTML, using <a href=\"http://jquerymobile.com/\">JQuery Mobile</a>.</p>\n<p>The backend side is done using the Camel library inside Evolution Data Server, so we rely on a library well tested for more than 10 years. 
All the code related to this is implemented in C+GObject, and I reused a good set of code from <a href=\"http://gitorious.org/modest/modest\">Modest</a>, the default mail client for the Nokia N810 and N900. I was involved in its development for 3 years, so that’s a bunch of code I know well enough.</p>\n<p>For communication, I use the AJAX-like JSONP protocol, and custom SoupRequest URI scheme handlers. Basically, I expose some methods, such as iwk:addAccount, iwk:getMessage, etc., and arguments are passed as usual in a web request. The result I obtain from these calls is a JSON object with the results of the call. Simple, and it works very well.</p>\n<p>I’ve pushed the work to GitHub: <a href=\"https://github.com/jdapena/iwkmail\">https://github.com/jdapena/iwkmail</a>. Feel free to try it!</p>\n<p>Oh, I guess it’s very obvious that I did not spend too much time thinking about the project name… So, anyone proposing something that matches the IM acronym (I don’t want to rewrite the class names!) would deserve a beer.</p>\n<p>Last, lots of thanks to <a href=\"http://www.igalia.com\">Igalia</a> for giving me the opportunity to do this experiment. As usual, fun stuff to work with.</p>\n",
			"date_published": "2012-05-14T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2012/03/16/epiphany-meets-the-web-app-stores/",
			"url": "https://blogs.igalia.com/dape/2012/03/16/epiphany-meets-the-web-app-stores/",
			"title": "Epiphany meets the web app stores",
			"content_html": "<p>In the last weeks, I’ve been taking a look at the web application standards support in Epiphany, as part of my work at <a href=\"http://www.igalia.com\">Igalia</a>. <a href=\"http://blogs.gnome.org/xan/2011/08/31/web-application-mode-in-gnome-3-2/\">Xan wrote about the Save as web application</a> feature present in Epiphany 3.2, which is a base for very simple (and useful) web application support in the Gnome desktop.</p>\n<p>To continue with this work, I’ve been investigating adding support for some web app stores. So I’ve done an experimental implementation for <a href=\"https://developer.mozilla.org/en/Apps/\">Mozilla Open Web Apps</a> (as in the 2011 tech preview), Chrome Web Store <a href=\"http://code.google.com/chrome/apps/docs/developers_guide.html\">hosted</a> and <a href=\"http://code.google.com/chrome/extensions/apps.html\">packaged</a> apps, and <a href=\"http://code.google.com/intl/en-US/chrome/apps/docs/no_crx.html\">Chrome CRX-less apps</a>.</p>\n<iframe width=\"560\" height=\"315\" src=\"http://www.youtube.com/embed/1ns6w7B-OLY\" frameborder=\"0\" allowfullscreen=\"\"></iframe>\n<p><a href=\"http://youtu.be/1ns6w7B-OLY\">Screencast using Chrome Web Store</a>.</p>\n<p>This is an <strong>experiment</strong>. It is not supported, and it may actually stay out of official Epiphany. So there are lots of things not working at all. This is, first of all, a way to have a big number of apps to play with our application mode, and improve it. So there is no permissions check, URL matching may be broken, many apps will fail to even log in… Did I say it is an experiment? The most obvious issues are related to this <a href=\"https://bugzilla.gnome.org/show_bug.cgi?id=658395\">external links handling bug</a>.</p>\n<p>But, if you just want to play with it, just try my branch <strong>webapp</strong> in <a href=\"https://github.com/jdapena/epiphany/tree/webapp\">my Epiphany Github repository</a>. 
By default, support is disabled, so you’ll have to enable these keys:</p>\n<pre><code>$ gsettings set org.gnome.Epiphany.web enable-chrome-apps true\n$ gsettings set org.gnome.Epiphany.web enable-open-web-apps true</code></pre>\n<p>You can try with <a href=\"https://apps.mozillalabs.com/appdir/\">Mozilla Labs Apps Dir from 2011 tech preview</a> and <a href=\"https://chrome.google.com/webstore/category/home\">Chrome Web Store</a>.</p>\n",
			"date_published": "2012-03-16T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2010/07/27/talk-about-modest-4-for-guadec-next-thursday-challenges-of-portability-between-hildon-and-gnome/",
			"url": "https://blogs.igalia.com/dape/2010/07/27/talk-about-modest-4-for-guadec-next-thursday-challenges-of-portability-between-hildon-and-gnome/",
			"title": "Talk about Modest 4 for Guadec next Thursday. Challenges of portability between Hildon and GNOME.",
			"content_html": "<p>Tomorrow I’m leaving for GUADEC 2010. I’m going to attend only on Thursday this time, when I’ll be giving this year’s GUADEC talk about the Modest project.</p>\n<p>This time the focus of the talk will be completely different, as I’ll be explaining the process towards Modest 4, where we’re focusing on intensive refactoring, with the goal of releasing a product-quality client on the GNOME, Moblin and Hildon/Maemo5 platforms.</p>\n<p>Also, I’ll talk about some differences between the Maemo and GNOME platforms, and some bits I miss in the GNOME platform:</p>\n<ul>\n<li>IP heartbeat (data transfers done in bursts to save energy).</li>\n<li>libosso-abook (evolution data server addressbook and telepathy integration).</li>\n<li>libalarm/alarmd (events scheduler integrated with dbus, and with support for waking up the device).</li>\n<li>… etc, etc.</li>\n</ul>\n<p>I won’t elaborate too much, but I’m trying to point out some weak points in the GNOME platform we could improve (just taking free software Maemo components, or improving GNOME platform components).</p>\n<p>The talk will be on Thursday, at 14:45 in the Seville room.</p>\n",
			"date_published": "2010-07-27T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2010/04/12/clutter-grilo-player-0-1-1/",
			"url": "https://blogs.igalia.com/dape/2010/04/12/clutter-grilo-player-0-1-1/",
			"title": "Clutter Grilo Player 0.1.1",
			"content_html": "<p>This week I’ve been working a bit more on the Clutter Grilo Player, and finally released 0.1.1:</p>\n<ul>\n<li>Fullscreen button.</li>\n<li>Keyboard shortcuts.</li>\n<li>Volume control.</li>\n<li>Now we sort search results.</li>\n<li>Translation support.</li>\n<li>Style fixes (no more ugly red buttons in the media library).</li>\n<li>Speedup in YouTube access.</li>\n</ul>\n<p><a href=\"https://blogs.igalia.com/dape/2010/04/12/clutter-grilo-player-0-1-1/images/cgp-0.1.1-shot.png\"><img src=\"https://blogs.igalia.com/dape/2010/04/12/clutter-grilo-player-0-1-1/images/cgp-0.1.1-shot-300x226.png\" alt=\"\"></a></p>\n<p>Thanks to Chris Lord for his patches to the clutter code, and Iago Toral for his help improving YouTube speed.</p>\n<p>As usual, I uploaded the packages to the <a href=\"https://launchpad.net/~jdapena/+archive/clutter-grilo-player\">CGP Launchpad PPA</a>. The code is in <a href=\"http://gitorious.org/clutter-grilo-player/\">CGP gitorious</a>.</p>\n",
			"date_published": "2010-04-12T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2010/04/05/first-release-of-clutter-grilo-player/",
			"url": "https://blogs.igalia.com/dape/2010/04/05/first-release-of-clutter-grilo-player/",
			"title": "First release of Clutter Grilo Player",
			"content_html": "<p>In the last weeks I’ve been playing with the <a href=\"http://clutter-project.org\">Clutter</a> and <a href=\"http://git.moblin.org/cgit.cgi/mx/\">Mx</a> libraries, with the idea of getting to know them more deeply while also trying to help a bit. Honestly, I believe that the best way to learn about such things is just creating something using them.</p>\n<p>So, knowing the fantastic effort done by the Grilo team at Igalia to create a framework for accessing different multimedia sources across the internet, I came up with the idea of creating a very simple media player that uses MX widgets, Gstreamer, and the <a href=\"http://live.gnome.org/Grilo\">Grilo framework</a>.</p>\n<p>And here it is, the first release of <a href=\"http://gitorious.org/clutter-grilo-player\">Clutter Grilo Player</a>. It’s still dirty, but the general idea of the interface is there. It supports browsing and searching some Grilo providers, including <a href=\"http://www.youtube.com\">Youtube</a> and others.</p>\n<p><a href=\"https://blogs.igalia.com/dape/2010/04/05/first-release-of-clutter-grilo-player/images/Pantallazo-Clutter-Grilo-Player.png\"><img src=\"https://blogs.igalia.com/dape/2010/04/05/first-release-of-clutter-grilo-player/images/Pantallazo-Clutter-Grilo-Player-300x226.png\" alt=\"\"></a></p>\n<p>Some links:</p>\n<ul>\n<li>Source code is in gitorious: <a href=\"http://gitorious.org/clutter-grilo-player/\">http://gitorious.org/clutter-grilo-player</a></li>\n<li>I created an Ubuntu PPA with Lucid packages (including MX 0.99.2, clutter-gst 1.0.0 and clutter-gestures), so anyone can play with it a bit. The PPA is <a href=\"https://launchpad.net/~jdapena/+archive/clutter-grilo-player\">ppa:jdapena/clutter-grilo-player</a></li>\n</ul>\n",
			"date_published": "2010-04-05T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2010/03/15/tinymail-moved-to-gitorious/",
			"url": "https://blogs.igalia.com/dape/2010/03/15/tinymail-moved-to-gitorious/",
			"title": "Tinymail moved to gitorious.",
			"content_html": "<p>After some migration work, we now have the Tinymail repository completely migrated to <a href=\"http://gitorious.org\">gitorious.org</a>:</p>\n<p><a href=\"http://gitorious.org/tinymail\">http://gitorious.org/tinymail</a></p>\n<p>I’ve rescued all the branches available in our svn and tried to keep the proper authorship attributions.</p>\n<p>So, from now on, the development should happen in gitorious, and, if you want to stay updated with the latest changes, this is the source to get the information.</p>\n<p>I’ve also updated the tinymail wiki as much as possible with proper references to gitorious.</p>\n<p>I know I’ve just announced moving modest to gitorious, but modest was already in git. This time the change in tinymail is bigger, as we’re also moving from svn to git! Big change, bigger benefits.</p>\n",
			"date_published": "2010-03-15T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2010/02/26/modest-mail-now-in-gitorious-org/",
			"url": "https://blogs.igalia.com/dape/2010/02/26/modest-mail-now-in-gitorious-org/",
			"title": "Modest mail, now in gitorious.org",
			"content_html": "<p>This week we’ve finally moved Modest to gitorious:</p>\n<p><a href=\"http://gitorious.org\">http://gitorious.org/modest/</a></p>\n<p>The repository itself is called modest: <a href=\"http://gitorious.org/modest/modest\">http://gitorious.org/modest/modest/</a> <a href=\"https://gitorious.org/dape/modest/modest.git\">git://gitorious.org/modest/modest.git</a></p>\n<p>The reasons are basically that gitorious is faster and better at providing git services. So I hope the change is for the better.</p>\n<p>All the other services will still be in garage: mailing lists, wiki, and project web.</p>\n<h3 id=\"implementation-guide\" tabindex=\"-1\">Implementation guide <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2010/02/26/modest-mail-now-in-gitorious-org/\">#</a></h3>\n<p>In the last weeks we’ve also been writing some information in the wiki about how Modest has been implemented. You can find it in the <a href=\"https://garage.maemo.org/plugins/wiki/index.php?ModestArchitecture&amp;id=9&amp;type=g\">Modest architecture documentation</a>. There you’ll find:</p>\n<ul>\n<li>A description of the classes in the Modest implementation, and how they work.</li>\n<li>Sequences of events that implement some complex use cases.</li>\n</ul>\n",
			"date_published": "2010-02-26T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2010/01/24/launchpad-ppa-for-modest-3-90-series-gnome-and-amp-moblin-port/",
			"url": "https://blogs.igalia.com/dape/2010/01/24/launchpad-ppa-for-modest-3-90-series-gnome-and-amp-moblin-port/",
			"title": "Launchpad PPA for Modest 3.90 series (Gnome&amp;amp;Moblin port)",
			"content_html": "<h3 id=\"gnome-moblin-modest\" tabindex=\"-1\">Gnome/Moblin Modest <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2010/01/24/launchpad-ppa-for-modest-3-90-series-gnome-and-amp-moblin-port/\">#</a></h3>\n<p>As I said in a previous post, we’re actively developing a port of Modest for Gnome and Moblin, trying to keep the user experience we created in the Maemo Fremantle releases.</p>\n<p>While the goal is having something working for Moblin, Modest’s way of handling mail (kept simple, and fast to browse) is something you may definitely want to try on your desktop.</p>\n<h3 id=\"launchpad-ppa-for-modest\" tabindex=\"-1\">Launchpad PPA for Modest <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2010/01/24/launchpad-ppa-for-modest-3-90-series-gnome-and-amp-moblin-port/\">#</a></h3>\n<p>So, in the last weeks we’ve started to create packages of Modest for Ubuntu Karmic using the unstable development for Gnome&amp;Moblin. For this we’re using a Launchpad PPA. You can get packages for Karmic here:</p>\n<p><a href=\"https://launchpad.net/~jdapena/+archive/modest/\">https://launchpad.net/~jdapena/+archive/modest/</a></p>\n<p>Of course, we would be glad to prepare releases for other distributions. Just ask.</p>\n<h3 id=\"modest-3-90-x-vs-3-2-x\" tabindex=\"-1\">Modest 3.90.x vs 3.2.x <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2010/01/24/launchpad-ppa-for-modest-3-90-series-gnome-and-amp-moblin-port/\">#</a></h3>\n<p>If you check the <a href=\"http://git.maemo.org/git/modest/?p=modest;a=summary\">Modest git repository</a> you’ll see that we’re actively developing in two branches: master and modest-3-2:</p>\n<ul>\n<li>The modest-3-2 branch is targeted at Fremantle releases, and is also the stable release path. If you want to install new releases of Modest on your N900 this is the branch you should use. The releases are happening often, and are numbered in the 3.2.x series.</li>\n<li>The master branch is the unstable work. 
The main change happening here is the creation of a Gnome&amp;Moblin version of Modest, without hildon/maemo dependencies. This is the branch used for the Launchpad PPA releases.</li>\n</ul>\n<h3 id=\"so-modest-release-3-90-4\" tabindex=\"-1\">So, Modest release 3.90.4 <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2010/01/24/launchpad-ppa-for-modest-3-90-series-gnome-and-amp-moblin-port/\">#</a></h3>\n<p>Today I’ve prepared release 3.90.4 of Modest. The main new feature is that it includes support for handling calendar invitation requests in plugins. So a protocol plugin would be able to handle the calendar invitations and add accept/tentative/decline buttons. It also includes some bugfixes.</p>\n<p>In the next hours this should be available in the Modest PPA.</p>\n",
			"date_published": "2010-01-24T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2009/12/11/whats-going-on-in-modest-new-gnomemoblin-port/",
			"url": "https://blogs.igalia.com/dape/2009/12/11/whats-going-on-in-modest-new-gnomemoblin-port/",
			"title": "What&#39;s going on in Modest: new Gnome / Moblin port.",
			"content_html": "<p>As many of you know, Modest is the mail client of the Nokia n810 and n900 devices. As such, there is a huge effort on it to provide a really good mail experience on those devices. But in the last years, the effort was completely concentrated on the Maemo platform.</p>\n<p>In the last months, Sergio Villar and I have been working on bringing the Modest user experience to both Gnome and Moblin, using our community projects time here at Igalia. The work was based on a very interesting effort from Javier Jardon this summer.</p>\n<p>The main goal was trying to make the behavior of Modest in Gnome as similar as possible to its counterpart in Fremantle/Maemo5. It’s still unstable, a work in progress, but it’s already showing what it will look like:</p>\n<p><img src=\"https://blogs.igalia.com/dape/2009/12/11/whats-going-on-in-modest-new-gnomemoblin-port/images/Modest4GnomeFoldersView.png\" alt=\"List of folders in Modest Gnome\"></p>\n<p>You’ll see a really big difference from other mail clients available on the desktop. It’s really oriented to keeping things simple and straightforward, so Modest is not only light, but its user experience is kept light too, following a similar style to the one used in Fremantle Modest.</p>\n<p>Most use cases are already functional. We’ll try to do a new release next week, and, if possible, also offer some packages for easy testing. Stay tuned.</p>\n",
			"date_published": "2009-12-11T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2009/07/09/modest-talk-at-gran-canaria-desktop-summit-slides-and-screencast/",
			"url": "https://blogs.igalia.com/dape/2009/07/09/modest-talk-at-gran-canaria-desktop-summit-slides-and-screencast/",
			"title": "Modest talk at Gran Canaria Desktop Summit. Slides and screencast",
			"content_html": "<p>On Monday we (Sergio and I) gave the talk about <a href=\"http://modest.garage.maemo.org\">Modest</a> for Fremantle at the Gran Canaria Desktop Summit. It was good to see some people interested in our work.</p>\n<p>Modest is getting into really good shape. It’s really cool what we were able to do with the Hildon 2.2/Gtk toolkit. We moved to a really great UI in a few weeks! Also, lots of bugfixes, so Modest is far more reliable now, and easier to use than ever.</p>\n<p>The slides: <a href=\"https://blogs.igalia.com/dape/2009/07/09/modest-talk-at-gran-canaria-desktop-summit-slides-and-screencast/images/modest-guadec2009.pdf\">Modest talk at Guadec/Desktop Summit 2009</a></p>\n<p>And the <a href=\"https://blogs.igalia.com/dape/2009/07/09/modest-talk-at-gran-canaria-desktop-summit-slides-and-screencast/images/modest-fremantle-guadec.ogv\" title=\"screencast of Modest running in Maemo5/Fremantle SDK\">screencast of Modest in Maemo5/Fremantle SDK</a></p>\n",
			"date_published": "2009-07-09T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2009/04/29/new-modest-plugin-system-anyone-willing-to-implement-rss-support/",
			"url": "https://blogs.igalia.com/dape/2009/04/29/new-modest-plugin-system-anyone-willing-to-implement-rss-support/",
			"title": "New Modest plugin system. Anyone willing to implement RSS support?",
			"content_html": "<p>As <a href=\"http://maemo.org/news/announcements/maemo_5_beta_sdk_out/\">Quim announced yesterday,</a> the Beta of the Maemo5 SDK has been released.</p>\n<p>It includes our beloved Modest, which returns to open-source development. It’s been migrated to Git, and you can track the development of the project from now on, and also browse its history. See more about all of this on the <a href=\"http://garage.maemo.org/projects/modest\">Modest Garage project webpage</a>.</p>\n<p>One very interesting new feature you’ll get with Modest is the new plugin system. The protocols code has been refactored to allow extending Modest to support new protocols, in addition to the ones supported in standard Modest (IMAP, SMTP and POP3). And this has been exposed through a plugin API.</p>\n<p>With this you can add plugins that bring support for RSS, NNTP, etc. to Modest. Just imagine the kind of mail server you would like to see in Modest!</p>\n<p>We’ve added to the newly created <a href=\"https://garage.maemo.org/plugins/wiki/index.php?id=9&amp;type=g\">Modest development wiki</a> some information about <a href=\"https://garage.maemo.org/plugins/wiki/index.php?PluginAPI&amp;id=9&amp;type=g\">Modest Plugin development</a>.</p>\n<p>One final note. This plugin API does not yet guarantee API/ABI stability, but that is our goal. The bad news is that you’ll need to keep your development in sync with Modest (at least for now). The good news is that you can still contribute or request changes to the plugin API, and that won’t be painful.</p>\n",
			"date_published": "2009-04-29T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2008/07/24/gtk-3-0-and-beyond-team-requirements/",
			"url": "https://blogs.igalia.com/dape/2008/07/24/gtk-3-0-and-beyond-team-requirements/",
			"title": "Gtk 3.0 and beyond. Team requirements",
			"content_html": "<p>The 3.0 approach of “no new functionality”, only wiping out weird stuff, is good. But I have some concerns about the timing of the plan. If 3.0 is simply wiping old stuff out, then why should we wait until next spring to finish it? Or, once we have it stabilised, why one more year of development to get new features? The total gap of 2 years is reeeeeeally long. Can we go faster? I see <strong>the community has the will</strong> to make Gtk better, soon, but the problem seems to be that the community doesn’t have the resources for this. So, in parallel with the implementation plan for Gtk 3.0, we should think about organization plans for 3.x or for 2010. Do we want to make Gtk grow faster, better? <strong>Is the current Gtk+ core team big enough</strong> for what we need?</p>\n<p>Currently the list of core developers in Gtk+, as you can see on the web page, has 10 members. A goal would be something like this: let’s have 20 developers that deserve to be in that list by 2010.</p>\n<p>But getting people trained and productive enough to deserve core responsibility is hard and slow. Do we want Gtk+ to grow healthier, faster, safer, with more quality? Don’t we feel that the current core team members are heroes who can miraculously maintain and grow Gtk+ because they are really good at their work? Can we help them? Any effort on growing the core team will take a while, so we should take this seriously soon if we want results on a reasonable schedule.</p>\n",
			"date_published": "2008-07-24T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/12/17/modest-time-for-feedback/",
			"url": "https://blogs.igalia.com/dape/2007/12/17/modest-time-for-feedback/",
			"title": "Modest! Time for feedback",
			"content_html": "<p>It’s been a lot of work. But <a href=\"http://www.maemoapps.com/2007/12/11/modest-beta-arrives/\">Modest’s first beta is finally here</a>. For Chinook/OS2008 users, it’s easily available:</p>\n<p><a href=\"http://modest.garage.maemo.org/repos/modest-chinook.install\" title=\"Click here to install modest in chinook\"><img src=\"https://blogs.igalia.com/dape/2007/12/17/modest-time-for-feedback/images/install_button_small.png\" alt=\"Direct install icon\"></a></p>\n<p>You can read more details on the release in the blogs of other Modest developers: <a href=\"http://djcbflux.blogspot.com/2007/12/take-chance-on-me.html\">Dirk-Jan</a>, <a href=\"http://pvanhoof.be/blog/index.php/2007/11/23/sometimes-i-dont-need-a-lot-of-words\">Philip</a>, and my workmates at Igalia, <a href=\"http://blogs.igalia.com/svillar/2007/12/12/be-modest-my-friend/\">Sergio</a> and <a href=\"http://blogs.igalia.com/berto/2007/12/14/its-been-a-hard-days-night/\">Berto</a>.</p>\n<p>But I want to highlight here that Modest <strong>is</strong> Beta. Not in the sense of “huh, don’t blame us if it fails”, but the opposite. Just <strong>blame us</strong>, <strong>cry</strong>, <strong>shout</strong>. We want to make sure Modest gets better and better, and we just need you to write <a href=\"https://bugs.maemo.org/page.cgi?id=bug-writing.html\">good bug reports</a>, so that we can work on them.</p>\n<p><a href=\"http://commons.wikimedia.org/wiki/Image:Bug_aggregation.jpg\"><img src=\"https://blogs.igalia.com/dape/2007/12/17/modest-time-for-feedback/images/bug_aggregation.jpg\" alt=\"Bug aggregation\"></a></p>\n<p>Bug reports should go to <a href=\"https://bugs.maemo.org/\">Maemo Bugzilla</a>. Section <em>Communications/Modest</em>. We want to hear about your problems and enhancement requests.</p>\n<p>See you in bugzilla!</p>\n",
			"date_published": "2007-12-17T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/09/25/some-ideas-of-improvements-for-file-chooser/",
			"url": "https://blogs.igalia.com/dape/2007/09/25/some-ideas-of-improvements-for-file-chooser/",
			"title": "Some ideas of improvements for file chooser",
			"content_html": "<p>I’m not saying the file chooser is bad at selecting files, but there are two features I would definitely love in it:</p>\n<ul>\n<li>Being able to get the size of the file.</li>\n<li>Thumbnailing support.</li>\n</ul>\n<h3 id=\"size-of-the-file\" tabindex=\"-1\">Size of the file <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/09/25/some-ideas-of-improvements-for-file-chooser/\">#</a></h3>\n<p>The use case where I need this is “adding attachments in Evolution”. If I want to send a mail, I usually want to avoid sending very big files, as this is not very good for most of the mail accounts people use.</p>\n<p>If I want to do this with Evolution/Gnome, I have to open the folder with Nautilus to check the size of the files.</p>\n<p>In fact, there is other interesting information you’ll want to check when you are opening a file with the file chooser. For example, the ID3 tags of an mp3/ogg file.</p>\n<p>I suppose the UI design experts can suggest good ideas about this. For me it would be enough if the context menu of a file offered a “Properties” action.</p>\n<h3 id=\"thumbnailing\" tabindex=\"-1\">Thumbnailing <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/09/25/some-ideas-of-improvements-for-file-chooser/\">#</a></h3>\n<p>We don’t need thumbnailing only for opening a file from gimp or attaching an image. The thumbnail of an image describes the contents of a file better than the file name does, so we should let the user rely on it to know which file they’re managing, even when the application is not an imaging-related one. So I suppose the file chooser should enable thumbnails by default.</p>\n<h3 id=\"and-a-last-very-easy-to-implement-idea\" tabindex=\"-1\">And a last very easy to implement idea <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/09/25/some-ideas-of-improvements-for-file-chooser/\">#</a></h3>\n<p>Just provide a way to open in nautilus a folder you’re viewing in the file chooser. 
Not sure how, as an “open folder” button could lead to confusion. Maybe “browse here”? This way you could easily access all the operations available in nautilus for files and folders, without having to browse to a folder twice (once in the file chooser and once in nautilus).</p>\n",
			"date_published": "2007-09-25T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/09/19/purging-attachments-with-tinymail-or-being-smart-with-small-storages/",
			"url": "https://blogs.igalia.com/dape/2007/09/19/purging-attachments-with-tinymail-or-being-smart-with-small-storages/",
			"title": "Purging attachments with Tinymail (or being smart with small storages)",
			"content_html": "<h3 id=\"scenery\" tabindex=\"-1\">Scenario <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/09/19/purging-attachments-with-tinymail-or-being-smart-with-small-storages/\">#</a></h3>\n<p>Imagine you have a pretty mobile or portable device with email access. You love using it for reading mail just because you don’t have to use your laptop or pc, and your fantastic device can go in your pocket or handbag.</p>\n<p>This device is fantastic, yes, but it has a limited storage capacity. And you usually get a lot of mail, some of it really big (with images, audio files or even videos). It’s not too difficult to exhaust the storage in a few days or even hours.</p>\n<h3 id=\"tinymail-purge-api\" tabindex=\"-1\">Tinymail purge API <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/09/19/purging-attachments-with-tinymail-or-being-smart-with-small-storages/\">#</a></h3>\n<p>I added an API to Tinymail that lets you remove attachments stored locally (removing them from the cache if the message is from a remote folder, or removing them permanently if the folder is stored locally). 
The API methods are these:</p>\n<ul>\n<li><code>tny_mime_part_set_purged (TnyMimePart *part)</code>: this method marks a mime part to be purged.</li>\n<li><code>tny_msg_rewrite_cache (TnyMsg *msg)</code>: this method rewrites the message to local storage (cache or not), removing the attachments marked as purged.</li>\n</ul>\n<p>Then, if you want to remove some attachments from a message, you only have to mark them as purged using the <code>set_purged</code> method, and then make the change persistent with the <code>rewrite_cache</code> method.</p>\n<p>Currently this is only available for IMAP and local MAILDIR folders, but it shouldn’t be too difficult to add support for this to other providers.</p>\n<h3 id=\"implementation\" tabindex=\"-1\">Implementation <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/09/19/purging-attachments-with-tinymail-or-being-smart-with-small-storages/\">#</a></h3>\n<p>The internals of the purge method are really simple. The only thing the <code>set_purged</code> method does is set the <code>Disposition</code> field of the mime part to the value <code>purged</code>. In the camel backend:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token function\">camel_mime_part_set_disposition</span> <span class=\"token punctuation\">(</span>priv<span class=\"token operator\">-></span>part<span class=\"token punctuation\">,</span> <span class=\"token string\">\"purged\"</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">;</span></code></pre>\n<p>This mark is mostly harmless, and the implementation is the same for any mime part, regardless of the provider (IMAP, maildir or whatever).</p>\n<p>Then <code>rewrite_cache</code> simply rewrites the whole message to its local representation, ignoring the contents of the mime parts marked as purged. 
What we get this way is a reduction in local storage use (we don’t expunge the header of the mime part, only the contents). We also keep more or less the same structure of the message, as the original mime part headers are still stored, even when their contents are empty.</p>\n<h3 id=\"conclusion\" tabindex=\"-1\">Conclusion <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/09/19/purging-attachments-with-tinymail-or-being-smart-with-small-storages/\">#</a></h3>\n<p>The Purge API gives you more freedom to choose what you want to keep in your storage. You can wipe out not only full messages, but also specific attachments you don’t want to keep.</p>\n",
			"date_published": "2007-09-19T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/03/01/gnome-buildbot-released-at-fosdem/",
			"url": "https://blogs.igalia.com/dape/2007/03/01/gnome-buildbot-released-at-fosdem/",
			"title": "Gnome buildbot released at FOSDEM!",
			"content_html": "<p>Yes, as expected, <a href=\"http://blogs.igalia.com/itoral/\">Iago Toral</a> published the work on Gnome Buildbot at FOSDEM. You can <a href=\"http://blogs.igalia.com/itoral/?p=38\">read his announcement</a> on his blog.</p>\n<p>Briefly, if you maintain a project in Gnome and want automatic tests running, then <a href=\"https://buildbot-gnome.igalia.com/\">Gnome Buildbot</a> may interest you. And, if you have a project built using jhbuild, it’s easy to create a jhbuild-based buildbot like the one for Gnome. Just ask in #build-brigade.</p>\n",
			"date_published": "2007-03-01T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/02/23/tomorrow-talk-about-build-brigade-in-fosdem/",
			"url": "https://blogs.igalia.com/dape/2007/02/23/tomorrow-talk-about-build-brigade-in-fosdem/",
			"title": "Tomorrow, talk about Build Brigade in FOSDEM",
			"content_html": "<p>This Saturday, <a href=\"http://blogs.igalia.com/dape\">Iago Toral</a> will be presenting <a href=\"http://live.gnome.org/BuildBrigade\">Build Brigade</a> at <a href=\"http://www.fosdem.org\">FOSDEM</a>. He’ll present some work that we’ve been doing over the last few months, including the Gnome Buildbot. More information on the FOSDEM web site and in his blog:</p>\n<ul>\n<li><a href=\"http://fosdem.org/2007/schedule/devroom/gnome\">Talk announcement in FOSDEM web</a></li>\n<li><a href=\"http://blogs.igalia.com/itoral/?p=37\">Announcement in his blog</a></li>\n</ul>\n<p>I talked previously about Gnome Buildbot. Briefly, it’s a continuous integration service based on <a href=\"http://buildbot.net\">Buildbot</a> that uses <a href=\"http://www.jamesh.id.au/software/jhbuild/\">jhbuild</a> to compile the full Gnome moduleset. It runs tests and returns coverage reports for many modules.</p>\n",
			"date_published": "2007-02-23T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/02/17/wikipatterns/",
			"url": "https://blogs.igalia.com/dape/2007/02/17/wikipatterns/",
			"title": "WikiPatterns",
			"content_html": "<p>Via the <a href=\"http://agiletesting.blogspot.com/2007/02/wikipatterns.html\">Agile testing blog</a> I learned about <a href=\"http://wikipatterns.com\">Wikipatterns</a>, a repository of patterns and antipatterns about wiki adoption and usage. Very interesting work.</p>\n<p>I would highlight some interesting entries:</p>\n<ul>\n<li><a href=\"http://www.wikipatterns.com/display/wikipatterns/Overrun+by+Camels\">Overrun by Camels antipattern</a>: avoid using CamelCaseNames, and use real language phrases (for example, name your node “Node with contents” instead of “NodeWithContents”). This makes the wiki easier to read and, mainly, easier to search.</li>\n<li><a href=\"http://www.wikipatterns.com/display/wikipatterns/Poker\">Poker pattern</a>: put even trivial things in the wiki, to show the zero barrier to creating new content. Trivial things like this: <em>the scores of the frequent, after-hours poker tournaments</em>.</li>\n<li><a href=\"http://www.wikipatterns.com/display/wikipatterns/Magnet\">Magnet pattern</a>: have some content exclusively in the wiki to force people to use it.</li>\n</ul>\n<p>In <a href=\"http://www.igalia.com\">Igalia</a> we’ve been using wikis+mailing lists for internal communication for a long time. In <a href=\"http://www.gnome.org\">Gnome</a> we’ve got the <a href=\"http://live.gnome.org\">Live wiki</a>. And many other communities have found wikis a powerful resource for communication and documentation. Good reading for all these groups.</p>\n",
			"date_published": "2007-02-17T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/02/06/integration-of-unit-tests-in-a-gnome-buildbot/",
			"url": "https://blogs.igalia.com/dape/2007/02/06/integration-of-unit-tests-in-a-gnome-buildbot/",
			"title": "Integration of unit tests in a Gnome Buildbot",
			"content_html": "<p>Today I’ve read about the great work of <a href=\"http://blogs.igalia.com/itoral\">Iago Toral</a> (a workmate at <a href=\"http://www.igalia.com/innovation/\">Igalia Innovation</a>) with Gnome Buildbot. He’s been working to integrate the Gtk+ unit tests into the build loop.</p>\n<p>Using <a href=\"http://check.sf.net\">check</a>, he’s generated wonderful HTML reports and a nice integration with the standard buildbot compilation view:</p>\n<p><img src=\"https://blogs.igalia.com/dape/2007/02/06/integration-of-unit-tests-in-a-gnome-buildbot/images/testsdetail.png\" alt=\"Gtk+ test details in Gnome Buildbot\"></p>\n<p>You can read more information about <a href=\"http://blogs.igalia.com/itoral/?p=29\">Iago’s work in his blog entry “Gnome buildbot and integration of unit tests”</a>. Hopefully we’ll get a public deployment of Gnome Buildbot soon.</p>\n",
			"date_published": "2007-02-06T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/01/29/snes-emulators-in-touchscreens-loving-igalia-and-lovely-events/",
			"url": "https://blogs.igalia.com/dape/2007/01/29/snes-emulators-in-touchscreens-loving-igalia-and-lovely-events/",
			"title": "Snes emulators in touchscreens, loving Igalia and lovely events",
			"content_html": "<h3 id=\"igalia-assembly-or-why-i-love-working-here\" tabindex=\"-1\">Igalia Assembly… Or why I love working here… <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/01/29/snes-emulators-in-touchscreens-loving-igalia-and-lovely-events/\">#</a></h3>\n<p>As many of you know, I am a worker and one of the initial partners of <a href=\"http://www.igalia.com\">Igalia</a>. This company is organized following an assembly-based model. Every partner has the same rights and decision-making power. And we want all workers to become partners in the medium term.</p>\n<p>The main representation of this <a href=\"http://www.igalia.com/igalia/howweare/cooperativemodel/\">assembly-based model</a> is the Assembly itself. It’s a periodic meeting (one every two months) with all the partners discussing and making the strategic choices of the company. We’ve held 43 assemblies in the history of Igalia, the first one in September 2001.</p>\n<p>No bosses at all, just all of us being partners, with the same responsibility. This is fair, and one of the best things about being a member of the Igalia team. And this makes everybody contribute their best efforts and ideas. That’s one of the main reasons I <strong>love</strong> working here. And another main reason is what extraordinary, good people my partners are.</p>\n<h3 id=\"guademy-cfh-call-for-hacking\" tabindex=\"-1\">Guademy CFH (Call for Hacking!) <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/01/29/snes-emulators-in-touchscreens-loving-igalia-and-lovely-events/\">#</a></h3>\n<p>This is a very interesting event that will happen this year. It stands for <em>Guadec + aKademy = <strong>Gua</strong>demy <strong>de</strong>stroy <strong>my</strong>ths</em>. As the event web states, <a href=\"http://www.guademy.org\">Guademy</a> is a meeting where GNOME and KDE developers share working sessions with other people interested in collaborating with both projects. 
It will be in March in A Coruña (Galicia - Spain), and is being organized by <a href=\"http://www.gpul.org\">GPUL</a>. Good idea and best wishes!</p>\n<h3 id=\"snes-emulators-and-touch-screens\" tabindex=\"-1\">Snes emulators and touch screens <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2007/01/29/snes-emulators-in-touchscreens-loving-igalia-and-lovely-events/\">#</a></h3>\n<p>Last week I was playing a bit with some emulators, trying to get one of them running on Maemo. The one I’ve been testing most deeply is <a href=\"http://www.snes9x.com\">Snes9x</a>. It compiles without too many problems, and it should be easy to give it a comfortable UI with some hildon work.</p>\n<p>But there’s a problem. The Nokia N800 and 770 haven’t got many hardware buttons, and a good Super NES emulator needs 6 fire buttons plus start and select buttons. My idea would be to add support for showing buttons on the screen, replacing some hardware buttons, in order to use the touch screen to provide 4 fire buttons. Unfortunately, I suppose the main drawback of this idea is that AFAIK the touchscreen of these devices can’t handle more than one “touch” simultaneously :(. Any ideas about this?</p>\n<p>The other problem will probably be hardware performance, but I haven’t tested the emulator on the device yet.</p>\n",
			"date_published": "2007-01-29T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2007/01/16/new-year-new-life-and-new-city/",
			"url": "https://blogs.igalia.com/dape/2007/01/16/new-year-new-life-and-new-city/",
			"title": "New year, new life (and new city)",
			"content_html": "<p>It’s been a long time since my last post. A lot has happened, so I won’t bore you with all of it. Anyway, some interesting remarks:</p>\n<ul>\n<li>Here at <a href=\"http://www.igalia.com\">Igalia</a> we’ve been doing lots of interesting things over the last few months. A very interesting project is our <a href=\"http://www.igalia.com/news/news_details/?tx_ttnews%5Btt_news%5D=16&amp;tx_ttnews%5BbackPid%5D=3&amp;cHash=147ebaf1f8\">Gnome Handhelds project</a>, which has been awarded public funding from the Xunta de Galicia.</li>\n<li><a href=\"http://www.nokia.com\">Nokia</a> has released a new device based on the <a href=\"http://www.maemo.org\">Maemo platform</a>: the <a href=\"http://www.nseries.com/products/n800/\">n800</a>. FIC is soon releasing the Neo1973, with the <a href=\"http://www.openmono.com\">OpenMoko</a>. Very promising devices. I don’t know much yet about the OpenMoko platform, but it’s based on Gtk. About the Maemo platform, <a href=\"https://blogs.igalia.com/dape/2006/11/10/playing-with-maemo/\">I’ve been playing with the bleeding edge distribution</a> and it’s really interesting. The kind of toys I would love to receive as a present. Congratulations to Nokia and FIC!</li>\n<li><a href=\"http://www.igalia.com/igalia/workwithus/behired/\">We’re hiring</a>! We’re looking for smart hackers, willing to work on free software projects. Galicia is a lovely place to live, and much better when you work on what you love.</li>\n</ul>\n<p>Last month I moved to a flat in <a href=\"http://en.wikipedia.org/wiki/Vigo\">Vigo</a> with my girlfriend. Lovely city, pretty flat and very nice to be living with her! We’ve been very busy with the move and preparing our wedding. It seems the coming months won’t be any more relaxed…</p>\n",
			"date_published": "2007-01-16T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/11/17/maemo-bugzilla-and-first-build-brigade-irc-meeting/",
			"url": "https://blogs.igalia.com/dape/2006/11/17/maemo-bugzilla-and-first-build-brigade-irc-meeting/",
			"title": "Maemo bugzilla, and first build brigade IRC meeting",
			"content_html": "<h3 id=\"want-a-better-maemo-bugzilla\" tabindex=\"-1\">Want a better Maemo bugzilla! <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/11/17/maemo-bugzilla-and-first-build-brigade-irc-meeting/\">#</a></h3>\n<p>You don’t realize how wonderful <a href=\"http://bugzilla.gnome.org\">Gnome Bugzilla</a> is until you find another one that needs a bit of love.</p>\n<p>As I said in my last post, I’m hacking a bit on the <a href=\"http://maemo.org\">Maemo platform</a>, and in particular on the <a href=\"http://sardine.garage.maemo.org/\">Sardine Distro</a>. I found some issues I wanted to report, and implemented some patches for them. So, I decided to go to <a href=\"http://bugs.maemo.org\">Maemo Bugzilla</a>. But the experience was far from optimal. I’ve sent two mails to the maemo-developers mailing list (<a href=\"http://maemo.org/pipermail/maemo-developers/2006-November/006229.html\">problems with categorization</a>, <a href=\"http://maemo.org/pipermail/maemo-developers/2006-November/006238.html\">suggesting a bug template</a>).</p>\n<h2 id=\"build-brigade-irc-meeting\" tabindex=\"-1\">Build brigade IRC meeting <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/11/17/maemo-bugzilla-and-first-build-brigade-irc-meeting/\">#</a></h2>\n<p>We need to decide what to do in <a href=\"http://live.gnome.org/BuildBrigade\">Build brigade</a> next month. Iago Toral <a href=\"http://mail.gnome.org/archives/build-brigade-list/2006-November/msg00000.html\">has proposed a meeting</a> for next Monday at 16:00 UTC, in #build-brigade IRC. Everyone interested in Gnome testing or continuous integration is invited. I hope we’ll get at least one meeting per month, but let’s talk about that this Monday. You can read more about the meeting in <a href=\"http://blogs.igalia.com/itoral/?p=28\">Iago’s blog</a> and the <a href=\"http://mail.gnome.org/archives/build-brigade-list/2006-November/msg00000.html\">meeting proposal on the mailing list</a>. 
See you there!</p>\n",
			"date_published": "2006-11-17T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/11/10/playing-with-maemo/",
			"url": "https://blogs.igalia.com/dape/2006/11/10/playing-with-maemo/",
			"title": "Playing with Maemo",
			"content_html": "<p>It’s been a long time without writing, so let’s pick it up again. In recent weeks I’ve been playing with <a href=\"http://maemo.org\">Maemo</a> and the <a href=\"http://www.nokia.com/770\">Nokia 770</a>. Very interesting platform, very familiar for those (like me) with some background in the Gnome SDK.</p>\n<p>The device is fantastic. Definitely. Small, light, and with a good screen. But above all, it is as powerful as free software can make it. As my main interest was Maemo as a free-software-based device, I’ve been taking a look at the development platform. Nokia has done good work, not only providing full access to Gtk/Gnome-based technology for developing applications, but also offering their platform contributions to the community.</p>\n<p>Interesting points:</p>\n<ul>\n<li>Very similar to Gnome development. For someone with a Gnome SDK background, starting to develop Maemo applications is a very small step.</li>\n<li>Scratchbox. The project is interesting, and it makes Debian-based software development very easy. I can try very dirty pieces without breaking my PC environment.</li>\n<li><a href=\"http://sardine.garage.maemo.org/\">Sardine distribution</a>, to let you work on the bleeding edge platform.</li>\n<li>Good tutorials for getting started (see the <a href=\"http://maemo.org/platform/docs/tutorials/Maemo_tutorial.html\">Maemo 2.0 tutorial</a> and <a href=\"http://scratchbox.org/download/files/sbox-releases/0.9.8/doc/installdoc.html\">Scratchbox tutorial</a> to start developing).</li>\n</ul>\n<p>Drawbacks:</p>\n<ul>\n<li>Many pieces of the software inside the real device are not available as free software (yet?). The main example is the web browser (Opera), but there are more.</li>\n<li>A bit slow. I suppose time will make Maemo faster and lighter (it’s still a very young platform, so there’s a long way to go). 
We can expect to see heavy improvements in the coming months and years, and some of this work returning to Gnome to make it better too.</li>\n<li>It’s _VERY_ difficult to find an RS-MMC (recommended for the <a href=\"http://maemo.org/maemowiki/HowTo_GetStartedWithSardine\">dual boot procedure</a>). I’ve visited lots of shops trying to get one, and I’m still waiting for my order.</li>\n</ul>\n<p>Of course, I’ve sent my first bugs and patches, mainly related to file selection (<a href=\"https://maemo.org/bugzilla/show_bug.cgi?id=841\">filters bug</a>, <a href=\"https://maemo.org/bugzilla/show_bug.cgi?id=842\">proposal for filters UI</a>). I’ll comment on more interesting things I find in the coming days.</p>\n",
			"date_published": "2006-11-10T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/10/06/developer-testing-is-good-for-you/",
			"url": "https://blogs.igalia.com/dape/2006/10/06/developer-testing-is-good-for-you/",
			"title": "Developer! Testing is good for you.",
			"content_html": "<p>Once <a href=\"http://live.gnome.org/BuildBrigade\">Build brigade</a> puts the Buildbots for projects into production, with testing support, we’ll have an automated way to run the unit tests of all projects under the Gnome umbrella. And <strong>you, developer</strong>, will want to know when your tests fail so you can fix them asap.</p>\n<p>But, to do this, you should implement good tests. How do you write these tests? A good unit testing framework is <a href=\"http://check.sf.net\">Check</a>.</p>\n<p>If you need an example: currently, <a href=\"http://blogs.igalia.com/itoral\">Iago Toral</a> is developing unit tests for gtk+ with it. You can <a href=\"http://bonsai.fisterra.org/cgi-bin/bonsai/rview.cgi?dir=gtk%2B/ut&amp;cvsroot=/var/publiccvs&amp;module=default\">browse his work</a> or <a href=\"http://blogs.igalia.com/itoral/?p=24\">read his post</a> to learn about them, or ask him in <strong>#build-brigade</strong>. It’s interesting work, and accepting and extending these unit tests in mainstream gtk+ would be great for gtk+ quality.</p>\n<p>Two main goals in your tests:</p>\n<ul>\n<li>The first goal would be <strong>adding tests checking known bugs</strong>. If you detect a bug and don’t want it to regress, you can add a test that detects if it happens again.</li>\n<li>The second goal is <strong>trying to run as many code paths as you can</strong>, in order to be sure they behave as expected, and don’t segfault. Code coverage tools help you do this. You can <a href=\"http://gtktests-buildbot.fisterra.org/gnomeslave/gtk+/lcov/\">view current coverage of these gtk+ tests</a>.</li>\n</ul>\n<p>Wouldn’t it be great to know that your new features don’t break other code’s behaviour before committing? Just run <code>make check</code> and you’ll begin to be more confident about that.</p>\n<p>Of course, these notes apply directly only to C projects. But replace Check with your favourite test framework for your language ;).</p>\n",
			"date_published": "2006-10-06T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/10/03/first-jhbuild-buildbot-prototypes-online/",
			"url": "https://blogs.igalia.com/dape/2006/10/03/first-jhbuild-buildbot-prototypes-online/",
			"title": "First jhbuild buildbot prototypes online",
			"content_html": "<p>Yesterday I’ve deployed my <a href=\"http://community.igalia.com/twiki/bin/view/Main/JhbuildBuildbot\">jhbuild buildbot</a> prototypes:</p>\n<ul>\n<li><a href=\"http://buildbot.fisterra.org\">Fisterra buildbot</a>: it compiles the CVS HEAD for Fisterra Distribution and Fisterra Garage. It also has some unit tests integrated in fisterra-base. You can check <a href=\"http://buildbot.fisterra.org//fisterra-base/lcov/\">coverage of last results</a>.</li>\n<li><a href=\"http://gtktests-buildbot.fisterra.org\">Gtk with unit tests buildbot</a>: this is a playground to add unit tests implemented with <a href=\"http://check.sf.net\">Check</a> in <a href=\"http://www.gtk.org\">gtk+</a>. These tests have been developed by Iago Toral (<a href=\"http://blogs.igalia.com/itoral/?p=13\">read his post</a>, <a href=\"http://bonsai.fisterra.org/cgi-bin/bonsai/rview.cgi?cvsroot=/var/publiccvs/&amp;dir=gtk%2b/ut&amp;module=gtk%20\">browse his tests</a>).</li>\n</ul>\n<p><a href=\"http://en.wikipedia.org/wiki/Code_coverage\">Code coverage</a> measure consists on counting the portion of code being executed by a set of tests. It gives a quantitative measure of the paths of code covered by our tests, and it’s specially useful for <a href=\"http://en.wikipedia.org/wiki/Unit_test\">unit testing</a>. One of the purposes of <a href=\"http://live.gnome.org/BuildBrigade\">Gnome Build Brigade</a> is adding this kind of measures to the continuous integration loop. This way, we can encourage developers to increase their marks, or even make parallel teams add new tests to the core libraries of Gnome. This would make Gnome more reliable, I think.</p>\n<p>Then, if you want to polish your library or project, please add some pretty tests. If you want help on Check, just contact us in <strong>#build-brigade</strong> gnome irc channel. 
Better quality in Gnome deserves this!</p>\n<p><strong>PS:</strong> it seems the link to Iago Toral’s test is not working (Bonsai seems to get in troubles when a module has some symbols like <code>+</code>). You can get the source of the tests in our CVS (<code>CVSROOT :pserver:anonymous@cvs.igalia.com:/var/publiccvs MODULE gtk+</code>).</p>\n",
			"date_published": "2006-10-03T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/09/27/reporting-in-buildbot-and-some-fixes-in-fisterra/",
			"url": "https://blogs.igalia.com/dape/2006/09/27/reporting-in-buildbot-and-some-fixes-in-fisterra/",
			"title": "Reporting in buildbot, and some fixes in Fisterra",
			"content_html": "<h3 id=\"jhbuild-buildbot-scripts-reporting\" tabindex=\"-1\">JHBuild Buildbot scripts reporting <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/09/27/reporting-in-buildbot-and-some-fixes-in-fisterra/\">#</a></h3>\n<p>This week I’ve been adding some new features to my <a href=\"http://community.igalia.com/twiki/bin/view/Main/JhbuildBuildbot\">JHBuild Buildbot scripts</a>, in order to get better reporting and information for projects:</p>\n<ul>\n<li>Implemented a custom Sources changes fetcher for the <em>gnome-cvs-commits</em> mailing list. This code fetchs the mails from a Maildir, and assigns them to the projects, filling the changes column. Now it only matches the projects with the same name of a jhbuild module, but it should be extended.</li>\n<li>Now you can set a list of email addresses to get the build status, using the buildbot MailNotifier. You can also set an IRC bot for each project (establishing an IRC server, channel and nick), using the buildbot IRC object). It’s too much simple, and I like more the IRC bot <a href=\"http://thomas.apestaart.org/log/\">Thomas</a> is using for gstreamer.</li>\n</ul>\n<h3 id=\"fisterra-and-jhbuild\" tabindex=\"-1\">Fisterra and JHBuild <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/09/27/reporting-in-buildbot-and-some-fixes-in-fisterra/\">#</a></h3>\n<p>In order to get a smaller testing platform for my jhbuild buildbot scripts, I’ve decided to target on <a href=\"http://www.fisterra.org\">Fisterra project</a>. To do this, I had to work a bit on it in order to make it easy to compile in a JHBuild environment:</p>\n<ul>\n<li>Fixed a lot of warnings compiling fisterra in gcc4. Now gcc4 is more strict on compatibility between signed and unsigned types. 
Then, I had to add a lot of type castings (mainly due to libxml2 and libgnomeprint).</li>\n<li>Improved the code generator scripts, and removed specific prefix dependencies.</li>\n</ul>\n<p>And yes, now I can run Fisterra in a JHBuild environment. The work has been uploaded to the CVS (<a href=\"http://bonsai.fisterra.org/cgi-bin/bonsai/rview.cgi?cvsroot=/var/publiccvs/&amp;dir=fisterra-jhbuild&amp;module=fisterra-jhbuild\">bonsai of fisterra-jhbuild</a>). I’ve uploaded instructions in <a href=\"http://community.igalia.com/twiki/bin/view/Fisterra/ProjectCVS\">Fisterra CVS web</a>. The available modules from the scripts are:</p>\n<ul>\n<li><em>fisterra-base</em>: the main library, including a Postgres/libgda based persistence layer, a client-server Orbit based communication system, a powerful listing system, and a Gtk-based client library.</li>\n<li><em>fisterra-bmodules</em>: implementation of some business objects (actors, documents, tax, payments, …) in fisterra framework, both server objects and client widgets.</li>\n<li><em>fisterra-distribution</em>: a Point-of-shell app built on top of <em>fisterra-base</em> and <em>fisterra-bmodules</em>.</li>\n<li><em>fisterra-repair</em>: an app to manage car repairing garage companies, also built on top of <em>fisterra-base</em> and <em>fisterra-bmodules</em>.</li>\n</ul>\n",
			"date_published": "2006-09-27T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/09/21/igalia-celebrates-5th-anniversary/",
			"url": "https://blogs.igalia.com/dape/2006/09/21/igalia-celebrates-5th-anniversary/",
			"title": "Igalia celebrates 5th anniversary",
			"content_html": "<p>Yes! Today, 21-September-2006, <a href=\"http://www.igalia.com/en/press_room/news/igalia_celebra_su_quinto_aniversario\">Igalia celebrates its 5th anniversary</a>!</p>\n<p><img src=\"https://blogs.igalia.com/dape/2006/09/21/igalia-celebrates-5th-anniversary/images/nad0001_001sized.jpg\" alt=\"nad0001_001sized.jpg\"></p>\n<p>An example of party at Igalia office (year 2003 ;))</p>\n",
			"date_published": "2006-09-21T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/09/19/jhbuild-buildbot-integration-scripts-done/",
			"url": "https://blogs.igalia.com/dape/2006/09/19/jhbuild-buildbot-integration-scripts-done/",
			"title": "JHBuild Buildbot integration scripts... done!",
			"content_html": "<h3 id=\"buildbot\" tabindex=\"-1\">Buildbot <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/09/19/jhbuild-buildbot-integration-scripts-done/\">#</a></h3>\n<p>Yesterday I’ve ended the documentation and upload of my JHbuild buildbot integration scripts. I’ve published a manual (<a href=\"http://community.igalia.com/twiki/bin/view/Main/JhbuildBuildbot\">jhbuild buildbot manual</a>). With this bot, I’ve got:</p>\n<ul>\n<li>All Gnome 2.16 modules compiling in <a href=\"http://buildbot.sf.net\">Buildbot</a>. The configuration files use the <a href=\"http://cvs.gnome.org/viewcvs/jhbuild/modulesets/gnome-2.16.modules?view=markup\">JHBuild Gnome 2.16 moduleset</a>. My scripts are easily configurable to use other modulesets.</li>\n<li>There’s a preliminar support for tests and coverage. Currently, it runs <code>make check</code> for all Gnome modules supporting this <code>Makefile</code> rule, and then it runs <code>lcov</code> to explore the <a href=\"http://en.wikipedia.org/wiki/Code_coverage\">code coverage</a> of those tests.</li>\n<li>I’ve done some tricks to avoid a huge load produced by buildbot managing and compiling too much modules.</li>\n</ul>\n<p>This week I’ll try to get this server in production (currently I’m running it in my PC). There’s a CVS with the scripts, and you can browse them with our <a href=\"http://bonsai.fisterra.org/cgi-bin/bonsai/rview.cgi?cvsroot=/var/publiccvs/&amp;dir=jhbuild-buildbot-scripts&amp;module=jhbuild-buildbot-scripts\">Bonsai</a>.</p>\n<h3 id=\"fisterra\" tabindex=\"-1\">Fisterra <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/09/19/jhbuild-buildbot-integration-scripts-done/\">#</a></h3>\n<p>These days I’ve also been preparing some patches for <a href=\"http://www.fisterra.org\">Fisterra framework and apps</a> to get them easily compiled in JHBuild. I’ll try to merge the patches this week. 
They fix compilation with gcc 4, and make it easier to get the modules compiled in a non-standard prefix (as JHBuild requires).</p>\n<h3 id=\"5th-aniversary-of-igalia\" tabindex=\"-1\">5th aniversary of Igalia <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/09/19/jhbuild-buildbot-integration-scripts-done/\">#</a></h3>\n<p>Next thursday <a href=\"http://www.igalia.com\">Igalia</a> will be 5 years old. Fine to be such a long time with these guys :).</p>\n<h3 id=\"weekend\" tabindex=\"-1\">Weekend <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/09/19/jhbuild-buildbot-integration-scripts-done/\">#</a></h3>\n<p>Last saturday, I’ve gone with my girlfriend to an interesting restaurant in Vigo (Rosalia de Castro street). Tasty food (in special the hot chocolate fritters, or the veal tenderloin with strawberries and cheese sauce).</p>\n",
			"date_published": "2006-09-19T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/09/08/gcov-tests-with-gcc-4-x/",
			"url": "https://blogs.igalia.com/dape/2006/09/08/gcov-tests-with-gcc-4-x/",
			"title": "GCov tests with gcc 4.x",
			"content_html": "<p>Thanks to <a href=\"http://thomas.apestaart.org/log/\">Thomas</a> for giving me this clue to get coverage reports with code compiled with gcc 4.x.</p>\n<p>In order to get coverage logs, <code>CFLAGS</code> should include <code>-fprofile-arcs -ftest-coverage</code> (and preferrable to get also <code>-O0</code>). It’s what the gcc manual <a href=\"http://gcc.gnu.org/onlinedocs/gcc-4.1.1/gcc/Gcov-Data-Files.html#Gcov-Data-Files\">says</a>. It was enough for gcc 3.x. But in gcc 4.x it may not. When you link, you can get this error:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">hidden symbol `__gcov_init' <span class=\"token keyword\">in</span> /usr/lib/gcc/i486-linux-gnu/4.0.3/libgcov.a<span class=\"token punctuation\">(</span>_gcov.o<span class=\"token punctuation\">)</span> is referenced by DSO</code></pre>\n<p>The problem seems to be caused by the linker. In theory, adding <code>-ftest-coverage</code> should imply linking gcov automatically. But it’s not true, at least in my case. The solution: <strong>adding <code>-lgcov</code> to the <code>LDFLAGS</code></strong>. Finally I have coverage reports for my gcc 4.1.2 compiler.</p>\n<pre class=\"language-makefile\" tabindex=\"0\"><code class=\"language-makefile\">CFLAGS<span class=\"token operator\">=</span>-fprofile-arcs -ftest-coverage<br>LDFLAGS<span class=\"token operator\">=</span>-lgcov</code></pre>\n",
			"date_published": "2006-09-08T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/08/31/i-m-back/",
			"url": "https://blogs.igalia.com/dape/2006/08/31/i-m-back/",
			"title": "I&#39;m back",
			"content_html": "<p>First, hello to <a href=\"http://planet.gnome.org\">planet.gnome.org</a>! Thanks to <a href=\"http://perkypants.org/blog/\">jdub</a> for making it possible. Briefly, I’m working at <a href=\"http://www.igalia.com\">igalia</a> as a member of the Gnome technologies team, and currently I’m devoted to continuous integration and testing tasks under the umbrella of the <a href=\"http://live.gnome.org/BuildBrigade\">Build brigade group</a>. Hackergotchie is comming up, stay tuned.</p>\n<p>Back from holidays. Some notes:</p>\n<ul>\n<li>I’ve been for nearly a week in Portugal (Lisbon, Fatima, Castelo Branco, Alto Douro, Viana do Castelo, …). Very nice country… and food.</li>\n<li>Nearly one month of relax far away from a computer in Vigo. I did some local tourism there (visited Gondomar, Mondariz, Villasobroso, A Cañiza, and some nice places in Vigo like <a href=\"http://www.vigoenfotos.com/n_cepudo_1.html\">Monte Alba</a>, or the boat restaurant in the harbour).</li>\n<li>The worst thing was the terrible forest fire wave in Galicia. In Vigo I couldn’t even see the sun for a week, as all the sky was covered by a dense smoke cloud (photos in flicker <a href=\"http://www.flickr.com/photos/hormiga/210975274/\">Vigo Harbour</a>, <a href=\"http://www.flickr.com/photos/hormiga/210975270/\">Rande Bridge</a>).</li>\n</ul>\n<p>But now I’m back. These days I’ll try to go on with the efforts around Buildbot and jhbuild integration:</p>\n<ul>\n<li>I’ve got Buildbot running with jhbuild, as I told in my last post.</li>\n<li>Buildbot is not designed for this use, and it’s a problem because I want to provide better aggregated views (build status of a slave, last builds of a module, information about an specific module, etc). I’ll try to improve a bit these views.</li>\n<li>I need better unit test support in buildbot (specifically I would like to integrate coverage reports as Thomas has done in gstreamer buildbot, and better unit tests log reports).</li>\n</ul>\n",
			"date_published": "2006-08-31T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/07/28/jhbuild-buildbot/",
			"url": "https://blogs.igalia.com/dape/2006/07/28/jhbuild-buildbot/",
			"title": "Jhbuild + buildbot",
			"content_html": "<p>This week I’ve been playing a bit with buildbot, in order to complete my hack to get gnome jhbuild integrated with it. And it’s running now. I’ve uploaded a patch to buildbot project webpage (<a href=\"http://sourceforge.net/tracker/index.php?func=detail&amp;aid=1530272&amp;group_id=73177&amp;atid=537003\">buildbot jhbuild support patch in sf</a>).</p>\n<p>The ideas of the patch are the following:</p>\n<ul>\n<li>Each module in a jhbuild module set gets a factory</li>\n<li>The factory can be used to get builders in many slaves. Then we’ll get a builder for each factor and slave.</li>\n<li>An scheduler for each builder. In my case the first module uses a Periodic scheduler, and the others are Serial depending on the previous module.</li>\n</ul>\n<p>I added an example of a configuration file for buildbot master to get this working. Now I’m running gnome jhbuild in my machine. Saddenly the UI of buildbot does not scale very well for large sets of modules :(.</p>\n<p>A capture here:</p>\n<p><img src=\"https://blogs.igalia.com/dape/2006/07/28/jhbuild-buildbot/images/jhbuild-buildbot-capture.jpeg\" alt=\"jhbuild-buildbot-capture.jpeg\"></p>\n",
			"date_published": "2006-07-28T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/07/24/cvs-diff-of-new-files-over-a-read-only-cvs-repository-access/",
			"url": "https://blogs.igalia.com/dape/2006/07/24/cvs-diff-of-new-files-over-a-read-only-cvs-repository-access/",
			"title": "cvs diff of new files over a read-only CVS repository access",
			"content_html": "<p>Today I’ve found a problem trying to create a patch with cvs. I haven’t got write access to gnome repository. This way I have only read-only access to it. The problem is when I try to create a patch with new files involved.</p>\n<p>If I run:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">$ cvs <span class=\"token function\">diff</span> <span class=\"token parameter variable\">-N</span></code></pre>\n<p>it ignores the new files. If I run:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">$ cvs <span class=\"token function\">diff</span> <span class=\"token parameter variable\">-N</span> changedfile1 changedfile2 newfile1 newfile2</code></pre>\n<p>it says something like this for every new file:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">cvs server: I know nothing about newfile1</code></pre>\n<p>An easy solution would be creating two patches:</p>\n<ul>\n<li>One patch using standard cvs diff</li>\n<li>A second patch for new files, using diff command against <code>/dev/null</code>.</li>\n</ul>\n<p>But there’s another trick to get a standard patch from cvs. You have to edit the CVS/Entries file of the directory you want to add a file. For example, for a <code>newfile1</code> in directory <code>directory1</code>, you would change the <code>directory1/CVS/Entries</code> file and add an entry like this one:</p>\n<pre><code>/newfile1/0/Initial newfile1//\n</code></pre>\n<p>Then I can do a standard cvs diff command like this one:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">$ cvs <span class=\"token function\">diff</span> <span class=\"token parameter variable\">-N</span></code></pre>\n<p>and this will include the new files in the patch. Of course you can use the <code>-u</code>/<code>-U</code> parameters to get unified format patches.</p>\n",
			"date_published": "2006-07-24T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/07/17/dbus-support-for-jhbuild-2/",
			"url": "https://blogs.igalia.com/dape/2006/07/17/dbus-support-for-jhbuild-2/",
			"title": "DBus support for JHBuild (2)",
			"content_html": "<p>Since the last post, I’ve began to implement all the required methods to provide a complete DBus functionality in jhbuild. I added the following DBus interfaces:</p>\n<ul>\n<li><em>dict:string,dict:string,string</em> <strong>info (</strong><em>array:string</em> <strong>modulelist)</strong>: returns the information about the modules requested.</li>\n<li><em>array:struct:string,string</em> <strong>list (</strong><em>array:string</em> <strong>modulelist)</strong>: returns the list of required dependencies required to compile the module list.</li>\n<li><em>async</em> <strong>update (</strong><em>array:string</em> <strong>modulelist,</strong> <em>array:string</em> <strong>skiplist,</strong> <em>string</em> <strong>start,</strong> <em>string</em> <strong>datespec)</strong>: updates the modules in <strong>modulelist</strong> and their dependencies, starting in <strong>start</strong> parameter, ignoring the modules in <strong>skiplist</strong>. The checkouts will be obtained using the <strong>datespec.</strong></li>\n<li><em>async</em> <strong>updateone (</strong><em>array:string</em> <strong>modulelist,</strong> <em>string</em> <strong>datespec)</strong>: updates the modules in <strong>modulelist</strong>. The checkouts will be obtained using the <strong>datespec.</strong></li>\n<li><em>async</em> <strong>build (</strong><em>array:string</em> <strong>modulelist,</strong> <em>array:string</em> <strong>skiplist,</strong> <em>string</em> <strong>start,</strong> <em>string</em> <strong>datespec,</strong> <em>dict:string,string</em> <strong>options)</strong>: builds the modules in <strong>modulelist</strong> and their dependencies, starting in <strong>start</strong> parameter, ignoring the modules in <strong>skiplist</strong>. The checkouts will be obtained using the <strong>datespec</strong>. The <strong>options</strong> parameter is a dictionary of additional options. 
They can be <strong>autogen</strong> (forces running <code>autogen.sh</code> for the module), <strong>clean</strong> (calls <code>make clean</code> before compiling the modules) and <strong>nonetwork</strong> (forces avoiding network access, and therefore avoids accessing version control repositories).</li>\n<li><em>async</em> <strong>buildone (</strong><em>array:string</em> <strong>modulelist,</strong> <em>string</em> <strong>datespec,</strong> <em>dict:string,string</em> <strong>options)</strong>: builds the modules in <strong>modulelist</strong>. The checkouts will be obtained using the <strong>datespec</strong>. The <strong>options</strong> work in the same way as in the <strong>build</strong> command.</li>\n<li><em>void</em> <strong>set_autogen(</strong><em>boolean</em> <strong>enabled)</strong>: sets the global autogen parameter value. If set, it will always run <code>autogen.sh</code> before compiling modules.</li>\n<li><em>boolean</em> <strong>get_autogen()</strong>: obtains the current global autogen parameter value.</li>\n<li><em>void</em> <strong>set_clean(</strong><em>boolean</em> <strong>enabled)</strong>: sets the global makeclean parameter value. If set, it will always run <code>make clean</code> before compiling modules.</li>\n<li><em>boolean</em> <strong>get_clean()</strong>: obtains the current global makeclean parameter value.</li>\n<li><em>void</em> <strong>set_nonetwork(</strong><em>boolean</em> <strong>enabled)</strong>: sets the global nonetwork parameter value. If set, it will always avoid network access (and checkouts) for the <strong>build</strong> and <strong>buildone</strong> commands.</li>\n<li><em>boolean</em> <strong>get_nonetwork()</strong>: obtains the current global nonetwork parameter value.</li>\n<li><em>struct:string,string,string</em> <strong>get_status()</strong>: obtains the current compilation status. 
The struct has three fields: the current command (build, buildone, updateone, update), the current module, and the status/phase (idle, build, checkout, …).</li>\n</ul>\n<p>You can hook into the following signals:</p>\n<ul>\n<li><strong>start_build_signal ()</strong>: called when a build starts.</li>\n<li><strong>end_build_signal (</strong><em>array:string</em> <strong>failures)</strong>: called when the build ends. <strong>failures</strong> contains the list of modules that failed.</li>\n<li><strong>start_module_signal (</strong><em>string</em> <strong>module)</strong>: called when a module build starts. <strong>module</strong> contains the name of the module.</li>\n<li><strong>end_module_signal (</strong><em>string</em> <strong>module,</strong> <em>boolean</em> <strong>failed)</strong>: called when a module build ends. <strong>module</strong> contains the name of the module. <strong>failed</strong> tells if the module build has failed.</li>\n<li><strong>start_phase_signal (</strong><em>string</em> <strong>module,</strong> <em>string</em> <strong>state)</strong>: called when a phase in the build of a module begins. <strong>module</strong> contains the name of the module. <strong>state</strong> contains the name of the phase.</li>\n<li><strong>end_phase_signal (</strong><em>string</em> <strong>module,</strong> <em>string</em> <strong>state,</strong> <em>boolean</em> <strong>failed)</strong>: called when a phase in the build of a module ends. <strong>module</strong> contains the name of the module. <strong>state</strong> contains the name of the phase. <strong>failed</strong> tells if the build phase has failed.</li>\n<li><strong>message (</strong><em>string</em> <strong>message)</strong>: all the messages from the compilation logs are sent through this signal. If you want to retrieve the compilation log, you should hook in and get all these messages. 
This signal is emitted during each phase with a frequency of 5 seconds.</li>\n</ul>\n<p>The biggest problem I’ve faced these days was related to the way I could launch the subprocess for the build/update commands. As it’s called in a dbus handler, I couldn’t put the jhbuild script in another thread. And the method should return immediately, without waiting for the end of the command. I had to use the <code>gobject.Idle</code> interface to hook the build into the gobject mainloop, this way:</p>\n<ol>\n<li>I implemented a <code>gobject.Idle</code> subclass, and added a callback implementation that calls the builder script and returns False (so that it’s called only once).</li>\n<li>I attach this idle object to the mainloop, and then return from the dbus method implementation.</li>\n</ol>\n<p>The code for the idle subclass is something like this:</p>\n<pre class=\"language-python\" tabindex=\"0\"><code class=\"language-python\"><span class=\"token keyword\">class</span> <span class=\"token class-name\">JHBuildBuilderIdle</span><span class=\"token punctuation\">(</span>gobject<span class=\"token punctuation\">.</span>Idle<span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><br>    <span class=\"token keyword\">def</span> <span class=\"token function\">__init__</span><span class=\"token punctuation\">(</span>self<span class=\"token punctuation\">,</span> builder<span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><br>        gobject<span class=\"token punctuation\">.</span>Idle<span class=\"token punctuation\">.</span>__init__<span class=\"token punctuation\">(</span>self<span class=\"token punctuation\">)</span><br>        self<span class=\"token punctuation\">.</span>set_callback<span class=\"token punctuation\">(</span>self<span class=\"token punctuation\">.</span>callback<span class=\"token punctuation\">,</span> builder<span class=\"token punctuation\">)</span><br><br>    <span class=\"token keyword\">def</span> <span class=\"token function\">callback</span><span class=\"token punctuation\">(</span>self<span class=\"token punctuation\">,</span> builder<span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><br>        builder<span class=\"token punctuation\">.</span>build<span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span><br>        <span class=\"token keyword\">return</span> <span class=\"token boolean\">False</span></code></pre>\n<p>And the call in the dbus handler is this:</p>\n<pre class=\"language-python\" tabindex=\"0\"><code class=\"language-python\"><span class=\"token decorator annotation punctuation\">@dbus<span class=\"token punctuation\">.</span>service<span class=\"token punctuation\">.</span>method</span><span class=\"token punctuation\">(</span><span class=\"token string\">'org.gnome.JHBuildIFace'</span><span class=\"token punctuation\">)</span><br><span class=\"token keyword\">def</span> <span class=\"token function\">build</span><span class=\"token punctuation\">(</span>self<span class=\"token punctuation\">,</span> modulelist <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> skiplist <span class=\"token operator\">=</span> <span class=\"token punctuation\">[</span><span class=\"token punctuation\">]</span><span class=\"token punctuation\">,</span> start <span class=\"token operator\">=</span> <span class=\"token boolean\">None</span><span class=\"token punctuation\">,</span> datespec <span class=\"token operator\">=</span> <span class=\"token boolean\">None</span><span class=\"token punctuation\">,</span> options <span class=\"token operator\">=</span> <span class=\"token punctuation\">{</span><span class=\"token punctuation\">}</span><span class=\"token punctuation\">)</span><span class=\"token punctuation\">:</span><br>    <span class=\"token punctuation\">[</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">.</span><span class=\"token punctuation\">]</span><br><br>    build <span class=\"token operator\">=</span> jhbuild<span class=\"token punctuation\">.</span>frontends<span class=\"token punctuation\">.</span>get_buildscript<span class=\"token punctuation\">(</span>ownconfig<span class=\"token punctuation\">,</span> module_list<span class=\"token punctuation\">)</span><br><br>    idle <span class=\"token operator\">=</span> JHBuildBuilderIdle<span class=\"token punctuation\">(</span>build<span class=\"token punctuation\">)</span><br>    idle<span class=\"token punctuation\">.</span>attach<span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span><br>    <span class=\"token keyword\">return</span></code></pre>\n<p>Tomorrow I’ll add this work to the jhbuild bugzilla in order to begin the discussion upstream.</p>\n",
			"date_published": "2006-07-17T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/07/13/dbus-support-for-jhbuild/",
			"url": "https://blogs.igalia.com/dape/2006/07/13/dbus-support-for-jhbuild/",
			"title": "DBus support for JHBuild",
			"content_html": "<p>Today I’ve began the experiment to add DBus support to JHBuild. To do this I’ve established these goals:</p>\n<ul>\n<li>Be able to launch JHBuild as a DBus service. Also test how can I set it up with DBus activation.</li>\n<li>Implement a list method, equivalent to jhbuild list command.</li>\n<li>Implement one of the build methods.</li>\n</ul>\n<p>And guys, I did it! I’ve added a new dbus command to jhbuild to launch it as a DBus service, and begin to wait for user calls.And added a dbus frontend to buildscript. This frontend offers signals to know the status of a compilation (offering methods to get the logs, and to know the current status of compilation.</p>\n<p>If I run:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">dape@bonus:~$ dbus-send --print-reply <span class=\"token parameter variable\">--dest</span><span class=\"token operator\">=</span><span class=\"token string\">'org.gnome.JHBuild'</span><br><br>/org/gnome/JHBuildObject org.gnome.JHBuildIFace.build</code></pre>\n<p>it launches a complete build. 
For example, I can use <code>dbus-monitor</code> to view events, and I get a log like this one:</p>\n<pre class=\"language-bash\" tabindex=\"0\"><code class=\"language-bash\">signal <span class=\"token assign-left variable\">sender</span><span class=\"token operator\">=</span>:1.95 -<span class=\"token operator\">></span> <span class=\"token assign-left variable\">dest</span><span class=\"token operator\">=</span><span class=\"token punctuation\">(</span>null destination<span class=\"token punctuation\">)</span> <span class=\"token assign-left variable\">interface</span><span class=\"token operator\">=</span>org.gnome.JHBuildIFace<span class=\"token punctuation\">;</span> <span class=\"token assign-left variable\">member</span><span class=\"token operator\">=</span>end_phase_signal<br><br>string <span class=\"token string\">\"libxml2\"</span>    string <span class=\"token string\">\"checkout\"</span>    string <span class=\"token string\">\"false\"</span><br><br>signal <span class=\"token assign-left variable\">sender</span><span class=\"token operator\">=</span>:1.95 -<span class=\"token operator\">></span> <span class=\"token assign-left variable\">dest</span><span class=\"token operator\">=</span><span class=\"token punctuation\">(</span>null destination<span class=\"token punctuation\">)</span> <span class=\"token assign-left variable\">interface</span><span class=\"token operator\">=</span>org.gnome.JHBuildIFace<span class=\"token punctuation\">;</span> <span class=\"token assign-left variable\">member</span><span class=\"token operator\">=</span>start_phase_signal<br><br>string <span class=\"token string\">\"libxml2\"</span>    string <span class=\"token string\">\"build\"</span><br><br>signal <span class=\"token assign-left variable\">sender</span><span class=\"token operator\">=</span>:1.95 -<span class=\"token operator\">></span> <span class=\"token assign-left variable\">dest</span><span class=\"token operator\">=</span><span class=\"token 
punctuation\">(</span>null destination<span class=\"token punctuation\">)</span> <span class=\"token assign-left variable\">interface</span><span class=\"token operator\">=</span>org.gnome.JHBuildIFace<span class=\"token punctuation\">;</span> <span class=\"token assign-left variable\">member</span><span class=\"token operator\">=</span>message_signal<br><br>string <span class=\"token string\">\"Building libxml2\"</span><br><br>signal <span class=\"token assign-left variable\">sender</span><span class=\"token operator\">=</span>:1.95 -<span class=\"token operator\">></span> <span class=\"token assign-left variable\">dest</span><span class=\"token operator\">=</span><span class=\"token punctuation\">(</span>null destination<span class=\"token punctuation\">)</span> <span class=\"token assign-left variable\">interface</span><span class=\"token operator\">=</span>org.gnome.JHBuildIFace<span class=\"token punctuation\">;</span> <span class=\"token assign-left variable\">member</span><span class=\"token operator\">=</span>message_signal<br><br>string \"make  all-recursive make<span class=\"token punctuation\">[</span><span class=\"token number\">1</span><span class=\"token punctuation\">]</span>:<br><br>se ingresa al directorio `/usr/local/devel/dape/cvs/libxml2'<br><br>Making all <span class=\"token keyword\">in</span> include</code></pre>\n<p>This way we can communicate with the JHBuild compilation loop. It should be interesting to avoid running jhbuild (and loading python) for each compilation in an integration loop. But it also adds an asynchronous channel to communicate with continuous integration tools. I’ll go on completing the dbus interface for most used commands of jhbuild tomorrow, and also add customizability (currently it does not wait for many kinds of parameters, it should be improved). When it’s more polished I’ll propose the patch for jhbuild.</p>\n",
			"date_published": "2006-07-13T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/07/12/adapting-jhbuild-to-continuous-integration/",
			"url": "https://blogs.igalia.com/dape/2006/07/12/adapting-jhbuild-to-continuous-integration/",
			"title": "Adapting JHBuild to continuous integration",
			"content_html": "<p>These days I’ve been doing some work on JHBuild to make life easier for those who want to run it from a Continuous Integration loop. In particular, it should improve the experience for those using Buildbot (as we intend in the <a href=\"http://live.gnome.org/BuildBrigade\">Gnome Build Brigade</a>).</p>\n<h3 id=\"checkout-modes-for-jhbuild\" tabindex=\"-1\">Checkout modes for JHBuild <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/07/12/adapting-jhbuild-to-continuous-integration/\">#</a></h3>\n<p>In Buildbot, the most commonly used build scripts let you decide how checkouts are done. There are currently four options:</p>\n<ul>\n<li>Update mode: check out into a directory, then run updates over the same folder where you compile the tree. It’s the way jhbuild currently works. Problem: sometimes generated files are kept in CVS/SVN/whatever, and updating them can raise merge conflicts. This is very typical with gtk-doc templates.</li>\n<li>Clobber mode: wipes the build directory before checking out, so every time you compile you need to download the full tree. It avoids conflicts because there are no merges. If you always use clobber, it’s good to have a compilation cache to avoid recompiling all the trees every time.</li>\n<li>Export mode: similar to clobber, but it uses export instead of checkout. Basically, the difference is that you don’t get version control information in the build tree (so you can use it directly to generate a source package, for example).</li>\n<li>Copy mode: works like update mode, but version control is kept in a different directory. After checking out/updating, it copies all the files to the directory used for compilation. It avoids the merge conflict problem and lets you develop in the version control directory, making it easy to create patches. 
Unfortunately it stores the files twice.</li>\n</ul>\n<p>For me, the most interesting options are copy mode and clobber mode. In continuous integration loops we usually don’t modify the compilation trees, but merge conflicts still arise frequently. With these options, you get an easy solution.</p>\n<p>I’ve filed a <a href=\"http://bugzilla.gnome.org/show_bug.cgi?id=347114\">bug and a patch in JHBuild bugzilla to support checkout options</a>. With this patch, you can set a global checkout mode, plus specific checkout modes for each module, in the jhbuildrc file.</p>\n<h3 id=\"running-make-check-from-jhbuild-without-breaking-all-the-moduleset-build-every-time\" tabindex=\"-1\">Running make check from jhbuild without breaking all the moduleset build every time <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/07/12/adapting-jhbuild-to-continuous-integration/\">#</a></h3>\n<p>Currently JHBuild lets you run <code>make check</code> for modules using the autotools module type. If you set the <code>makecheck</code> variable in <code>jhbuildrc</code>, it will run <code>make check</code> before installing the packages. The problem: if the checks fail, JHBuild considers the module broken and will not compile the modules that depend on it.</p>\n<p>I’ve written a patch to change this behavior. With it, you can set a <code>unittestfails</code> variable in <code>jhbuildrc</code> (currently defaulting to <code>True</code> to maintain the current behavior). If this variable is false, <code>make check</code> failures don’t break the module’s compilation: JHBuild goes on compiling, but shows a warning message in the log.</p>\n<p>There was an old bug about this in JHBuild bugzilla. 
I <a href=\"http://bugzilla.gnome.org/show_bug.cgi?id=310544\">added the patch to this bug</a>.</p>\n<h3 id=\"future-work\" tabindex=\"-1\">Future work <a class=\"header-anchor\" href=\"https://blogs.igalia.com/dape/2006/07/12/adapting-jhbuild-to-continuous-integration/\">#</a></h3>\n<p>Two ideas for integrating JHBuild with other tools:</p>\n<ul>\n<li>Convert JHBuild to a Python module. This would give us a version-control-independent library for Python scripts. Buildbot is implemented in Python, so it could be a way to integrate JHBuild with more flexibility.</li>\n<li>Add DBus support. It would be interesting if applications could trigger compilations and register to wait for different compilation events (end of each module, stage, and so on). Then we could have better control of the compilation process from external tools.</li>\n</ul>\n<p>I’ll be working on this next week.</p>\n",
			"date_published": "2006-07-12T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/07/04/guadec-2006-experience-build-brigade/",
			"url": "https://blogs.igalia.com/dape/2006/07/04/guadec-2006-experience-build-brigade/",
			"title": "Guadec 2006 experience. Build brigade!",
			"content_html": "<p>As I wrote last week, I’ve been attending <a href=\"http://2006.guadec.org\">Guadec 2006 in Vilanova i la Geltru</a>. A very positive experience, with lots of meetings, talks and interesting events in general.</p>\n<p>Last Thursday there was a <a href=\"http://guadec.org/node/268\">BOF about Continuous Integration</a> hosted by Juan José Sánchez. There, I presented the work with Tinderbox here at Igalia, and we discussed the requirements for a Continuous Integration service in the Gnome project. There was more interest than I expected; we shared our experiences and decided to create a work group to get the infrastructure up. It’s called the <a href=\"http://live.gnome.org/BuildBrigade\">Gnome Build Brigade</a>, and we’ve set up a <a href=\"http://live.gnome.org/BuildBrigade\">wiki</a> (yes, everything belongs in the wiki!). Currently we have the fantastic work from Frederic Peters (<a href=\"http://jhbuild.bxlug.be/\">JHAutobuild</a>): it has been running for the last few months, and offers RSS information about compilation status to developers.</p>\n<p>But Thomas Vander Stichele proposed <a href=\"http://buildbot.sourceforge.net/\">Buildbot</a>, as they’re using it at Fluendo for gstreamer. The main reasons: well maintained, a big community, and more mature. So it seems the way to go, and I began testing it yesterday, as I’m planning to move Fisterra’s continuous integration to Buildbot.</p>\n<p>Yes, buildbot is very easy. It’s stricter about clients (slaves, in its notation), but it gives the server administrator better control over them, and the model is more secure. I’ve also been learning a bit about Twisted, the framework it uses to implement an asynchronous event system. I also implemented a prototype of integration with jhbuild that works for small sets of modules, but does not handle dependencies yet. My commitment to the Build Brigade group is the jhbuild work anyway. 
These days I’ll be working on JHBuild integration, extending it to fit better with CI systems (<a href=\"http://bugzilla.gnome.org/show_bug.cgi?id=344888\">I’ve prepared a patch for stage splitting</a>, and I want to add more checkout options). I’ll write about this in the next few days.</p>\n",
			"date_published": "2006-07-04T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/06/25/guadec-2006/",
			"url": "https://blogs.igalia.com/dape/2006/06/25/guadec-2006/",
			"title": "Guadec 2006",
			"content_html": "<p>Yes, I made it to Guadec this Friday. After a long trip (a fast flight, a not-so-fast train ride), we arrived at the Gnome Village. There you can find lots of Gnomers from nearly any project you’re interested in.</p>\n<p>Vilanova i la Geltru is a quiet town near Barcelona. Unfortunately the Gnome Village is not near the Guadec rooms, so there have been lots of problems moving people between the venues (dining rooms, the town centre, the bungalows in the village). It also seems there are few taxis, so if you get the bus schedules wrong, you’re in serious trouble.</p>\n<p>But it’s Guadec. Lots of interesting presentations. Right now I’m writing from the <em>Carpa</em> and listening to an interesting talk by John Laerum about how to give a good presentation. I’ve realized I’m not very good at my own presentations, but the tips are interesting, and I hope to learn something for future ones.</p>\n<p>Meanwhile, I keep improving the <a href=\"http://tbox3.igalia.com\">Gnome Tinderbox 3 deployment at Igalia</a>. I hope it can be complete enough by Thursday, so we can talk about it at the Continuous Integration BOF here at Guadec.</p>\n",
			"date_published": "2006-06-25T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/06/21/tinderbox-3-building-gnome/",
			"url": "https://blogs.igalia.com/dape/2006/06/21/tinderbox-3-building-gnome/",
			"title": "Tinderbox 3 building Gnome",
			"content_html": "<p>Here at <a href=\"http://www.igalia.com\">Igalia</a> we’re very interested in some aspects of software quality, and especially in continuous integration. We’re very used to having continuous integration services for our projects, as one of the fundamentals of managing a group of programmers accessing the same repository.</p>\n<p>To do this, we’ve used three different continuous integration servers, depending on the project:</p>\n<ul>\n<li>Tinderbox2, for projects based on Gnome technologies. For example, you can see our <a href=\"http://tinderbox.fisterra.org/fisterra.express.html\">Fisterra tinderbox</a>, running compilations of our middleware every two hours.</li>\n<li>Cruisecontrol, for Java/PHP web-based projects.</li>\n<li>Tinderbox3, the one I’m working on these days.</li>\n</ul>\n<p>As a result of the <a href=\"http://blogs.igalia.com/dape/petition%20in%20Gnome%20Love%20list%20for%20a%20tinderbox\">petition on the Gnome Love list for a tinderbox</a>, we started an effort to adapt Tinderbox to Jhbuild and run modulesets inside it. It led to two parallel efforts, one with Tinderbox2 and one with Tinderbox3. Meanwhile, <a href=\"http://www.bxlug.be/\">BxLUG</a> has done some work in the same direction, which you can see on the <a href=\"http://jhbuild.bxlug.be/\">Gnome JhAutobuild webpage</a>.</p>\n<p>The Tinderbox2 work was done easily. You can check the current experimental status of the portal in our <a href=\"http://gnometinderbox.igalia.com/gnome.express.html\">Tinderbox2 based Gnome Tinderbox</a>. We are integrating unit tests and coverage into it, and we’ve also added RSS feeds for the modules (see <a href=\"http://tinderbox.fisterra.org/\">here</a>).</p>\n<p>And now I’m working on a Tinderbox 3 setup. Like the T2 one, it’s experimental. You can check it on our <a href=\"http://tbox3.igalia.com\">Gnome Tinderbox3 webpage</a>. I’m tweaking and hacking these days, so it’s changing fast. 
Some features are:</p>\n<ul>\n<li>A wonderful <em>show all builds</em> view, containing the latest compilations of all modules on a timeline.</li>\n<li>RSS feeds for every module, and one for all the trees. With them you can subscribe to the latest compilation failures.</li>\n<li>SSL-enabled client and server. Communications in Tinderbox3 are done over HTTP/HTTPS, making it easy to configure clients securely.</li>\n<li>The compilation clients detect the modules containing a <em>make check</em> rule in the Makefile, and run the tests. In pure Tinderbox tradition, you can see the modules failing at the unit tests stage in orange.</li>\n</ul>\n<p>I would like to share this effort with the community. Related to this, a BOF will be held at GUADEC 2006 next week in Vilanova. There’s more information about the <a href=\"http://guadec.org/node/268\">continuous integration BOF on the GUADEC webpage</a>. It will be next Thursday the 29th at 12:00. See you there!</p>\n",
			"date_published": "2006-06-21T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/06/06/my-first-programming-book/",
			"url": "https://blogs.igalia.com/dape/2006/06/06/my-first-programming-book/",
			"title": "My first programming book",
			"content_html": "<p>Yesterday I saw an entry in <a href=\"http://blogs.igalia.com/juanjo\">Juanjo’s blog</a> talking about the book he started to learn programming with. I immediately decided I had to look for my own starter book.</p>\n<p>I was 7 years old when my parents gave us an MSX computer (a <a href=\"http://www.computermuseum.li/Testpage/Toshiba-HX-10-homecomputer.htm\">Toshiba HX-10</a>). One month later my father decided he wanted to write software for his business (he runs a clinical laboratory), and bought a very big book, <em>MSX, Guía del programador y manual de referencia</em> (<em>The Complete MSX Programmers Guide</em>, by T.Sato, P.Mapstone and I.Muriel, 1984).</p>\n<p><img src=\"https://blogs.igalia.com/dape/2006/06/06/my-first-programming-book/images/msx-guia-programador.jpg\" alt=\"Front of the book\"></p>\n<p>It was the beginning. Since then my life has been heavily tied to computer science. I’ve played many games on computers, written lots of documents, drawn pictures, even made music. But the start was a BASIC program. This year marks 20 years of programming for me.</p>\n",
			"date_published": "2006-06-06T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/04/25/let-s-kill-the-templates-in-gtk-doc/",
			"url": "https://blogs.igalia.com/dape/2006/04/25/let-s-kill-the-templates-in-gtk-doc/",
			"title": "Let&#39;s kill the templates in gtk-doc",
			"content_html": "<p>Once upon a time, there was gtk-doc. It was weird and strange and, overall, different from other standards like javadoc or doxygen. But we gnomish people were tied to it, as it was the only one that worked correctly with GtkObject/GObject. It’s still that way, as I don’t know of any good solution with Doxygen or another tool to fill gtk-doc’s place.</p>\n<p>Why was it weird? For me, the worst point was the need to maintain a set of files called templates. They were docbook templates covering documentation that you couldn’t add to your source code. These files were processed by scripts that didn’t work very well with version control systems.</p>\n<p>With gtk-doc 1.5 we can avoid the templates, at least for most of the use cases where they were required. Now you can add a comment like this one to the source code:</p>\n<pre class=\"language-cpp\" tabindex=\"0\"><code class=\"language-cpp\"><span class=\"token comment\">/**<br> * SECTION:myclassname<br> * @short_description: This is my short description for #MyClassName<br> * @see_also: #Other, #AnyOtherOne<br> *<br> * It's a long description about this and that, and other things. You can write<br> * about many things and enter sgml tags like:<br> * &lt;itemizedlist>&lt;listitem>an itemized list&lt;/listitem>&lt;/itemizedlist><br> */</span></code></pre>\n<p>With this structure you finally remove the templates requirement. One additional note: due to a bug in gtk-doc 1.5, the general makefile scripts provided don’t work without templates. You have to add a rule like this to the documentation <code>Makefile.am</code>:</p>\n<pre class=\"language-makefile\" tabindex=\"0\"><code class=\"language-makefile\"><span class=\"token target symbol\">tmpl/*.sgml</span><span class=\"token punctuation\">:</span> scan-build.stamp</code></pre>\n<p>Another interesting point is using <code>gtk-doc.make</code>. 
It’s a script provided by the gtk-doc distribution that supplies the general structure for gtk-doc processing. You can use it by adding this line to the <code>Makefile.am</code>:</p>\n<pre class=\"language-makefile\" tabindex=\"0\"><code class=\"language-makefile\"><span class=\"token keyword\">include</span> <span class=\"token variable\">$</span><span class=\"token punctuation\">(</span>top_srcdir<span class=\"token punctuation\">)</span>/gtk-doc.make</code></pre>\n",
			"date_published": "2006-04-25T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/04/24/latex-make-enters-debian/",
			"url": "https://blogs.igalia.com/dape/2006/04/24/latex-make-enters-debian/",
			"title": "latex-make enters debian",
			"content_html": "<p>For all of you folks who always complain about how difficult it is to build latex documents with those complex Makefiles compiling figures, linking eps files, and doing ugly work to generate pdf or ps automatically, <a href=\"http://packages.debian.org/unstable/tex/latex-make\">latex-make</a> is here.</p>\n<p>It’s a makefile include that helps you compile a latex document and track its dependencies. For example, if you want to compile a latex-only document (of one or more files), you only need a tiny Makefile. Imagine the master tex file is called <code>mydocument.tex</code>. Then you would write a <code>Makefile</code> with two lines like this:</p>\n<pre class=\"language-makefile\" tabindex=\"0\"><code class=\"language-makefile\">LU_MASTERS=mydocument.tex<br><br>include LaTeX.mk</code></pre>\n<p>If you want to choose which kinds of output it compiles by default, you only have to state the flavors (PS, DVIPDF and/or PDF):</p>\n<pre class=\"language-makefile\" tabindex=\"0\"><code class=\"language-makefile\">FLAVORS=DVIPDF</code></pre>\n<p>And about the figures: if you keep your document’s figures in the <code>figures</code> directory, you only have to add:</p>\n<pre class=\"language-makefile\" tabindex=\"0\"><code class=\"language-makefile\">LU_mydocument_GPATH=figures</code></pre>\n<p>There are more interesting features: it adds modules to handle <code>.fig</code> files and bibliographies.</p>\n",
			"date_published": "2006-04-24T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/03/30/gtknotebook-and-dogtail/",
			"url": "https://blogs.igalia.com/dape/2006/03/30/gtknotebook-and-dogtail/",
			"title": "GtkNotebook and Dogtail",
			"content_html": "<p>This week I’ve been hacking for a while on <a href=\"http://people.redhat.com/zcerza/dogtail/\">Dogtail</a>. It’s a Python library intended for writing functional tests of applications that provide accessibility support through AT-SPI. It lets you inspect and manipulate the user interface from Python scripts, and it’s easy to integrate into test cases.</p>\n<p>While working, I found that I could not easily browse tabs in a GtkNotebook using dogtail. This issue has been reported in Gnome bugzilla (<a href=\"http://bugzilla.gnome.org/show_bug.cgi?id=333887\">bug report</a>). The problem is:</p>\n<ul>\n<li>Dogtail uses the Python SPI bindings (in the pyspi project) to access the accessibility layer. These bindings are a direct wrapper over CSPI.</li>\n<li>The GtkNotebook accessibility implementation has the <em>page tab list</em> role, and the tabs have the <em>page tab</em> role. The notebook exposes the currently selected tab through the Selection interface, which lets you specify a selection of children of an accessible container.</li>\n<li>Unfortunately, neither the Python SPI bindings nor Dogtail provide support for this, so it’s not easy to change the currently active tab of a notebook.</li>\n</ul>\n<p>I’m not the only one who has run into the problem. Peter Johanson has filed two bugs with patches to solve the issue and add tab browsing support in Dogtail (<a href=\"http://bugzilla.gnome.org/show_bug.cgi?id=336561\">report in Pyspi</a> and <a href=\"http://bugzilla.gnome.org/show_bug.cgi?id=336562\">report in Dogtail</a>). In parallel, I was developing similar patches. As Peter Johanson’s are more complete, this morning I simply added some functionality to the Dogtail patch to provide an easier way to use the selection interface in Dogtail. 
I’ve uploaded the new patch to the Dogtail bug report.</p>\n<p>Now you can simply write:</p>\n<pre class=\"language-python\" tabindex=\"0\"><code class=\"language-python\">focus<span class=\"token punctuation\">.</span>application<span class=\"token punctuation\">(</span><span class=\"token string\">'gedit'</span><span class=\"token punctuation\">)</span><br>focus<span class=\"token punctuation\">.</span>widget<span class=\"token punctuation\">(</span>roleName<span class=\"token operator\">=</span><span class=\"token string\">'page tab'</span><span class=\"token punctuation\">,</span> name<span class=\"token operator\">=</span><span class=\"token string\">'My tab'</span><span class=\"token punctuation\">)</span><br>focus<span class=\"token punctuation\">.</span>widget<span class=\"token punctuation\">.</span>node<span class=\"token punctuation\">.</span>select<span class=\"token punctuation\">(</span><span class=\"token punctuation\">)</span></code></pre>\n<p>And it raises the desired tab easily. Hope those patches go upstream. I think they’re a good solution for the tab browsing issue.</p>\n",
			"date_published": "2006-03-30T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/03/20/dbus-and-the-power-and-network-managers/",
			"url": "https://blogs.igalia.com/dape/2006/03/20/dbus-and-the-power-and-network-managers/",
			"title": "DBUS and the power and network managers",
			"content_html": "<p>Some time ago, at the <a href=\"http://2005.guadec.es.org\">Guadec-es</a> in Coruña, there was an interesting meeting about possible enhancements to the Gnome desktop. It was mainly centered on i18n issues, but I talked about the way other platforms handle network APIs.</p>\n<p>Briefly, I missed a way to know about network status and manage connections, as other platforms offer (for example, the Microsoft Windows SDKs provide this kind of API with the <a href=\"http://msdn.microsoft.com/library/default.asp?url=/library/en-us/apippc/html/ppc_cnmn_connection_manager.asp\">Connection Manager API</a>). The same applies to power management.</p>\n<p>But now there’s interesting work facing this issue. If the <a href=\"http://cvs.gnome.org/viewcvs/NetworkManager/\">Network manager</a> and <a href=\"http://cvs.gnome.org/viewcvs/gnome-power-manager/\">Power manager</a> DBUS message providers get more or less standardised across Linux distros, we’ll get a standard API for this kind of use case:</p>\n<ul>\n<li>Know whether we’re connected to the internet.</li>\n<li>Bring up the default network connection to the internet if it’s down (useful for devices like palms or mobile phones, or even PCs with analog modem internet), maybe with a standard dialog in Gnome.</li>\n<li>Get some knowledge about the network status.</li>\n<li>Know whether the PC or device is plugged in, and decide whether our application wants to save power.</li>\n</ul>\n<p>It seems these managers, and their use of DBUS, are the way to go. Good luck!</p>\n",
			"date_published": "2006-03-20T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/02/13/bed-exams-and-rock-n-roll/",
			"url": "https://blogs.igalia.com/dape/2006/02/13/bed-exams-and-rock-n-roll/",
			"title": "Bed, exams and rock&#39;n&#39;roll",
			"content_html": "<p>It’s been a long time since I last wrote here. I’ve been sick for some days, trying to work on some stuff. I had my midterm exams at the Escuela Oficial de Idiomas (Official Language School); I’m in the 5th year of English (the last one) here in Vigo. They seem to have gone well. The bad part was the writing exam, which was based on a Lost episode I hadn’t seen (the 7th of the second season). Fortunately, it didn’t seem to contain any spoilers.</p>\n<p><strong>Local music</strong></p>\n<p>In the last few weeks I went to two concerts by local groups. The first was in El Ensanche (Vigo), performed by <a href=\"http://www.ivanferreiro.com\">Iván Ferreiro</a> with the collaboration of <a href=\"http://www.xoel.com\">Xoel</a>. They played as <em>Rai Doriva e as Ferreiro</em>, covering classics such as In Between Days by The Cure, some songs by Andrés Calamaro, and so on. Good to see them again.</p>\n<p>And yesterday I went to a concert by <a href=\"http://www.heisselweb.com\">Heissel</a> at Pub Aturuxo (Bueu). It was the second time I’ve seen them, and again it was interesting. This time we talked a bit with some members of the group. They’ve just released their first recording, <em>El lugar más feliz</em> (The Happiest Place).</p>\n<p><strong>Debian Gnome</strong></p>\n<p>Last week Gnome 2.12 finally hit Debian Sid completely. A big improvement to my music player, <a href=\"http://www.gnome.org/projects/rhythmbox\">Rhythmbox</a>: libnotify popups, UI management of queued-to-play songs, and other things. Congratulations to the developers!</p>\n",
			"date_published": "2006-02-13T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/01/27/request-for-help-in-debian-gnome-team/",
			"url": "https://blogs.igalia.com/dape/2006/01/27/request-for-help-in-debian-gnome-team/",
			"title": "Request for help in Debian GNOME team",
			"content_html": "<p>In <a href=\"http://www.debian.org/News/weekly/2006/04/\">this week’s release of Debian Weekly News</a> there’s a request for help from the Debian GNOME team. You can see a <a href=\"http://qa.debian.org/developer.php?login=pkg-gnome-maintainers@lists.alioth.debian.org\">list of required tasks</a> on Alioth.</p>\n<p>Maybe an opportunity for you folks, if you want to help with such an important task.</p>\n",
			"date_published": "2006-01-27T00:00:00Z"
		}
		,
		{
			"id": "https://blogs.igalia.com/dape/2006/01/26/welcome/",
			"url": "https://blogs.igalia.com/dape/2006/01/26/welcome/",
			"title": "Welcome!",
			"content_html": "<p>Hi guys. Here in Igalia many of us are starting to blog. So welcome to this area!</p>\n",
			"date_published": "2006-01-26T00:00:00Z"
		}
		
	]
}
