Igalia Compilers Team

Est. 2011

Igalia's Compilers Team in 2024

2024 marked another year of exciting developments and accomplishments for Igalia's Compilers team, packed with milestones, breakthroughs, and a fair share of long debugging sessions. From advancing JavaScript standards and improving LLVM RISC-V performance to diving deep into Vulkan and FEX emulation, we did it all.

From shipping require(esm) in Node.js to porting LLVM’s libc to RISC-V and enabling WebAssembly’s highest optimization tier in JavaScriptCore, last year was nothing short of transformative. So, grab a coffee (or your preferred debugging beverage), and let’s take a look back at the milestones, challenges, and just plain cool stuff we've been up to.

JavaScript Standards #

We secured a few significant wins last year when it comes to JavaScript standards. First up, we got import attributes (alongside JSON modules) to Stage 4. Import attributes allow customizing how modules are imported. For example, in all JavaScript environments you'll be able to natively import JSON files using:

import myData from "./data" with { type: "json" };

Not far behind, the Intl.DurationFormat proposal also reached Stage 4. Intl.DurationFormat provides a built-in way to format durations (e.g., days, hours, minutes) in a locale-sensitive manner, enhancing internationalization support.

We also advanced ShadowRealm, the JavaScript API that allows you to execute code in a fresh and isolated environment, to Stage 2.7, making significant progress in resolving the questions about which web APIs should be included. We addressed open issues related to HTML integration and ensured comprehensive WPT coverage.

We didn't stop there though. We implemented MessageFormat 2.0 in ICU4C; you can read more about it in this blog post.

We also continued working on AsyncContext, an API that would let you persist state across awaits and other ways of running code asynchronously. The main blocker for Stage 2.7 is figuring out how it should interact with web APIs, and events in particular, and we have made a lot of progress in that area.

Meanwhile, the source map specification got a major update, with the publication of ECMA-426. This revamped spec, developed alongside Bloomberg, brings much-needed precision and new features like ignoreList, all aimed at improving interoperability.

We also spent time finishing Temporal, the modern date and time API for JavaScript—responding to feedback, refining the API, and reducing binary size. After clearing those hurdles, we moved forward with Test262 coverage and WebKit implementation.

Speaking of Test262, our team continued our co-stewardship of this project that ensures compatibility between JavaScript implementations across browsers and runtimes, thanks to support from the Sovereign Tech Fund. We worked on tests for everything from resizable ArrayBuffers to deferred imports, keeping JavaScript tests both thorough and up to date. To boost Test262 coverage, we successfully ported the first batch of SpiderMonkey's non-262 test suite to Test262. This initiative resulted in the addition of approximately 1,600 new tests, helping to expand and strengthen the testing framework. We would like to thank Bloomberg for supporting this work.

The decimal proposal started the year in Stage 1 and remains so, but it has gone through a number of iterative refinements after being presented at the TC39 plenary.

It was a productive year, and we’re excited to keep pushing these and more proposals forward.

Node.js #

In 2024, we introduced several key enhancements in Node.js.

We kicked things off by adding initial support for CPPGC-based wrapper management, which helps make C++/JS cross-heap references visible to the garbage collector, reduces the risk of memory leaks and use-after-frees, and improves garbage collection performance.

Node.js contains a significant amount of JavaScript internals, which are precompiled and preloaded into a custom V8 startup snapshot for faster startup. However, embedding these snapshots and code caches introduced reproducibility issues in Node.js executables. In 2024, we made the built-in snapshot and code cache reproducible, a major milestone toward making Node.js executables reproducible.

To help user applications start up faster, we also shipped support for an on-disk compilation cache for user modules. Using this feature, TypeScript made their CLI start up ~2.5x faster, for example.

One of the most impactful pieces of work we did in 2024 was implementing and shipping require(esm), which is set to accelerate ECMAScript Modules (ESM) adoption in the Node.js ecosystem. Package maintainers can now ship ESM directly, without having to choose between setting up dual shipping and losing reach, and many frameworks/tools can load user code as ESM directly instead of either rejecting it outright or performing hacky ESM -> CJS conversions, which tend to be bug-prone. Additionally, we landed module.registerHooks() to help the ecosystem migrate away from depending on CJS loader internals and to improve the state of ESM customization.

We also shipped a bunch of other smaller semver-minor features throughout 2024, such as support for embedded assets in single executable applications, crypto.hash() for more efficient one-off hashing, and v8.queryObjects() for memory leak investigation, to name a few.

Apart from project work, we also co-organized the Node.js collaboration summit in Bloomberg's London office, and worked on Node.js's Bluesky content automation for a more transparent and collaborative social media presence of the project.

You can learn more about the new module loading features from our talk at ViteConf Remote, and about require(esm) from our NodeConf EU talk.

JavaScriptCore #

In JavaScriptCore, we've ported BBQJIT, the first WebAssembly optimizing tier, to 32-bit platforms. It should be a solid improvement over the previous fast-and-reasonably-performant tier (BBQ) for most workloads. The previous incarnation of this tier generated Air, JSC's low-level IR; BBQJIT generates machine code more directly, which means JSC can tier up to it faster.

We're also very close to enabling (likely this month) the highest optimizing tier, called "OMG", for WebAssembly on 32-bit platforms. OMG generates code in the B3 IR, for which JSC implements many more optimizations; B3 then gets lowered to Air and finally to machine code. OMG can increase peak performance for many workloads, at the cost of more time spent on compilation. This has been a year-long effort by multiple people.

LLVM #

In LLVM's RISC-V backend, we added full scalable vectorization support for the BF16 vector extensions zvfbfmin and zvfbfwma. This means that code like the following C snippet:

void f(float * restrict dst, __bf16 * restrict a, __bf16 * restrict b, int n) {
  for (int i = 0; i < n; i++)
    dst[i] += ((float)a[i] * (float)b[i]);
}

now gets efficiently vectorized into assembly like this:

	vsetvli	t4, zero, e16, m1, ta, ma
.LBB0_4:
	vl1re16.v	v8, (t3)
	vl1re16.v	v9, (t2)
	vl2re32.v	v10, (t1)
	vfwmaccbf16.vv	v10, v8, v9
	vs2r.v	v10, (t1)
	add	t3, t3, a4
	add	t2, t2, a4
	sub	t0, t0, a6
	add	t1, t1, a7
	bnez	t0, .LBB0_4

On top of that, we’ve made significant strides in overall performance last year. Here's a bar plot showing the improvements in performance from LLVM 17 last November to now.

Note: This accomplishment is the result of the combined efforts of many developers, including those at Igalia!

Bar graph

We also ported most of LLVM's libc to rv32 and rv64 in September (~91% of functions enabled). We presented the results at the 2024 LLVM Developers' Meeting; you can watch the video of the talk to learn more.

Pie chart

Shader compilation (Mesa IR3) and dynamic binary translation (FEX-Emu) #

As we look ahead, we are excited to continue driving the evolution of these technologies while collaborating with our amazing partners and communities.