Archive for the ‘Igalia’ Category

Optimizing shader assembly instructions on Mesa using shader-db (II)

Friday, September 18th, 2015

In my previous post I mentioned that I have been working on optimizing the shader instruction count for specific shaders guided by shader-db, and showed one specific example. In this post I will show another one, slightly more complex in both the triaging and the solution.

Some of the shaders with a worse instruction count can be found at shader-db/shaders/dolphin. Again I analyzed them in order to get the simplest possible shader with the same issue:

   #version 130

   in vec2 myData;

   void main() {
      gl_Position = vec4(myData, 3.0, 4.0);
   }

Some comments:

  • It also happens with uniforms (so you can replace “in vec2” with “uniform vec2”)
  • It doesn’t happen if you use the input directly. You need this kind of “input plus constant” combination.

So, as in my previous post, I executed the compilation using the INTEL_DEBUG=optimizer option. In the IR case I got the following files:

  • VS-0001-00-start
  • VS-0001-01-01-opt_reduce_swizzle
  • VS-0001-01-04-opt_copy_propagation
  • VS-0001-01-07-opt_register_coalesce
  • VS-0001-02-02-dead_code_eliminate
  • VS-0001-02-07-opt_register_coalesce

This being the desired outcome (the content of VS-0001-02-02-dead_code_eliminate):

0: mov m3.z:F, 3.000000F
1: mov m3.w:F, 4.000000F
2: mov m3.xy:F, attr17.xyyy:F
3: mov m2:D, 0D
4: vs_urb_write (null):UD

Unsurprisingly it is mostly movs. Unlike the shader I mentioned in my previous post, where the same optimizations were applied in both cases, here the NIR path doesn’t apply the last optimization (the second register coalesce). So this time I will focus on the starting point and the state just after the dead code eliminate pass.

So on IR, the starting point (VS-0001-00-start) is:

0: mov vgrf2.0.x:F, 3.000000F
1: mov vgrf2.0.y:F, 4.000000F
2: mov, vgrf2.xxxy:F
3: mov vgrf1.0.xy:F, attr17.xyxx:F
4: mov vgrf0.0:F, vgrf1.xyzw:F
5: mov m2:D, 0D
6: mov m3:F, vgrf0.xyzw:F
7: vs_urb_write (null):UD

and the state after the dead code eliminate is the following one:

0: mov vgrf1.0.z:F, 3.000000F
1: mov vgrf1.0.w:F, 4.000000F
2: mov vgrf1.0.xy:F, attr17.xyyy:F
3: mov m2:D, 0D
4: mov m3:F, vgrf1.xyzw:F
5: vs_urb_write (null):UD

On NIR, the starting point is:

0: mov vgrf2.0.x:F, 3.000000F
1: mov vgrf2.0.y:F, 4.000000F
2: mov vgrf0.0.xy:F, attr17.xyyy:F
3: mov vgrf1.0.xy:D, vgrf0.xyzw:D
4: mov, vgrf2.xxxy:D
5: mov m2:D, 0D
6: mov m3:F, vgrf1.xyzw:F
7: vs_urb_write (null):UD

and the state after the dead code eliminate is the following one:

0: mov vgrf2.0.x:F, 3.000000F
1: mov vgrf2.0.y:F, 4.000000F
2: mov m3.xy:D, attr17.xyyy:D
3: mov, vgrf2.xxxy:D
4: mov m2:D, 0D
5: vs_urb_write (null):UD

The first difference we can see is that although the instructions are basically the same at the starting point, the order is not. In fact, if we check the different intermediate steps (I will not show them here to avoid making the post too long), although the optimizations are the same, how and what gets optimized are somewhat different. One could conclude that the problem is this order, but if we take a look at the final step of the NIR assembly shader, there isn’t anything clearly indicating that the shader can’t be simplified further. Specifically, instruction #3 could go away if instructions #0 and #1 wrote directly to m3 instead of vgrf2, which is what the IR path does. So it seems that the problem is in the register coalesce optimization.

As I mentioned, there is a slight order difference between NIR and IR. This means that in the NIR case there is another instruction between instruction #3 and instructions #0/#1, an instruction that sits in a different place in IR. So my first thought was that the optimization was only checking against the immediately previous instruction. Once I started to look at the code, that turned out to be wrong: for each instruction, there is a loop checking all the previous instructions. What I noticed is that in that loop, every check that rejected a previous instruction was a break. So I initially thought that perhaps one of those breaks should in fact be a continue. This seemed to be confirmed when I did the quick hack of replacing them all with continues; it proved wrong as soon as I saw all the piglit regressions I had in hand. So after that I did a proper debug session. Using gdb, the condition that stopped the optimization from checking previous instructions was the following one:

/* If somebody else writes our destination here, we can't coalesce
 * before that.
 */
if (inst->dst.in_range(scan_inst->dst, scan_inst->regs_written))

The code may be hard to understand out of context, but the comment is clear. Coalescing two instructions is possible when the previous one writes to a register we are reading in the current instruction. But obviously that can’t be done if there is an instruction in the middle that writes to the same register. And that is exactly what is happening here: if you look at the final state of the NIR path, we want to coalesce instruction #3 with instructions #1 and #0, but instruction #2 is writing to m3 too.
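To see why the ordering matters, the backward scan can be modeled in a few lines. This is a deliberately toy version (plain integers as register ids, none of Mesa’s actual types), just to show how the stop condition fires in the NIR ordering:

```cpp
#include <cassert>
#include <vector>

// Toy instruction: writes `dst`, reads `src` (-1 means no register source).
struct Inst { int dst; int src; };

// Walk backwards from prog[cur] looking for the instruction that wrote
// its source; give up as soon as something else writes cur's destination.
int find_coalesce_candidate(const std::vector<Inst> &prog, int cur) {
    for (int i = cur - 1; i >= 0; --i) {
        if (prog[i].dst == prog[cur].src)
            return i;   // this one produced our source: coalesce here
        if (prog[i].dst == prog[cur].dst)
            return -1;  // somebody else writes our destination: stop
    }
    return -1;
}
```

Numbering the registers vgrf2=2, m3=3, attr17=17, the NIR trace above becomes {2,-1},{2,-1},{3,17},{3,2}: the scan from instruction #3 stops at #2 because it also writes m3, even though it touches different channels. Remove the intervening write and the candidate is found.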

So, that’s it? Not exactly. Remember that IR was able to simplify this, and it can’t be only because the order was different. If you take a deeper look at those instructions, there are some x, y, z, w suffixes after the register names. Those report which channels the instructions are writing. As I mentioned in my previous post, this work is about providing a NIR to vec4 pass, so these registers are vectors. Instruction #3 can be read as “move the content of components x and y of register vgrf2 to components z and w of register m3”. And instruction #2 can be read as “move the content of components x and y of register attr17 to components x and y of register m3”. So although both write to the same destination, they write to different components, meaning that it would be safe to do the coalescing. We just need to be sure that there isn’t any component overlap between the current instruction and the previous one we are checking against. Fortunately, each register already records which channels it writes in a variable called “writemask”. So we only need to change that code to the following:

/* If somebody else writes the same channels of our destination here,
 * we can't coalesce before that.
 */
if (inst->dst.in_range(scan_inst->dst, scan_inst->regs_written) &&
    (inst->dst.writemask & scan_inst->dst.writemask) != 0) {

The patch with this change was sent to the mesa list (here), approved and pushed to master.
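The added condition is just a bitwise AND over per-channel write masks. A toy illustration of the idea (the one-bit-per-channel encoding here is an assumption for the sketch, not Mesa’s actual definitions):

```cpp
#include <cassert>

// One bit per vec4 channel (assumed encoding for this sketch).
enum { WRITE_X = 1, WRITE_Y = 2, WRITE_Z = 4, WRITE_W = 8 };

struct Write { int reg; int writemask; };

// An earlier write only blocks the coalesce when it hits the same
// register AND at least one of the same channels.
bool blocks_coalesce(const Write &scan, const Write &cur) {
    return scan.reg == cur.reg && (scan.writemask & cur.writemask) != 0;
}
```

In the shader above, instruction #2 writes m3.xy while the coalesce targets m3.zw: same register, disjoint masks, so the scan can keep going past it.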

Final words

So again, a problem whose solution was easier to write than to find. But in any case, it brought a significant improvement. Using the shader-db tool to compare before and after the patch:

total instructions in shared programs: 1781593 -> 1734957 (-2.62%)
instructions in affected programs:     1238390 -> 1191754 (-3.77%)
helped:                                12782
HURT:                                  0
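Those percentages are simply the relative deltas over the “before” totals; a quick sanity check (a sketch, not shader-db’s own reporting code):

```cpp
#include <cassert>
#include <cmath>

// Relative change, in percent, from `before` to `after`.
double delta_pct(long before, long after) {
    return 100.0 * (after - before) / before;
}
```

delta_pct(1781593, 1734957) gives about -2.62 and delta_pct(1238390, 1191754) about -3.77, matching the report above.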

Optimizing shader assembly instructions on Mesa using shader-db

Monday, September 14th, 2015

Lately I have been working on Mesa. Specifically, I have been working with my fellow Igalians Eduardo Lima and Antía Puentes to provide a NIR to vec4 pass for the i965 backend. I will not go into too much detail, but in summary, NIR is a new intermediate representation for Mesa. Intermediate as in sitting between the OpenGL GLSL language used for shaders and the final GPU machine instructions for each specific Mesa backend. NIR is intended to replace the previous GLSL IR, and in some places it already has. If you are interested in the details, take a look at the NIR announcement and the NIR documentation page.

Although the bug is still open, Mesa master already has the functionality for this pass, and in fact it is now the default. This new NIR pass provides the same functionality as the one available with the old GLSL IR pass (from now on, just IR). This was properly tested with piglit. But although the total instruction count has improved in general, we get a worse instruction count compiling some specific known shaders when we use NIR. So the next step is to improve this. It is an ongoing effort, like these patches from Jason Ekstrand, but I would like to share some of my experience so far.

In order to guide this work, we have been using shader-db. shader-db is a shader database, with an executable to compile those shaders and a tool to compare two executions of that compilation. Usually it is used to verify that the optimization you are implementing really improves the instruction count, or even to justify your change; several Mesa commits include the before and after shader-db statistics. But in this case we have been using it as a guide for what we could improve. We compile all the shaders using IR and using NIR (via the environment variable INTEL_USE_NIR), and check which shaders show an instruction count regression.

Case 1: subtraction needs an extra mov.

Ok, so one of the shaders with a worse instruction count is humus-celshading/4.shader_test. After some analysis of the problem, I reduced it to a simpler shader with the same issue:

in vec4 inData;

void main() {
    gl_Position = gl_Vertex - inData;
}
This simple shader needs one extra instruction using NIR. So yes, a simple subtraction is getting worse. FWIW, this is the desired final shader assembly:

0: add m3:F, attr0.xyzw:F, -attr18.xyzw:F
1: mov m2:D, 0D
2: vs_urb_write (null):UD

Note that there isn’t an assembly subtraction instruction; it is represented by negating the second parameter and using an add (this seems like Captain Obvious information here, but it will be relevant later).
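In other words, the negation travels as a modifier attached to the source operand rather than as an extra instruction. A toy model of the idea (hypothetical types, nothing from the actual backend):

```cpp
#include <cassert>

// A source operand with an optional negate modifier.
struct Src { float value; bool negate; };

float read_src(const Src &s) { return s.negate ? -s.value : s.value; }

// a - b emitted as a single add with the second source negated.
float emit_sub_as_add(const Src &a, Src b) {
    b.negate = !b.negate;
    return read_src(a) + read_src(b);
}
```

So `attr0 - attr18` becomes one `add attr0, -attr18`, which is exactly what instruction #0 above shows.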

So at this point one option would be to start looking at the backend (remember, i965) code for vec4, specifically the optimizations, and check if we see something. Those optimizations are invoked from brw_vec4.cpp. They are in general common to any compiler: dead code elimination, copy propagation, register coalescing, etc. And usually they are executed several times in several passes, and some of them are simplifications to be used by other optimizations (for example, if your copy propagation pass works, then it is common that your dead code elimination pass will get an instruction removed). So with all those optimizations and passes, how do you find the problem? Although it is a good idea to read the code of those optimizations to know how they work, that is usually not enough to know where the problem is. So this is again a debugging problem, and as usual, you want to know what is happening step by step.

For this I executed again the compilation, with the following environment variable:

INTEL_USE_NIR=1 INTEL_DEBUG=optimizer ./run subtraction.shader_test

This option prints out to a file the shader assembly compiled at each optimization pass (if applied). So for example, I get the following files for both cases:

  • VS-0001-00-start
  • VS-0001-01-04-opt_copy_propagation
  • VS-0001-01-07-opt_register_coalesce
  • VS-0001-02-02-dead_code_eliminate

So, in order to get the final shader assembly, a copy propagation, a register coalesce, and a dead code eliminate were executed. BTW, I found that environment variable while looking at the code; it is not listed on the Mesa envvar page, something I assume is a bug.

So I started to look at the differences between the steps. Taking into account that in both cases the same optimizations were executed, and in the same order, I looked for differences between one and the other at each step. And I found one in the copy propagation.

So let’s see the starting point using IR:

0: mov vgrf2.0:F, -attr18.xyzw:F
1: add vgrf0.0:F, attr0.xyzw:F, vgrf2.xyzw:F
2: mov m2:D, 0D
3: mov m3:F, vgrf0.xyzw:F
4: vs_urb_write (null):UD

And the outcome of the copy propagation:

0: mov vgrf2.0:F, -attr18.xyzw:F
1: add vgrf0.0:F, attr0.xyzw:F, -attr18.xyzw:F
2: mov m2:D, 0D
3: mov m3:F, vgrf0.xyzw:F
4: vs_urb_write (null):UD

And the starting point using NIR:

0: mov vgrf0.0:UD, attr0.xyzw:UD
1: mov vgrf1.0:UD, attr18.xyzw:UD
2: add vgrf2.0:F, vgrf0.xyzw:F, -vgrf1.xyzw:F
3: mov m2:D, 0D
4: mov m3:F, vgrf2.xyzw:F
5: vs_urb_write (null):UD

And the outcome of the copy propagation:

0: mov vgrf0.0:UD, attr0.xyzw:UD
1: mov vgrf1.0:UD, attr18.xyzw:UD
2: add vgrf2.0:F, attr0.xyzw:F, -vgrf1.xyzw:F
3: mov m2:D, 0D
4: mov m3:F, vgrf2.xyzw:F
5: vs_urb_write (null):UD

Although it is true that the starting point for NIR already has one extra instruction compared with IR, that extra one gets optimized away in later steps. What caught my attention was the difference between what happens with instruction #1 in the IR case and with the equivalent instruction #2 in the NIR case (the add). In the IR case, copy propagation is able to propagate attr18 from the previous instruction, so it is easy to see that this can be simplified in later optimization steps. But that doesn’t happen in the NIR case: there, instruction #2 remains the same after the copy propagation.

So I started to take a look at the implementation of the copy propagation optimization (here). Without going into details, this pass analyses each instruction, comparing it with the previous ones in order to know if it can do a copy propagation. So I looked into why, for that specific instruction, the pass concludes that it can’t be done. At this point you could use gdb, but I used some extra printfs (sometimes they are useful too). And I found the check that rejected that instruction:

bool has_source_modifiers = value.negate || value.abs;

if (has_source_modifiers && value.type != inst->src[arg].type)
    return false;

That means that if the source of the previous instruction you are checking against is negated (or has an abs), and the types are different, you can’t do the propagation. This makes sense, because negation is different on different types. If we go back to the shader assembly output, we find that it is true that the types (those F, D and UD just after the registers) differ between the IR and the NIR cases. Why didn’t we worry about this before? Why wasn’t it failing any piglit test? Well, because if you look more carefully, the instructions with the wrong types are the movs; in both cases the type is correct on the add instruction. And in a mov the type is somewhat irrelevant: you are just moving raw data from one place to the other. It matters in an ALU operation. But in any case, it is true that the type on those registers is wrong (compared with the original GLSL code) and, as we are seeing, it causes problems in the optimization passes. So, next step: check where those types get filled in.
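That “negation is different on different types” point is easy to demonstrate: negating the same 32 bits as a float only flips the sign bit, while negating them as a signed integer takes the two’s complement. A small standalone check (not Mesa code):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Reinterpret the bits as a float, negate, and return the resulting bits.
uint32_t negate_as_float(uint32_t bits) {
    float f;
    std::memcpy(&f, &bits, sizeof f);
    f = -f;                       // just flips the sign bit
    std::memcpy(&bits, &f, sizeof f);
    return bits;
}

// Reinterpret the bits as a signed int, negate, and return the bits.
uint32_t negate_as_int(uint32_t bits) {
    int32_t i;
    std::memcpy(&i, &bits, sizeof i);
    i = -i;                       // two's complement
    std::memcpy(&bits, &i, sizeof i);
    return bits;
}
```

For the bit pattern of 1.0f (0x3f800000) the two disagree, which is why the pass refuses to propagate a negated value across a type mismatch.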

Searching a little in the code, and using gdb this time, this turns out to be done in the function nir_setup_uniforms at brw_vec4_nir.cpp, while creating a source register variable. But curiously it is using the type that came from NIR:

src_reg src = src_reg(ATTR, var->data.location + i, var->type);

And printing out the content of var->type with gdb, it properly shows the type used in the GLSL code. If we go deeper into the src_reg constructor:

src_reg::src_reg(register_file file, int reg, const glsl_type *type)
{
    this->file = file;
    this->reg = reg;
    if (type && (type->is_scalar() || type->is_vector() || type->is_matrix()))
        this->swizzle = brw_swizzle_for_size(type->vector_elements);
    else
        this->swizzle = BRW_SWIZZLE_XYZW;
}

We see that the type is only used to fill the swizzle. But if we compare this to the equivalent code for a destination register:

dst_reg::dst_reg(register_file file, int reg, const glsl_type *type,
                 unsigned writemask)
{
    this->file = file;
    this->reg = reg;
    this->type = brw_type_for_base_type(type);
    this->writemask = writemask;
}

dst_reg also fills in the type internally, something src_reg is not doing. So at this point the patch is straightforward: just fill src_reg->type using the constructor’s type parameter. The patch was approved and is already on master.

Coming up next

In the end I didn’t need to improve any of the optimization passes at all, as the bug was elsewhere, but the debugging steps for this work are still the same. In fact it was the usual bug that is harder to find (for simplicity I summarized the triaging) than to solve. In the next blog post I will explain how I worked on another instruction count regression, somewhat more complex, that needed a change in one of the optimization passes.



Monday, July 21st, 2014

I’m going to GUADEC 2014! Leaving tomorrow, as I will first be at the Evince Hackfest.

GNOME 3.12.1 out: PDF accessibility progress

Monday, April 28th, 2014

Welcome to a new “GNOME 3.12 is out blog post”, somewhat late because I wanted to focus on 3.12.1 instead of the usual 3.12.0, and because I was away for several days due to Easter holidays.

Flowers and a mill at Keukenhof

As I said just after 3.10, Antía worked hard on adding keyboard navigation support to Evince, and Adrián provided an implementation of the tagged PDF specification for Poppler. The plan for the 3.12 cycle was to build upon their work in order to improve Evince’s accessibility support.

Thanks to the tagged-PDF implementation for Poppler, we were able to start experimenting with tagged PDF documents in Evince, and playing with all the cool things that tagged PDFs bring to the table. Finally we have information available about whether we are in a paragraph, where a list starts, different levels of headings, and pretty much anything else one can put in an element tag. But while Adrián and Carlos García kept working on getting their patches pushed upstream (more than 15 patches were pushed during this cycle), Joanmarie Diggs and I realized that “only” a bare/plain implementation of this specification would be a hard animal to tame for use by assistive technologies: additional parsing and structuring will be needed in Poppler to properly implement ATK support in Evince.

Additionally, having working keyboard support in Evince made it finally possible to test real-world document accessibility with Orca (as opposed to just Accerciser). But in doing so, we found that the existing ATK support was incomplete or wrong in several places. So even the more basic PDF documents, those that should be also accessible without tagged PDF, were not properly accessible. Taking all this into account, we decided to focus on fixing the bugs in Evince’s core accessibility support as doing so would make all PDFs more accessible, but at the same time to continue working on the tagged-PDF support in order to start developing a concrete list of the improvements we will need added to Poppler.

So the main tasks on Evince during this cycle were:

  • Reimplement AtkText
  • Expose all document pages to the accessibility tools, not only the current one.
  • Implementation of AtkDocument
  • Some fixes to caret-navigation and hyperlink management

As a result of these changes:

  • Several accessibility-triggered crashes have been eliminated
  • Orca’s SayAll feature now works with Evince
  • Prosody when reading documents with Orca has been improved
  • The caret can be positioned and text selected via AT-SPI2

Some of this work was not quite in time for the 3.12.0 release, but has been included in 3.12.1. In addition, we are continuing to work on accessibility-related bug fixes which we anticipate will be included in 3.12.2.

As for what’s next: We encourage Orca users to give Evince a try and help us identify the bugs that remain in Evince’s core accessibility support. Anything that they find will be added to our high-priority TODO list. In the meantime, we will continue to work on enhancing Poppler’s tagged-PDF support and then exposing that structural information through Evince to assistive technologies.

Finally, I would like to thank the GNOME Foundation and the Friends of GNOME supporters for their contributions towards making a more accessible GNOME, as this work would not be possible without them.

Sponsored by GNOME Foundation

GNOME Accessibility Update: 3.10 Release, Montreal Summit and Plans for 3.12

Monday, October 21st, 2013

3.10 is out, what’s new about accessibility?

As you probably already know, GNOME 3.10 was released several weeks ago, with lots of new accessibility goodness:

  • Magnifier focus and caret tracking: Finally, the focus and caret tracking feature of GNOME Shell’s magnifier has landed. Now the magnified view automatically follows the writing caret and changes in focus so you can always see where you are without having to move the mouse. You can read more about this work in this post written by Magdalen Berns, the GSoC student that implemented this feature.
  • GNOME Shell improvements: One of the new features of GNOME 3.10 is the new System Status Menu. This menu includes several new visual elements which were reviewed and enhanced in order to ensure they would be fully keyboard navigable and accessible through accessibility tools like Orca. Keyboard navigability was also added to the calendar pop-down in the shell panel, though admittedly there is some room for improvement which we hope to address in GNOME 3.12.
  • PDF accessibility: Evince keyboard support has landed. Now users can press F7 to activate a caret for navigation and selection within the document being read. This new support was also made to work with Orca, so that PDF content can be accessed by users who are blind directly in Evince. Support for tagged PDFs is currently being added to Poppler and will be used to further improve accessibility support in Evince. This work is being done by Igalia, having been funded by the Friends of GNOME accessibility campaign. You can read more about this work on Antía Puentes’s “Accessibility in Evince” and Adrián Pérez’s “Tagged-PDF: Coming to a Poppler near you” blog posts.
  • A new global keyboard shortcut for Orca: Now the screen reader can be easily turned on/off at any time by just pressing Super+Alt+S. This might seem like a small change, but it is in fact a really big step that allows more distros to be more accessible out of the box.
  • ATK deprecations (a lot): While this does not directly affect the user experience, over time it will make developers’ lives easier, and will also lead to cleaner and more easily maintainable code. The first one is the simplification of what used to be extremely confusing and hard-to-implement methods to get a substring from a text related object. We had been talking about this problem for a long time, and finally agreed upon the new API at this year’s GUADEC. Mario Sánchez then added the new method to ATK and AT-SPI2 and also implemented it in WebKitGTK. The other major change is related to focus handling. One signal and six methods were deprecated, simplifying the situation *a lot* in that regard.

What’s next?

Captain Obvious to the rescue: 3.12. Although 3.10 was better than 3.8, our plan is making 3.12 even better. Like a lot of other teams, we started the new cycle listing, analyzing and prioritizing everything we need to do, using the Montreal Summit as a kickoff for 3.12 and making the most of being able to talk face-to-face with other GNOME developers. There are always changes to keep up with: new applications, new widgets, and new deprecations. But right now, the more important change in progress is Wayland. A lot of work was done for 3.10, so that we have the possibility to run GNOME 3 using a Wayland session. It is still not production ready (in my humble opinion, it is alpha status), but the plan is filling the gaps for 3.12, and that includes accessibility.

But Wayland is not the only topic for 3.12. During the weekly Accessibility Team meeting after the summit, we discussed all the improvements planned for 3.12:

  • Complete Wayland support
  • Create a new asynchronous API for AT-SPI2
  • Add configuration UI for some already-implemented magnifier features (focus/caret tracking and tinting)
  • Homogenize keyboard navigation within GNOME (to be proposed)
  • Update ATK implementations (e.g. GTK, Clutter) for deprecations and new API
  • Implement tagged PDF support in Evince

Finally, I would like to thank my employer Igalia for its continued support of my work on GNOME accessibility as part of my job duties, and the GNOME Foundation for sponsoring my trip to the Montreal Summit.

Sponsored by GNOME Foundation. Igalia: Free Software Engineering

Going back from FOSDEM 2013

Monday, February 4th, 2013

Last week planet GNOME was full of “Going to” posts. This is my traditional “Coming from” post. Most of my events-related posts are written after the event itself. I blame the need to write slides. Anyhow, I’m back from FOSDEM 2013.

This is the fourth time that I’ve attended FOSDEM, making it my second most-visited Free Software event (GUADEC being the first), and the third time that I have given a presentation there.

This year, instead of my usual “GNOME Accessibility State of the Union” talk, I spoke about what is arguably one of the biggest changes in accessibility for free desktop environments: “How GNOME Obsoleted its ‘Enable Accessibility’ Setting” (aka “Accessibility Always On”). The turnout was great in spite of it being the first session on Sunday morning. I don’t see the slides uploaded on the FOSDEM page, so I’ve provided them here for those of you unable to attend.

Additionally, I attended several interesting talks (the number of interesting talks at FOSDEM is overwhelming), met people that I only see at this kind of event, and also participated in a small (somewhat informal) release-team meeting.

Finally I would like to thank Igalia for sponsoring my trip this year.

Going back home

Tuesday, October 9th, 2012

In a few hours I will go back to Spain. For the last three days I was attending the Boston Summit. I will not repeat what happened there, as we have really good summaries written by Colin Walters (here) and Matthias Clasen (here and here). Anyway, I want to say that it was a really good experience (if we skip the whole delayed airports/flights thing).

The last time I attended the Boston Summit was in 2009. Reading the “going back home” post that I wrote then, it is funny to see how things have changed since. Although we already knew that we wanted something in that direction for the upcoming GNOME 3, in 2009 gnome-shell was still a kind of proof-of-concept desktop. At that moment I was starting to try the then somewhat experimental Cally module on gnome-shell, and in that post I mentioned that Emmanuele Bassi was ok with adding accessibility-related API to Clutter. Now we don’t have Cally as an isolated module anymore; all that code is part of the Clutter source code. Since GNOME 3.0, GNOME Shell is the default desktop (and if we finally drop fallback mode, for 3.X GNOME Shell will be THE DESKTOP). Since 3.4 Orca users can interact with the shell, which also gained several other accessibility-related features. 3.6 showed an improvement, and 3.8 will be even better. A lot of things have changed since 2009; good to see that in this case “changed” means “improved”.

Finally, I want to thank the GNOME Foundation for sponsoring my trip, and Igalia for allowing me to attend the event!

GNOME Foundation

Last day of summer

Friday, September 21st, 2012

“One of these mornings you’re gonna rise up singing…”

UX-Hackfest + GUADEC

This year I was present at the UX-Hackfest which was held at Igalia’s headquarters in Coruña. While I’m not really very involved with the UX team, having this event in the same place I work every day gave me the opportunity to see first-hand how the designers work, take an in-depth look at the features they are working on, and be the voice of the Accessibility Team if required. For me it was a very good experience. And in the end I didn’t need to be that voice, as several of the designs were already taking accessibility needs into account.

In other stay-at-home news, I also attended GUADEC which took place here in Coruña, the city I have been living in most of my life, and at the same Faculty where I studied computer science several years ago. It was great having GUADEC so close to home this year, no flights and so on.

As everyone has already mentioned in their posts, this year GUADEC was really successful, from the point of view of the organization (kudos to all the people involved) as well as the content and community. Although it is true that there were a lot of challenges, we had a lot of energized people working hard to make the project a success.

From my side this GUADEC was really busy. After a release-team meeting, we gave a “five minute” presentation at the AGM that, unlike most of the other presentations, started a little debate. Some people feel that the challenges faced in producing GNOME require someone pushing the community in the short-medium-long term, and some think that the release team is that “someone”. Whether or not this is the case, and how to do that pushing if it is, is something that we are still debating.

I also gave a presentation at the AGM as a representative of the Accessibility Team, summarizing the work done and the features and fixes which will be included in 3.6, and later gave a longer and more detailed talk about the same (slides here). As you probably already know, the most important accessibility feature included in GNOME 3.6 is “Accessibility always on”: instead of needing to activate the accessibility support, log out and log in again, the support is now always there.

Becoming a WebKitter too: Thanks to F123&Mais Diferenças

After GUADEC, Joanmarie and I began working on the accessibility support of WebKitGTK/Epiphany, specifically the support for Orca, resuming the development already done by Mario. This work will be included in the next GNOME stable release, so Epiphany 3.6 will have better accessibility support. But not only Epiphany will benefit: because most of the work was done in WebKitGTK, all applications based on WebKitGTK will have improved accessibility support. This includes Geary, Evolution, and Yelp.

I would like to thank F123&Mais Diferenças for supporting this development and continuing to help improve the accessibility support on the GNOME platform.

GNOME 3.6 is coming

There have already been a number of posts (not to mention the in-progress release notes) describing what’s new for GNOME 3.6. In fact, some even talk about accessibility support (like this one from Matthias). So I won’t repeat what they have said. The tl;dr version is this:

  • Accessibility always on
  • WebKitGTK accessibility support improved
  • GNOME Shell improved (wifi and power top panel menu items now accessible)
  • All the stack reviewed and improved

Next?: going to Randa

Tomorrow I will be heading off to the KDE Sprint in Randa. This year some of the developers will be discussing and hacking on accessibility. Because KDE and GNOME share several key components of the accessibility stack, collaborating with the KDE community to improve accessibility support is essential. Mind you, this will not be the first time such collaboration has occurred: in previous AT-SPI2/ATK hackfests we were joined by Frederik Gladhorn and José Millán. This time I will be the GNOME guy attending a KDE event. I am looking forward to once again working with Frederik and José, meeting other KDE accessibility developers, and continuing our joint efforts towards more accessible free desktop environments. Many thanks to the KDE community, and especially to Mario Fux, for the invitation and funding to attend this event. And talking about funding (better late than never), at this link you can find a pledgie created to fund this Sprint.

“… and you’ll spread your wings and you’ll take to the sky”


GNOME 3.4: Finally Orca+GNOME3

Friday, March 30th, 2012

GNOME 3.4 is here!

Well, this is not really news, but GNOME 3.4 has been released. And as the release notes explain, and as Matthias Clasen advanced on his blog, one of the things improved was the screen reader support. In some of my old posts, I already mentioned how things were slowly being added (like here and here). Although that work was also required, it was mostly low-level ATK stuff, and not really impressive from the user's point of view. After that work, the outcome was GNOME Shell exposing some info through the accessibility technologies, and Orca knowing that GNOME Shell is there. But it was mostly babbling.

For GNOME 3.4 we finally carried out a full review of the GNOME Shell UI. Now most of its UI elements expose the proper combination of name, role (whether the element is a button or not) and state (whether a toggle button is checked or not). Add to that the improvements in the stability and performance of the accessibility technologies (both at-spi2 and Orca), and we now have something that we can ask Orca users to test. You can take a look at the result in this video:

[Video in Vimeo] [Full quality video in Ogv]

And now?

GNOME 3.4 is the first release with proper Orca support, so it is the first one that we can proudly show to our users, and we will surely get some feedback and find additional things to improve. But after all, GNOME has a Bugzilla for a reason. During this cycle some users reported issues with GDM, so we will need to review that part. For sure GNOME 3.6 will have even better accessibility support.

In the same vein, don’t forget that GNOME Shell has other accessibility-related features. It has had a built-in magnifier since GNOME 3.2, and now, with GNOME 3.4, it is fully configurable in the Universal Access settings dialog. And for 3.6 it will have brightness and contrast functionality (something that Joseph Scheuhammer finished just after the code freeze) and hopefully focus tracking.

Acknowledgements and conclusions

This release shows how having people with time to work on the accessibility stack makes things improve. GTK+ accessibility is in better shape thanks to Benjamin Otte; at-spi2 thanks to Mike Gorse; the GNOME Shell magnifier thanks to Joseph Scheuhammer; Orca thanks to Joanmarie. Although GNOME 3.2 was a step up from GNOME 3.0, the improvement is more noticeable in GNOME 3.4, mostly because during the GNOME 3.2 (and perhaps 3.0) cycles people were busier with other things. Lesson learned: we need to find a way to keep people working on accessibility, and to get more people involved.

Finally, I would like to mention that this is the first GNOME release since Joanmarie Diggs joined Igalia. Having her at Igalia and getting a release with a noticeable improvement in the accessibility support of GNOME Shell, and in the performance and stability of Orca, is not a mere coincidence. Her experience, energy and motivation were a boost to the work that Igalia has been doing.

ATK/AT-SPI2 Hackfest 2012: Days 2,3,4,5

Tuesday, January 24th, 2012

Well, as with the previous hackfest, I planned to write a post per day, but in the end I didn’t. Next time I will not make any plans.

Yesterday was the last day of the hackfest. It was, in my opinion, a productive hackfest where everyone had the opportunity to work with other people in the same field, and to discuss several topics regarding the current situation. If you want to know all of the details and conclusions, you can read a minutes-like brainstorming document on the wiki. But if you were to ask me to pick just one, I would mention the discussion about enabling the accessibility support by default. The main conclusion was to stop using atk-bridge as a module, and instead have that feature integrated. Doing that has several advantages, including having it compiled (and thus tested) whenever someone compiles GTK+ or any GTK+ app, and not only when you want to compile the “accessibility stuff”. The implementation details are still not clear. Convert atk-bridge to a library and add a dependency to GTK+ and others? Integrate it in ATK (making the bridge something like the D-Bus backend)? Integrate it in GTK+? Forget ATK, and let GTK+ talk directly with the accessibility tools using GDBus (an option that I feel is too drastic or impractical, but still on Benjamin’s mind)? Now is the time to debate it, in order to have something decided by 3.4, which we can start testing properly at the beginning of the 3.6 cycle. Interesting times these days.

As in the previous hackfest, another conclusion is that there is a lot of work to do, but not a lot of people to do it. And this was reflected in the number of people in attendance (some photos here). Anyway, although we were not many, we had a lot of different backgrounds represented at the hackfest: people from GTK+, ATK, AT-SPI2, WebKitGTK, Mozilla, assistive tools and Qt. For example, in several discussions Frederik Gladhorn explained how the qt-bridge implements certain features, in some cases in a different way than how atk-bridge does the same thing, as Mike noted recently on his blog.

Finally, I want to thank everyone who came to this hackfest, as they made it possible. I also want to thank Igalia, the GNOME Foundation and the Mozilla Foundation for their sponsorship.