Hildon Application Manager goes public

Almost since my arrival at Igalia I have been working on the Hildon Application Manager, or H-A-M for friends. The project was developed using SVN on garage, but after GUADEC in Turkey we moved it to Git, on an internal server.

But after an unexpected and bold move by Marius Vollmer, the latest development version of H-A-M, the one for Fremantle more specifically, was pushed into the wild. Thanks mvo!

H-A-M is finally synced with a repository on Gitorious, making its source code available to everyone. So, everyone is invited to submit patches! 😀

The big patch theory

It is well known that the major asset of free/open source software is the sense of community surrounding each project. That idea is the constant in almost all the promotional videos taking part in the Linux Foundation contest We’re Linux.

The community-driven project model, exposed to the masses by Eric Raymond in his essay The Cathedral and the Bazaar, has been gaining momentum, turning software development from an isolated, almost monastic craft into a collaborative, openly discussed engineering task.

All those ideas sound exciting and appealing, but from the trenches, how is it done? How does a bunch of hackers collaborate to construct a coherent and useful piece of software?

We could be tempted to think about complex, full-solution software such as forges, distributed revision control systems, etc. But I won’t go there; that’s buzzword territory. From my point of view, the essence of any collaborative software project is the patch.

According to software engineering, an important activity in the software development process is peer review. ESR even gave a name to it: Linus’s Law. But software peer review is not a task done once or periodically; that would be an almost impossible, or at least impractical, duty, because it would imply reading and analysing thousands of lines of code at once. Instead, peer review is a continuum, a process that must be carried on during all the code writing.

So far we have stated two purposes for a patch file: collaborative software construction and peer review. For the first purpose, the activities implied are, from the developer’s point of view, modifying the code, extracting the diff file and handing it out; and from the integrator’s point of view, applying the patch and keeping a register of it. It is a relatively simple process, and it can be automated to some degree with the help of SCM software. But there is a hidden trick: the patch must be small and specific enough to survive several changes in the base code, so that it can be applied cleanly even if the surrounding code has changed since it was created.

Nevertheless, for the second purpose, the process, from the developer’s point of view, is more complex: crafting a patch for peer review is not a trivial task; quite the opposite, good patches are the fruit of experience.

A good patch has a set of subjective and objective values that the reviewers and the integrator will take into account when deciding whether to commit it to the official repository. A patch must be small enough to be easily understandable, attack a single issue, be self-contained (it must have everything it needs to solve the issue), be well documented, be complete, honour the project’s code style, and so on.

A perfect patch must give the maintainer, at a single glance, the confidence needed to apply it immediately. And only experience can give you that skill. There is no Patch 101 lecture, but maybe it is an interesting idea.


FOSDEM’09

Last weekend some Igalians and I went to Brussels to attend FOSDEM’09. We arrived on Friday just in time for the FOSDEM Beer Event (amazing coincidence!). In the Delirium Café I had my first epiphany about the True Spirit of FOSDEM: it is not the talks and the meetings, it is the beer. So, after that night, I could say “mission accomplished”.

Nevertheless, I went further and also attended a couple of talks:

* The People Framework: about having a unified method to access “contacts” backends (Google, LDAP, etc.). Go Vala!
* The Hynerian Empire: about Rygel, a UPnP media server. Go Vala!
* Bringing geolocation into GNOME: I slept through this one.
* Tracker: Philip tried to present Tracker as the ultimate object locator.
* Xfce 4.6 and then?: a “what’s new” in Xfce.
* Maemo on BeagleBoard: Nokia employees showed that their software also runs on other hardware.
* WebKit on ebook readers: a WebKit port for a specific embedded device.
* Ext4: what it is and what is new in Ext4: an Ext3 with more features.

But now that all the craziness of Brussels is gone and I have reviewed the whole schedule, I realize that I should have gone to other talks. These are my actual choices… too late…

* Wt, a C++ web toolkit, for rich web interfaces to embedded systems
* Reverse Engineering of Proprietary Protocols, Tools and Techniques
* Building Embedded Linux Systems with PTXdist
* A talk on FLOSSMetrics
* CMake – what can it do for your project
* Syslinux and the dynamic x86 boot process
* Emdebian 1.0 release – small & super small Debian
* Mozilla Headless back-end

By the way, I just loved Brussels.

password management

Being on-line means having user accounts in a lot of services, such as email, social web sites, blogs, etc. And having multiple user accounts implies having to remember user names and passwords.

I know people who have only one user name and password and repeat the same pair for all their accounts. This approach may simplify the need to memorise a lot of different pairs of words, but it is not reliable at all: 1) if your password is compromised, you’ll have to change it in all your accounts, and 2) you may forget an account if you don’t keep track of all the accounts you sign up for.

Another big issue is that most people choose passwords that are easy to remember, and those passwords are usually easy to crack.

In other words, we have two problems: 1) keeping track of all our user accounts (resource, user name and password), and 2) making each password hard to guess.

For the second problem you may follow some hints and craft each password by hand. But I am a lazy guy and computers execute algorithms better than I do. So I installed an automated password generator (apg) and let the program offer me a set of possible passwords, choosing the most appealing one.

$ apg -M NCL -c cl_seed -t -x 8
Awnowov6 (Awn-ow-ov-SIX)
Biuj7qua (Bi-uj-SEVEN-qua)
RyecGod9 (Ryec-God-NINE)
Ojonrag1 (Oj-on-rag-ONE)
9KnecOng (NINE-Knec-Ong)
ClagHog0 (Clag-Hog-ZERO)

Neat, don’t you think?

Now, the straightforward solution to the first problem is to write down, in a plain text file, the list of resources, user names and passwords for every user account you have. This file can be consulted whenever you don’t remember the account data.

As a personal choice, I use org mode in Emacs to organise the user data, because its table editor is just beautiful. Furthermore, I have several types of user accounts to outline (web sites, email servers, WEP keys, etc.), which is also handled nicely by org mode.

* web
| site                   | user         | password   |
| https://sitio.guay.com | mi_nick_guay | Ojonrag1   |
| ...

* email servers
| server         | user          | password         |
| mi_empresa.com | mi_nick_serio | ClagHog0         |
| ...

* wep
| essid          | password  |
| essid_del_piso | Awnowov6  |
| ...

But now we have a problem: having a plain text file with all your passwords is even more insecure than sharing one password among all your user accounts. If somebody gains access to this file, (s)he will own you.

The solution couldn’t be simpler: encrypt the file! Well, yes, you’ll have to remember one password, but only one! For the encryption, GPG is the way to go. GPG not only supports asymmetric encryption, but also symmetric (gpg -c passwords.org), which may be handy if you don’t like, or are not used to, private/public keys. Nevertheless, it is worth learning how to interact with asymmetric encryption.

Well, if you use Emacs, you have the EasyPG mode, which eases the GPG interaction, saving you from running the gpg command each time you want to read or save your file: the mode detects whether the file is encrypted and asks you for the pass phrase to decrypt it, transparently for the user.

Once you have encrypted your password file, you can put it on your home page for backup and roaming purposes.

Neat, don’t you think?

This post is heavily inspired by Keeping your secrets secret.

no country for boy’s compilers

During the first revision of ALGOL by its mythical committee, the one that produced ALGOL 60, Donald Knuth proposed a means of evaluating implementations of the language: the man or boy test.

This test involves heavy use of closures and the evaluation of functions as first-class citizens of the programming language. If your programming language, both in its design and in its implementation, can solve the test, you are working with a mature compiler.

Coroutines, closures and continuations

I ought to start with a disclaimer: I don’t have experience with functional programming; I never studied it. Nevertheless, recently I have had some contact with some of its concepts, especially the ideas of continuations and closures. From those contacts came up a question, a hypothesis: by definition, is continuations ⊆ closures ⊆ coroutines?

The first step should be to find their formal definitions and establish their relationship as subsets. But finding and understanding formal definitions is not a piece of cake, at least for me, so I will try to expose the clearest definitions, in simple words, which I found on the net:


A coroutine is a generalization of a subroutine which can be paused at some point, returning a value. When the coroutine is called again, it resumes right from the point where it was paused, with its environment intact.

In other words, a coroutine is a subroutine with multiple entry points and multiple return points. The next entry point is where the last return point occurred, or the beginning of the coroutine if it previously reached its end. Meanwhile, a subroutine has only one entry point and one or more return points.


In the following pseudocode example, the pause and return points are controlled by the yield token.

coroutine foo {
    yield 1;
    yield 2;
    yield 3;
}

print foo (), "\n";
print foo (), "\n";
print foo (), "\n";
print foo (), "\n";
print foo (), "\n";
print foo (), "\n";

This code will print:

1
2
3
1
2
3

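Python’s generators are a concrete implementation of this idea. This runnable sketch is mine, not part of the pseudocode above; the infinite loop reproduces the “restart from the beginning” behaviour, since a plain exhausted generator would instead raise StopIteration:

```python
def foo():
    # A generator function: each yield pauses the coroutine and hands a
    # value back; each next() resumes it right after the last yield.
    while True:
        yield 1
        yield 2
        yield 3

gen = foo()
for _ in range(6):
    print(next(gen))  # prints 1 2 3 1 2 3, one per line
```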
A closure is a block of code (it can be anonymous or a named subroutine) which can contain variables defined outside its lexical scope (free variables). When that block of code is evaluated, the variables and arguments of the enclosing scope are used to resolve its free variables.


function foo (mylist) {
    threshold = 150
    return mylist.select ({ myitem, myitem.value > threshold })
}

In this example, the closure is the code block { myitem, myitem.value > threshold }, which is evaluated for every item in the list (the select method traverses the list, executing the passed closure for each element), and the free variable threshold is resolved in the lexical scope of the foo subroutine.
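A runnable Python version of the same closure might look like this; I use plain numbers instead of objects with a value field, and filter plays the role of select:

```python
def foo(mylist):
    threshold = 150
    # keep is a closure: 'threshold' is a free variable inside the lambda,
    # resolved in the lexical scope of foo when the lambda is evaluated.
    keep = lambda item: item > threshold
    return list(filter(keep, mylist))

print(foo([100, 151, 200]))  # prints [151, 200]
```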


A continuation is a subroutine which represents the rest of the program.

In order to visualize this, you must break with the concept of a subroutine returning: subroutines don’t return values; they continue the execution of the program by calling another subroutine. The continuation, obviously, must receive the whole state of the program.

In other words, a continuation is a glorified “goto” statement, where the execution pointer is transferred from one subroutine to another. The concepts of the function stack and its dependency relationships are removed.

function foo (value, continuation) {
    continuation (value + 1)
}

In the previous example, the continuation is passed to the foo subroutine as an argument; it contains, besides the next subroutine to execute, the whole state of the program, and it receives an argument which has been processed by foo.
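A minimal Python sketch of this continuation-passing style follows; the names are mine, and note that Python does not eliminate tail calls, so the function stack still grows under the hood even though, conceptually, control only ever moves forward:

```python
def foo(value, continuation):
    # foo never "returns" a result to its caller in the CPS sense:
    # it hands the result to the continuation instead.
    continuation(value + 1)

def done(result):
    # 'done' plays the role of "the rest of the program".
    print(result)

foo(41, done)  # prints 42
```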


Each concept, although it extends or modifies the widely used concept of a lexically scoped subroutine, is independent of the others, but they are not mutually exclusive: in terms of implementation you may have a coroutine which is a closure, a continuation which is a closure, a coroutine which is a continuation, and vice versa.

So, my hypothesis is incorrect; we must rephrase it as: by definition, coroutine ∩ closure ∩ continuation = ∅.

But, by implementation, combinations of coroutines, closures and continuations do exist.


* What the heck is: a coroutine
* Wikipedia: Coroutine
* A Definition of Closures
* Wikipedia: Closure
* Fake threads
* Continuations made simple and illustrated
* Wikipedia: Continuations

async gio test

In order to understand how to use the asynchronous API of GIO, I have cooked up a small test. It was a little hard to figure out the use of the API, mostly because I could not find implementations using Google’s Code Search, nor by doing grand greps™ on my jhbuild checkout directory.

I hope this little test will be useful for those who are trying to use GIO for their async operations.

Programming tips: Google’s codesearch

Let’s face it: every programmer loves “copy&paste” when he is prototyping a solution, learning how to use a new library or just needs a quick fix. And why not?

I mean, yes, it is important to understand what we are doing, the implications of every single line of code, and to keep the elegance, coherency and simplicity; but those characteristics don’t come just from inspiration, nor from reading and understanding the theory, nor from a big flame war over email or IRC. We need examples. We learn, as good ape descendants, by imitation.

We are learning/fixing/prototyping and we need fast feedback in order to keep ourselves happy and motivated. We need to perceive the progress of our iterative work. “Copy&paste” is great for those purposes. Afterwards, discipline must do its work, sustaining the newly acquired knowledge, but that is another story.

But even with “copy&paste” we must be smart and responsible with ourselves: we must seek the best examples to imitate, look at the alpha male, borrow the brightest resources available. “Copy&paste” the code of a lousy programmer and you will be another one; “copy&paste” the code of a great programmer and maybe, someday, you and I will be one.

A great resource for searching great sources of code, easily and fast, for a couple of years now, has been Google’s Code Search. It searches across a great number of successful and recognized open projects’ sources (released tarballs mostly).

When I am working on a programming task which implies new code, my usual work-flow is to visualize the solution as a sequential execution of macro operations which get refined to a more fine-grained level at each iteration. At each iteration I usually find situations where I want a quick solution, or want to find out how others have solved a problem, or just how to use a specific function/method call; so I go to the Code Search site, type the programming language in use and the related API, et voilà, several possible solutions are shown.

Furthermore, if I need to know how to achieve something with autoconf-fu, script-fu, or even system-configuration-fu, I can search for specific file names such as Makefile.am or configure.ac.

My two cents in programming tips.

summer hack

I have written a Rhythmbox plugin for playing MP3 streams from goear.com. It searches, requests additional result pages on demand, fills in the metadata when the stream begins, and sorts the search results.

What is missing is the capability to make playlists with selected streams.

You can find the patch in Rhythmbox’s Bugzilla. Just patch the code, add the goear.png in the plugins/goear directory, build, et voilà. The patch was made against the current Subversion HEAD, but it also applies cleanly to the Ubuntu Hardy source package.

By the way, you can find an Ubuntu Hardy package for 32 bits with the patch here (the metadata filling doesn’t work in it :()