password management

Being online means having user accounts in a lot of services, such as email, social web sites, blogs, and so on. And having multiple user accounts implies having to remember user names and passwords.

I know people who have only one user name and password and reuse them for all their accounts. This approach may reduce the need to memorise many different pairs of words, but it is not reliable at all: 1) if your password is compromised, you’ll have to change it in all your accounts, and 2) you may forget an account if you don’t keep track of all the accounts you sign up for.

Another big issue is that most people choose passwords that are easy to remember, and those passwords are usually easy to crack.

In other words, we have two problems: 1) keeping track of all our user accounts (resource, user name and password), and 2) making each password hard to guess.

For the second problem you could follow some guidelines and craft each password by hand. But I am a lazy guy and computers execute algorithms better than I do, so I installed an automated password generator (apg) and let the program offer me a set of possible passwords, choosing the most appealing one:

$ apg -M NCL -c cl_seed -t -x 8
Awnowov6 (Awn-ow-ov-SIX)
Biuj7qua (Bi-uj-SEVEN-qua)
RyecGod9 (Ryec-God-NINE)
Ojonrag1 (Oj-on-rag-ONE)
9KnecOng (NINE-Knec-Ong)
ClagHog0 (Clag-Hog-ZERO)

Neat, don’t you think?

Now, the straightforward solution for the first problem is to write down, in a plain text file, the resource, user name and password of every account you have. You can consult this file whenever you don’t remember the account data.

As a personal choice, I use Org mode in Emacs to organise the account data, because its table editor is just beautiful. Furthermore, I keep several types of accounts outlined (web sites, email servers, WEP keys, etc.), which Org mode also handles nicely.

* web
| site                   | user         | password   |
|------------------------+--------------+------------|
| https://sitio.guay.com | mi_nick_guay | Ojonrag1   |
| ...

* email servers
| server         | user          | password         |
|----------------+---------------+------------------|
| mi_empresa.com | mi_nick_serio | ClagHog0         |
| ...

* wep
| essid          | password  |
|----------------+-----------|
| essid_del_piso | Awnowov6  |
| ...

But now we have a new problem: keeping a plain text file with all your passwords is even more insecure than sharing one password among all your accounts. If somebody gains access to this file, (s)he will own you.

The solution couldn’t be simpler: encrypt the file! Well, yes, you’ll have to remember one password, but only one! For the encryption, GPG is the way to go. GPG supports not only asymmetric encryption but also symmetric, which may be handy if you don’t like, or are not used to, private/public keys. Nevertheless, it is worth learning how to work with the asymmetric encryption too.
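
For the symmetric case, a minimal session might look like this (passwords.org is an assumed file name; the --batch/--passphrase/--pinentry-mode flags appear here only to make the example non-interactive, in everyday use you would drop them and let gpg prompt you):

```shell
# create a throwaway password file for the demo
echo "my secret table" > passwords.org

# symmetric encryption with a passphrase; produces passwords.org.gpg
gpg --batch --yes --pinentry-mode loopback --passphrase demo \
    --symmetric --cipher-algo AES256 passwords.org

# decrypt it back when needed
gpg --batch --quiet --pinentry-mode loopback --passphrase demo \
    --decrypt passwords.org.gpg
```

Remember to delete the plain text original once you have verified the encrypted copy works.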

Well, if you use Emacs, you have the EasyPG mode, which eases the GPG interaction so you don’t have to run the gpg command each time you want to read or save your file: give the file a .gpg extension, and the mode will detect that it is encrypted and ask you for the pass phrase, decrypting it transparently for the user.

Once your password file is encrypted, you can put it on your home page for backup and roaming purposes.

Neat, don’t you think?

This post is heavily inspired by Keeping your secrets secret.

no country for boy’s compilers

For the first revision of ALGOL produced by its mythical committee, ALGOL 60, Donald Knuth proposed a means of evaluating implementations of the language: the man or boy test.

This test involves heavy use of closures and of functions as first-class citizens in the programming language. If your language, in both its design and its implementation, can solve the test, you’re working with a mature compiler.

Coroutines, closures and continuations

I ought to start with a disclaimer: I don’t have experience with functional programming; I never studied it. Nevertheless, recently I have had some contact with some of its concepts, especially the ideas of continuations and closures. From these contacts a question, a hypothesis, came up: by definition, is continuations ⊆ closures ⊆ coroutines?

The first step should be to find their formal definitions and establish their relationship as subsets. But finding and understanding formal definitions is not a piece of cake, at least for me, so I will try to expose the clearest definitions, in simple words, which I found on the net:

coroutine

A coroutine is a generalization of a subroutine: it can be paused at some point, returning a value, and when it is called again, it resumes right from the point where it was paused, with its environment intact.

In other words, a coroutine is a subroutine with multiple entry points and multiple return points. The next entry point is where the last return point occurred, or the beginning of the coroutine if it previously reached its end. A plain subroutine, meanwhile, has only one entry point and one or more return points.

Example:

In this example, the pause and return points are controlled by the yield keyword.

coroutine foo {
yield 1;
yield 2;
yield 3;
}

print foo (), "\n";
print foo (), "\n";
print foo (), "\n";
print foo (), "\n";
print foo (), "\n";
print foo (), "\n";

This code will print:

1
2
3
1
2
3
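
Python generators behave exactly like this coroutine; a sketch reproducing the pseudocode and its output (the restart-on-exhaustion is made explicit, since Python generators do not rewind on their own):

```python
def foo():
    yield 1
    yield 2
    yield 3

gen = foo()
values = []
for _ in range(6):
    try:
        values.append(next(gen))
    except StopIteration:
        # the coroutine reached its end: re-enter at the beginning
        gen = foo()
        values.append(next(gen))
print(values)  # [1, 2, 3, 1, 2, 3]
```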

closure

A closure is a block of code (either anonymous or a named subroutine) which can contain variables defined outside its lexical scope (free variables). When that block of code is evaluated, the environment where it was defined is used to resolve its free variables.

Example:

function foo (mylist) {
threshold = 150
return mylist.select ({ myitem, myitem.value > threshold })
}

In this example, the closure is the code block { myitem, myitem.value > threshold }, which is evaluated for every item in the list (the select method traverses the list executing the passed closure for each element), and the free variable threshold is resolved in the lexical scope of the foo subroutine.
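
A Python rendering of this pseudocode (the names mylist, myitem and the sample values are made up for the demo):

```python
def foo(mylist):
    threshold = 150
    # the lambda is a closure: 'threshold' is a free variable inside it,
    # resolved in foo's lexical scope each time the lambda runs
    select = lambda myitem: myitem > threshold
    return [myitem for myitem in mylist if select(myitem)]

result = foo([100, 200, 150, 300])
print(result)  # [200, 300]
```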

continuation

A continuation is a subroutine which represents the rest of the program.

To visualize this, you must break with the concept of a subroutine returning: subroutines don’t return values, they continue the execution of the program with another subroutine. The continuation, obviously, must receive the whole state of the program.

In other words, a continuation is a glorified “goto” statement, where the execution pointer is transferred from one subroutine to another. The concept of the call stack and its dependency relationships is removed.

function foo (value, continuation) {
continuation (value + 1)
}

In the previous example, the continuation is passed to the foo subroutine as an argument; it contains, besides the next subroutine to execute, the whole state of the program, and it receives an argument which was processed by foo.
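
The same example in runnable Python (continuation-passing style; rest_of_program is a made-up name standing for everything that would run after foo):

```python
def foo(value, continuation):
    # no return statement: hand the result to "the rest of the program"
    continuation(value + 1)

results = []
def rest_of_program(result):
    results.append(result)
    print("got", result)

foo(41, rest_of_program)  # prints: got 42
```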

conclusion

Each concept, although it extends or modifies the widely used notion of a lexically scoped subroutine, is independent of the others, but not mutually exclusive: in terms of implementation you may have a coroutine which is a closure, a continuation which is a closure, a coroutine which is a continuation, or vice versa.

So, my hypothesis is incorrect. We must rephrase it as: by definition, coroutine ∩ closure ∩ continuation = ∅.

But, by implementation, coroutine ∩ closure ∩ continuation ≠ ∅ is possible.

references

what the heck is: a coroutine
http://www.sidhe.org/~dan/blog/archives/000178.html

Wikipedia Coroutine
http://en.wikipedia.org/wiki/Coroutines

Closure
http://martinfowler.com/bliki/Closure.html

A Definition of Closures
http://gafter.blogspot.com/2007/01/definition-of-closures.html

Wikipedia Closure
http://en.wikipedia.org/wiki/Closure_(computer_science)

Fake threads
http://mail.python.org/pipermail/python-dev/1999-July/000467.html

Continuations made simple and illustrated
http://www.ps.uni-sb.de/~duchier/python/continuations.html

Wikipedia Continuations
http://en.wikipedia.org/wiki/Continuations

async gio test

In order to understand how to use the asynchronous API of gio, I have cooked up a small test. It was a little hard to figure out the use of the API, mostly because I could not find implementations using Google’s Code Search, nor by doing grand greps™ on my jhbuild checkout directory.

I hope this little test is useful for those who are trying to use gio for their async operations.

Programming tips: Google’s codesearch

Let’s face it: every programmer loves “copy&paste” when he is prototyping a solution, learning how to use a new library, or just needs a quick fix, and why not?

I mean, yes, it is important to understand what we are doing, the implications of every single line of code, and to keep the elegance, coherency and simplicity; but those qualities don’t come just from inspiration, nor from reading and understanding the theory, nor from a big flame war over email or IRC. We need examples. We learn, as good ape descendants, by imitation.

When we are learning, fixing or prototyping, we need fast feedback in order to stay happy and motivated. We need to perceive the progress of our iterative work, and “copy&paste” is great for those purposes. Then discipline must do its work, sustaining the newly acquired knowledge, but that is another story.

But even with “copy&paste” we must be smart and responsible with ourselves: we must seek the best examples to imitate, look to the alpha male, borrow the brightest resources available. “Copy&paste” the code of a lousy programmer and you will be another one; “copy&paste” the code of a great programmer and maybe, someday, you and I will be one.

A great resource for searching great sources of code, easily and fast, for a couple of years now, is Google’s Code Search. It searches across a great number of successful and recognised open source projects (released tarballs mostly).

When I am working on a programming task which implies new code, my usual work-flow is to visualise the solution as a sequential execution of macro operations, which get refined to a more fine-grained level at each iteration. At each iteration I usually find situations where I want a quick solution, or want to see how others have solved a problem, or just how to use a specific function or method call; so I go to the Code Search site, type the programming language and the related API, et voilà, several possible solutions are shown.

Furthermore, if I need to know how to achieve something with autoconf-fu, script-fu, or even system-configuration-fu, I can search for specific file names such as Makefile.am or configure.ac.

My two cents on programming tips.

summer hack

I have written a Rhythmbox plugin for playing mp3 streams from goear.com. It searches, requests several result pages on demand, fills in the metadata when the stream begins, and sorts the search results.

What is missing is the capability to make playlists with selected streams.

You can find the patch in Rhythmbox’s Bugzilla. Just patch the code, add the goear.png in the plugins/goear directory, build, et voilà. The patch was done against the current Subversion HEAD, but it also applies cleanly to the Ubuntu Hardy source package.

By the way, you can find an Ubuntu Hardy package for 32 bits with the patch here (the metadata filling doesn’t work there :()

devhelp and jhbuild

When I started to use jhbuild I resolved not to compile a whole new
Gnome desktop, because my work laptop is too slow and freezes easily: I
had to reduce the dependencies to a minimum and try to reuse the
libraries already provided by my distro (Ubuntu, by the way).

The first trouble I had was getting access to the gtk-doc API
documentation through the installed devhelp. The solution is simple,
but not obvious (at least for me):

Append to ~/.profile:

XDG_DATA_DIRS="${XDG_DATA_DIRS}":/opt/jhbuild/share

export XDG_DATA_DIRS

Of course, you have to change the path to your own jhbuild deployment
directory.

Guadec 2008

Last week I attended Guadec 2008 in Turkey. It was my second Guadec; the first one was in Sevilla in 2002, when I was backpacking around Europe. The big difference between my previous Guadec and this last one is that now I have a better understanding of the state of the art, so I could better appreciate the talks and proposals made, which I will try to summarise in this post.

The Gnome community faces new and not so new challenges. The framework is trying to push its limits to new frontiers, besides renovating its own core.

First, there is the discussion about the strategy for Gtk+. Imendio is pushing for an ABI break in order to polish the API, sealing the private data in the components, among other things. Nevertheless, Miguel, being the voice of several ISVs, disagrees with the proposed path.

On the Gtk+ road map, a great discussion has been raised about the new canvas. It seems that the favourite contender is Clutter, pushed by OpenedHand.

On another front, Mark Shuttleworth came up with the idea of integrating Qt in Gnome, replacing Gtk+ in the long term.

Alp Toker presented the advances in WebKit development, and, in my perception, a lot of gnomies are WebKit enthusiasts, because it has tighter integration with the Gtk+ widgets and currently better performance than Firefox (which is always temporary). So I expect more applications using WebKit rather than embedded Mozilla.

Staying in the web area, there’s a big effort, basically from the Red Hat guys, on the online desktop, integrating the desktop with different and heterogeneous web feeds. I think this is a must for the next desktop environment. More and more applications are web integrated, and providing a framework to do this and mix all that information smartly is a huge need.

The whole multimedia stack is healthily managed by the guys at Collabora and Fluendo. There’s an important effort in data communication (Telepathy and Farsight) using the GStreamer framework. Meanwhile another project, Ekiga, which doesn’t use GStreamer, is steady and currently the best application for VoIP in Gnome.

Finally, Miguel is pushing for an HTTP desktop application model, using Moonlight as the front-end machinery and a custom HTTP server as a gluing proxy for the back-end systems. A bold idea with not too many supporters.

To conclude, we must mention that in the mobile area there are great efforts (with Nokia as the main bidder) to keep in sync with the mainstream development and improve performance on both fronts.

The ideas are on the table, the discussion is ongoing, and the code is being committed. Where do you want to work?