Sending emails in Harmattan with Qt/QML

In the context of a personal ad-hoc app (more on that later) that I wrote for my Nokia N9, I needed to send an email with an attachment to a specific person. After some initial research into the Harmattan APIs, you end up at QMessageService.

The first thing I did was to write a mixed QML/C++ object that I could instantiate from the QML code, so that I could do something like:

Message {
    id: message
    from: "my@Address"
    to: [ "destination@Address" ]
    subject: "This is not Spam for sure!"
    body: "Trolled! Enlarge...!"
    attachments: [ "/a/path/to/an/attachment" ]
}

Button {
    onClicked: message.compose()
//    onClicked: message.send()
}

There we have an object with two methods, send and compose, three string properties representing the from, subject and body, and two string-list properties representing the to and attachments (we leave CC and BCC as an exercise 😉 ). As I already explained how to create objects with properties in previous posts, I'll go directly to the compose and send methods.
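For reference, the class declaration could look roughly like this (just a sketch assuming the QtMobility Messaging module; the getters and setters are omitted and everything that is not one of the properties shown above is illustrative):

#include <QDeclarativeItem>
#include <QMessage>
#include <QMessageService>

QTM_USE_NAMESPACE

class Message : public QDeclarativeItem
{
    Q_OBJECT
    Q_PROPERTY(QString from READ from WRITE setFrom)
    Q_PROPERTY(QStringList to READ to WRITE setTo)
    Q_PROPERTY(QString subject READ subject WRITE setSubject)
    Q_PROPERTY(QString body READ body WRITE setBody)
    Q_PROPERTY(QStringList attachments READ attachments WRITE setAttachments)

public:
    explicit Message(QDeclarativeItem *parent = 0);
    /* getters and setters for the properties omitted for brevity */

public slots:
    bool send();
    bool compose();

private:
    const QMessage createMessage();

    QMessageService service;
    QString m_from;
    QStringList m_to;
    QString m_subject;
    QString m_body;
    QStringList m_attachments;
};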

const QMessage Message::createMessage()
{
    QMessage message;

    message.setFrom(QMessageAddress(QMessageAddress::Email, m_from));
    QMessageAddressList toList;
    foreach (const QString &item, m_to) {
        toList.append(QMessageAddress(QMessageAddress::Email, item));
    }
    message.setTo(toList);
    message.setSubject(m_subject);
    message.setBody(m_body);
    message.appendAttachments(m_attachments);

    return message;
}

bool Message::send()
{
    QMessage message = createMessage();

    return service.send(message);
}

bool Message::compose()
{
    QMessage message = createMessage();

    return service.compose(message);
}

Easy, right?

Dancing Troll

If you want to be completely trolled stop reading here. If not, please continue.

That does not work; at least I could not make it work. I might be missing something, but I could not. I hooked into the state-change signal and watched the states go from sending to OK, yet no email composer showed up. No error was returned either.
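For reference, this is roughly how I hooked into the service state (a sketch; the slot is assumed to be declared in the class):

// in the constructor: watch the service state while sending
connect(&service, SIGNAL(stateChanged(QMessageService::State)),
        this, SLOT(onServiceStateChanged(QMessageService::State)));

// ...

void Message::onServiceStateChanged(QMessageService::State state)
{
    qDebug() << "service state:" << (int) state
             << "error:" << (int) service.error();
}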

Finally, I found an easier solution: using Qt.openUrlExternally with a mailto: URL directly from QML. You can use its lesser-known parameters to compose the message the way you want. Example:

Button {
    onClicked: Qt.openUrlExternally("mailto:destination@Address" +
                                    "?subject=Subject" + 
                                    "&attach=/path/to/the/attachment" +
                                    "&body=Body")
}

That does the job and is less complicated than writing a whole class. I leave the whole Message class code in a pastebin in case somebody wants to try it, or to build something from it on a platform where QMessageService really works.
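The same trick also works from C++ through QDesktopServices::openUrl, which is what Qt.openUrlExternally maps to. A minimal sketch, percent-encoding the free-text parts so spaces and ampersands do not break the URL (address and paths are placeholders):

#include <QDesktopServices>
#include <QUrl>

void composeMail()
{
    QString url = QString("mailto:destination@Address"
                          "?subject=%1"
                          "&attach=/path/to/the/attachment"
                          "&body=%2")
        .arg(QString(QUrl::toPercentEncoding("My subject")))
        .arg(QString(QUrl::toPercentEncoding("My body, with spaces & ampersands")));

    QDesktopServices::openUrl(QUrl(url));
}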

Ad-hoc program

My wife tracks all our finances using HomeBank and I needed an easy way to write down cash expenses. I decided to code a simple program that lets me manage expenses by adding them to a list (and deleting them) and export them as CSV, so they can be sent by email whenever she requests them. As the CSV format and the expense types were so ad-hoc, I decided not to complicate things and hard-coded them in the code and database.

I don’t think it will be interesting to anybody, but I might end up uploading it to some git repo so that it can be maintained by someone else or even forked.

Painting video with GStreamer and Qt/QML or Gtk+ with overlay

As part of my work at Igalia I have worked with video and GStreamer for some years. I always used Gtk+ for that, so when I needed to do things with Qt and QML, things were different. In my projects I always used plain GStreamer code instead of the Qt bindings for GStreamer, because at the time those bindings were not ready or reliable.

I know two ways of painting video:

  • Overlay way, with a window id and so on
  • Texture streaming

I might write later about texture streaming, but I will focus now on overlay.

Painting

The first way means that your graphical toolkit must give you a window id. The video sink asks for that window id at a very specific moment, and you need to provide it right then if you have not provided it before. For example, if you are using playbin2 and you already know which sink you want to use, just instantiate the sink, set the window id at that point with gst_x_overlay_set_window_handle, and hand the sink to the playbin2 element through its video-sink property.
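Something along these lines (a sketch with the GStreamer 0.10 API; the sink and URI are just examples, and window_id is whatever handle your toolkit gives you):

#include <gst/gst.h>
#include <gst/interfaces/xoverlay.h>

static void
play_video(guintptr window_id, const gchar *uri)
{
    GstElement *playbin, *sink;

    playbin = gst_element_factory_make("playbin2", "player");
    sink = gst_element_factory_make("xvimagesink", "videosink");

    /* we already know the window, so set the handle before going to PLAYING */
    gst_x_overlay_set_window_handle(GST_X_OVERLAY(sink), window_id);

    /* hand the sink to playbin2 through the video-sink property and start */
    g_object_set(playbin, "video-sink", sink, NULL);
    g_object_set(playbin, "uri", uri, NULL);
    gst_element_set_state(playbin, GST_STATE_PLAYING);
}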

If you are not using playbin2 but, for example, GStreamer Editing Services, you cannot use a property because there is currently none, and you need a more complicated method. I already reported the bug with its patches and hope they get applied soon to improve consistency with playbin2, because the way it is now is a bit inconsistent with the rest of the GStreamer code base.

Both Qt and Gtk+ now have client-side windows, which means that your program window has only one X window and it is the toolkit that decides which widget receives the events. The main consequence is that if we just set the window id, GStreamer will use the whole window and will paint the video over the rest of our widgets (it does not matter whether QML/Qt or Gtk+) and you will get very ugly effects. To solve that, you need to set the render rectangle, which is the set of coordinates (relative to the whole X window) where you want the video painted. You should do that with gst_x_overlay_set_render_rectangle, just after setting the window id.

If you do not set your window handle and your render rectangle before the pipeline begins to move, it will ask you for them with the prepare-xwindow-id GstMessage. This message can be posted from inside the GStreamer streaming threads and it cannot wait until the main loop runs; it needs the information at that very moment, so you have to connect to the synchronous bus handler. GStreamer has a good example in the GstXOverlay documentation about how to do that. To use the callback in C++, you declare a static method and pass this as the user data parameter; then you can behave almost as in a normal object method. This is the most common solution used in the GNOME world and it fits the Qt framework perfectly too.
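A sketch of that synchronous handler, assuming a Player class where busSyncHandler is declared static and windowId(), videoX(), videoY(), videoWidth() and videoHeight() are accessors of your own (all those names are made up):

GstBusSyncReply Player::busSyncHandler(GstBus *bus, GstMessage *message, gpointer data)
{
    Player *self = static_cast<Player *>(data);

    if (GST_MESSAGE_TYPE(message) != GST_MESSAGE_ELEMENT ||
        !gst_structure_has_name(gst_message_get_structure(message),
                                "prepare-xwindow-id"))
        return GST_BUS_PASS;

    /* the element that posted the message is the sink asking for the handle */
    GstElement *sink = GST_ELEMENT(GST_MESSAGE_SRC(message));
    gst_x_overlay_set_window_handle(GST_X_OVERLAY(sink), self->windowId());
    gst_x_overlay_set_render_rectangle(GST_X_OVERLAY(sink),
                                       self->videoX(), self->videoY(),
                                       self->videoWidth(), self->videoHeight());

    gst_message_unref(message);
    return GST_BUS_DROP;
}

/* registration, inside some Player method, before going to PLAYING */
GstBus *bus = gst_pipeline_get_bus(GST_PIPELINE(pipeline));
gst_bus_set_sync_handler(bus, Player::busSyncHandler, this);
gst_object_unref(bus);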

The code to set the window id and the render rectangle in Gtk+ would be something like:

GdkWindow *gdk_window;
gdk_window = gtk_widget_get_window(your_widget);
/* as your_sink you can use GST_MESSAGE_SRC() if you are waiting
   for the prepare-xwindow-id message */
gst_x_overlay_set_window_handle(GST_X_OVERLAY(your_sink),
                                GDK_WINDOW_XID(gdk_window));
/* do your maths about your coordinates */
gst_x_overlay_set_render_rectangle(GST_X_OVERLAY(your_sink),
                                   x, y, width, height);

In Qt, if you are using common widgets, you could use something like:

WId winId = QApplication::activeWindow()->effectiveWinId();
gst_x_overlay_set_window_handle(GST_X_OVERLAY(your_sink),
                                winId);
/* do your maths about your coordinates */
gst_x_overlay_set_render_rectangle(GST_X_OVERLAY(your_sink),
                                   x, y, width, height);

If you are using a QGraphicsScene you would do something like:

/* to get the view you could do something like this
   (if you have only one, or do not mind messing things up):
QGraphicsView *view = your_scene.views().first();
*/
gst_x_overlay_set_window_handle(GST_X_OVERLAY(your_sink),
                                view->viewport()->effectiveWinId());
/* do your maths about your coordinates */
gst_x_overlay_set_render_rectangle(GST_X_OVERLAY(your_sink),
                                   x, y, width, height);

If you are using QML, the approach is very similar to the last snippet: since you should have a QDeclarativeItem, it has a scene() that you can use to get something like QGraphicsView *view = scene()->views().first(); (assuming, of course, that you have only one view, which is the most common case).

Overlaying stuff

Sometimes it is nice to put your controls on top of the video, covering part of the image. It would be like having the video as the background of a canvas where you draw some other widgets. Some GStreamer elements allow a trick to do this: using a colorkey for your background and painting whatever you want on top of it, as long as it does not contain that colorkey. Elements like xvimagesink or omapxvsink (used in the Nokia N9 and N950) have a colorkey property that you can read and set. If you are not planning to overlay anything, you can forget about this, but if you are, you need to set a colorkey on the sink and use that color to paint the background of your widget, and a good moment for that is also when setting the window handle:

g_object_set(sink, "autopaint-colorkey", FALSE,
             "colorkey", 0x080810, NULL);

Why do I disable the colorkey autopainting? Because I do not want GStreamer to mess with my widget painting.
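Painting the background yourself then looks like this in a QDeclarativeItem subclass (a sketch, the class name is made up; remember that the constructor needs setFlag(QGraphicsItem::ItemHasNoContents, false) for paint() to be called at all):

void VideoSurface::paint(QPainter *painter,
                         const QStyleOptionGraphicsItem *option,
                         QWidget *widget)
{
    Q_UNUSED(option);
    Q_UNUSED(widget);

    /* same colorkey we set on the sink: the sink replaces every pixel
       of this exact color with video */
    painter->fillRect(boundingRect(), QColor(0x08, 0x08, 0x10));
}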

And more importantly: why did I use 0x080810? Because it is a dark color, close to black, but not black. Pure black can be dangerous, as it is commonly used by themes when painting widgets, so you would get ugly artifacts. Some people recommend magenta (0xFF00FF) as it is supposedly a color that does not exist in nature (citation needed). I would not use it, for several reasons:

  • You will need to synchronize your painting very well to avoid seeing the colorkey
  • If you respect the aspect ratio you will see it for sure, because you (or the sink, if it is automatic) paint the background and the sink draws the image leaving some empty space.
  • It does not behave well with blendings, as you blend from your widget color to the background, which is the colorkey.

Advice: do not mess with the colorkey on omapxvsink. Though it is supposed to be writable, it is not, and it always uses 0x080810.

Aspect ratio

There are two kinds of people:

  • The ones that want to use every pixel of their monitor/TV and like damaging their brains with distorted images.
  • The ones that like to see a correctly proportioned image, with some bars, giving a better impression of what was actually recorded.

As you can guess I belong to the second group.

Some sinks do that automatically for you when you set the force-aspect-ratio property, like ximagesink and xvimagesink, but others do not, omapxvsink being an example. It is not a big problem, but it forces you to work a bit more when you select the render rectangle. For that you need to know the video size, which you cannot know until the pipeline is running, which forces you to hook to GST_MESSAGE_ASYNC_DONE; in the case of playbin2, you already have the video size when getting the prepare-xwindow-id message. An example of how to get the video size would be:

GstPad *pad;
GstCaps *caps;
GstStructure *structure;
int width, height;

pad = GST_BASE_SINK_PAD(sink);
caps = GST_PAD_CAPS(pad);
g_return_if_fail(caps && gst_caps_is_fixed(caps));

structure = gst_caps_get_structure(caps, 0);
gst_structure_get_int(structure, "width", &width);
gst_structure_get_int(structure, "height", &height);

/* some videos define a pixel aspect ratio, meaning that a
   video pixel could be like 2x1 compared to a square pixel,
   and we need to correct for this */
if (gst_structure_has_field(structure, "pixel-aspect-ratio")) {
    int par_n, par_d;
    gst_structure_get_fraction(structure, "pixel-aspect-ratio",
                               &par_n, &par_d);
    width = width * par_n / par_d;
}

/* trick: some sinks perform better with multiple of 2 */
width &= ~1;
height &= ~1;
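With the corrected size you can then compute the render rectangle yourself, centering the image and leaving the bars. A sketch of the letterboxing maths, assuming widget_x, widget_y, widget_width and widget_height describe the area your widget occupies inside the X window:

int dst_width = widget_width;
int dst_height = widget_width * height / width;

/* if it does not fit vertically, fit to the height instead */
if (dst_height > widget_height) {
    dst_height = widget_height;
    dst_width = widget_height * width / height;
}

/* center the video inside the widget area */
gst_x_overlay_set_render_rectangle(GST_X_OVERLAY(sink),
                                   widget_x + (widget_width - dst_width) / 2,
                                   widget_y + (widget_height - dst_height) / 2,
                                   dst_width, dst_height);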

Aura

As my colleague Víctor at Igalia said before in his post, Aura was released to the Nokia Store. Miguel, Víctor and I are quite happy with the result achieved with this app, whose intention was to be a kind of port of the Cheese application from the GNOME platform, to be used on the N9 and N950 Nokia phones.

The app allows you to use both cameras (front and main) to record videos, applying a lot of funny effects (a subset of the GNOME Video Effects) and changing them during the recording. Nokia being a Finnish company, we decided to name the app after a Finnish cheese, to honor both the GNOME Cheese application and Finland 😉

You can download the app from the Nokia Store, where we already have more than 6000 downloads and 100 reviews with a quite good average rating.

Here is an example recorded by me with my own phone using the Historical effect and uploaded to YouTube:

And there are even other videos already uploaded to YouTube talking about how Aura works. This one is from a Brazilian guy (obrigado!) for FaixaMobi and shows more effects:

Of course, it being free software, you can also compile it yourself from the code at GitHub, and do not be afraid of contributing! The technologies we used were the camerabin element of GStreamer and Qt/QML for the interface, where we have the following components:

Aura components UML diagram

  • Main view (aura.qml) with the main interface
  • Controller, which is a mixed QML/C++ object that allows controlling the pipeline.
  • Pipeline is a C++ object used by the controller to encapsulate the GStreamer code.
  • PostCapture is also a mixed QML/C++ object that opens the gallery application to show the recorded video and gives you the opportunity of sharing it, deleting it and so on. It uses a C++ controller, loaded as a singleton into the context, to do some stuff that can only be done in C++. Of course, you can open Gallery yourself and the videos will show up there.
  • EffectManager is a C++ class to load and manage the Effects, which is another C++ class defining how the effect must be applied.
  • Effects (Effects.qml) is a QML component to show the different effects, both software and hardware that Aura can apply. It uses the EffectManager (through the context) to load them and the Controller to apply them.
  • About view (AboutView.qml) is a rework of something done by my colleague Simón Pena, adapted to be used in Aura (kudos!). It also uses a small AboutViewController to open a Nokia Store URL with the Store application instead of the browser.
  • ResourceManager is a C++ class used by the Controller to request the proper permissions to record the video.

Invoking MeeGo 1.2 Harmattan Gallery

As part of my work at Igalia I am writing an app to record videos on the Nokia N9. I wanted the recordings to show up in the Gallery, so I first tried the D-Bus approach, something similar to what is explained here. The problem is the same they faced: Gallery came to the foreground when it was not previously running, but not otherwise.

The solution was to use libcontentaction. In order to get this working, you first need to get the vendor name into the tags of your pictures or videos; otherwise Tracker will not index them correctly and this solution will be useless.

With the following solution, Gallery is brought to the foreground and shows the desired file. The code would be something like:

using ContentAction::Action;

[...]

{
    Action action =
        Action::defaultActionForFile(uri,
            "x-maemo-nepomuk/device-captured-video");
    if (action.isValid()) {
        qDebug() << Q_FUNC_INFO << "chosen action:" << action.name();
        action.triggerAndWait();
    } else {
        qWarning() << "could not file action for" << m_file;
    }
}

The mime type is one of those defined in the galleryserviceaction.desktop file that you can find in the device. For images you can also check the mime types in that file; maybe you do not even need to specify one, but I have not tried that and leave it to you. Please let me know your findings in the comments.

Mixed QML/C++ objects reloaded

Some days ago I wrote about how to have mixed QML/C++ objects in a QML application for Igalia, but I then faced a problem with the code I had written: I needed to receive events from the QML elements I was loading. Retaking that code:

#define QML_PATH "/path/to/the/qml/files/"

MyObject::MyObject(QDeclarativeItem *parent) :
    QDeclarativeItem(parent)
{
    QDeclarativeEngine engine;
    QDeclarativeComponent component(&engine,
        QUrl::fromLocalFile(QML_PATH "MyObject.qml"));
    QDeclarativeItem *rect =
        dynamic_cast<QDeclarativeItem *>(component.create());
    rect->setParentItem(this);
}

and being MyObject.qml something like:

Button {
    id: "myButton"
    text: "Do something cool"
}

The natural way of connecting that button to some slot you are writing would be something like:

MyObject::MyObject(QDeclarativeItem *parent) :
    QDeclarativeItem(parent)
{
    QDeclarativeEngine engine;
    QDeclarativeComponent component(&engine,
        QUrl::fromLocalFile(QML_PATH "MyObject.qml"));
    QDeclarativeItem *item =
        dynamic_cast<QDeclarativeItem *>(component.create());
    item->setParentItem(this);
    connect(item, SIGNAL(clicked()), this, SLOT(doCoolStuff()));
}

Even if you do this, you will not get any events from the QML button you declared. It behaves as if it were a label or whatever. The key is the engine: the engine must live for as long as you need events from the QML component, and the easiest way to achieve that is to make the engine a private class attribute:

#ifndef MYOBJECT_H
#define MYOBJECT_H

#include <QDeclarativeComponent>
#include <QDeclarativeEngine>
#include <QDeclarativeItem>

class MyObject : public QDeclarativeItem
{
    Q_OBJECT

// ...

 private slots:
    void doCoolStuff();

 private:
    QDeclarativeEngine m_engine;
};
#endif

and of course removing the engine stack declaration from the constructor:

MyObject::MyObject(QDeclarativeItem *parent) :
    QDeclarativeItem(parent)
{
    QDeclarativeComponent component(&m_engine,
        QUrl::fromLocalFile(QML_PATH "MyObject.qml"));
    QDeclarativeItem *item =
        dynamic_cast<QDeclarativeItem *>(component.create());
    item->setParentItem(this);
    connect(item, SIGNAL(clicked()), this, SLOT(doCoolStuff()));
}

Voilà.

Of course, if you need to connect to any element that is not the root element, you can always forward the signals and properties from the QML root object or just use the QObject::findChild method to access the right component.
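For the findChild route, a minimal sketch would be something like this, assuming the inner element declares objectName: "myButton" (findChild matches against objectName, not against the QML id):

QObject *button = item->findChild<QObject *>("myButton");
if (button)
    connect(button, SIGNAL(clicked()), this, SLOT(doCoolStuff()));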

Mixed QML/C++ objects

One of the good things of QML is that you can have both C++ and QML code and interact between them. For example, from C++, you can access the QML tree and invoke methods, change properties, etc.

The other way around, you can also define C++ objects and interact with them from QML. Here you have two ways of doing it:

  • Instantiating them from C++ and adding the objects to the QML context, meaning that you can invoke their public slots and access their properties (a minimal sketch of this approach follows this list).
  • Registering the QML type with qmlRegisterType and then instantiating it from QML.
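For the first approach, a minimal sketch (all the names are illustrative) could be:

#include <QApplication>
#include <QDeclarativeContext>
#include <QDeclarativeView>
#include <QUrl>

#include "myobject.h"

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);

    QDeclarativeView view;
    MyObject myObject;

    /* QML can now call myObject's public slots and read its properties */
    view.rootContext()->setContextProperty("myObject", &myObject);
    view.setSource(QUrl::fromLocalFile("main.qml"));
    view.show();

    return app.exec();
}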

In the Qt documentation you can find examples of how to implement the different approaches, so I won’t talk too much about them here and will focus on a special case of the second one.

Let’s see a QML code like this:

Item {
    id: myObject
    function doCoolStuff() {
        // doing really cool stuff here
    }
    Rectangle {
        anchors.fill: parent
        color: "red"
    }
}

Button {
    text: "Do cool stuff!"
    onClicked: myObject.doCoolStuff()
}

Imagine now that painting that rectangle is something that must really be done by your object, because it is needed and inherent to it, and so is the function. If the code is written only in QML, the answer is obvious: just move the whole Item code to a .qml file and leave the main code like this:

MyObject {
    id: myObject
}

Button {
    text: "Do cool stuff!"
    onClicked: myObject.doCoolStuff()
}

Let’s suppose that you need some C++ code in that object for whatever reason (you want to use some GNOME library, for instance). In this case you need to write it in C++ and define the public slots in that language. Our first step would be something like this:

#include <QtCore>
#include <QtDeclarative>

 class MyObject : public QDeclarativeItem
 {
     Q_OBJECT

// Define some Q_PROPERTY here with its methods

 public slots:
     void doCoolStuff(void);

// We could even define some signals
 };

For the slot, it does not matter whether you declare it as a slot or with Q_INVOKABLE, because QML will see it either way. Of course, don’t forget to write the cpp file with the implementation of the slot 😉 . You also need to register the QML type, for example in your main.cpp file:

#include <QtDeclarative>
#include "myobject.h"

int main(int argc, char *argv[])
{
    // ...
    qmlRegisterType("myapp", 1, 0, "myobject");
    //...
}

So far this is no different from what you can read in the Defining new QML elements section of the Qt documentation.

But what about painting the rectangle? We can do it with Qt code, but as we are lazy bastards and want to use QML for that, we have two options. One of them is forgetting about it in the C++ code and having something ugly like:

import myapp 1.0

MyObject {
    Rectangle {
        anchors.fill: parent
        color: "red"
    }
}

But come on, that rectangle is part of the object and nobody should have to write the Rectangle themselves. So we can take the better approach of moving the Rectangle to a separate MyObject.qml file and loading it in the constructor of the C++ class:

#define QML_PATH "/path/to/the/qml/files/"

MyObject::MyObject(QDeclarativeItem *parent) : QDeclarativeItem(parent)
{
    QDeclarativeEngine engine;
    QDeclarativeComponent component(&engine, QUrl::fromLocalFile(QML_PATH "MyObject.qml"));
    QDeclarativeItem *rect = dynamic_cast<QDeclarativeItem *>(component.create());
    rect->setParentItem(this);
}

Voilà.