<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Words from the Inside</title>
    <description>Uninteresting things from an uninteresting developer</description>
    <link>https://blogs.igalia.com/jasuarez/</link>
    <atom:link href="https://blogs.igalia.com/jasuarez/feed.xml" rel="self" type="application/rss+xml" />
    <pubDate>Sun, 04 Jan 2026 00:28:37 +0100</pubDate>
    <lastBuildDate>Sun, 04 Jan 2026 00:28:37 +0100</lastBuildDate>
    <generator>Jekyll v4.4.1</generator>
    
      <item>
        <title>Major Upgrades to the Raspberry Pi GPU Driver Stack (XDC 2025 Recap)</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://indico.freedesktop.org/event/10/&quot;&gt;XDC 2025&lt;/a&gt; happened at the end of September and beginning of October this year, in the
&lt;a href=&quot;https://www.tuwien.at/tu-wien/unileben/kultur/veranstaltungsraeume&quot;&gt;Kuppelsaal&lt;/a&gt; of the historic &lt;a href=&quot;https://www.tuwien.at&quot;&gt;TU Wien&lt;/a&gt; building in Vienna. XDC, the X.Org
Developer’s Conference, is truly the premier gathering for open-source graphics
development. The atmosphere was, as always, highly collaborative and packed
with experts across the entire stack.&lt;/p&gt;

&lt;p&gt;I was thrilled to present, together with my workmate &lt;a href=&quot;https://www.igalia.com/team/ella&quot;&gt;Ella Stanforth&lt;/a&gt;, on the
progress we have made in enhancing the Raspberry Pi GPU driver stack.
Representing the broader &lt;a href=&quot;https://www.igalia.com/technology/graphics&quot;&gt;Igalia Graphics Team&lt;/a&gt; that works on this GPU, Ella and
I detailed the strides we have made in the OpenGL driver, though some of the
improvements also benefit the Vulkan driver.&lt;/p&gt;

&lt;p&gt;The presentation was divided into two parts. In the first one, we talked about
the new features that we have implemented, or are still implementing, mainly
to bring the driver more closely in line with OpenGL 3.2. Key features explained
were 16-bit Normalized Format support, Robust Context support, and Seamless
cubemap implementation.&lt;/p&gt;

&lt;p&gt;Beyond these core OpenGL updates, we also highlighted other features, such as
NIR printf support, framebuffer fetch, and dual-source blending, the latter being
important for some game emulators.&lt;/p&gt;

&lt;p&gt;The second part focused on specific work done to improve performance.
Here, we started with different traces from the popular &lt;a href=&quot;https://gfxbench.com&quot;&gt;GFXBench&lt;/a&gt; application,
and explained the main improvements made throughout the year, with a look at
how much each of these changes improved the performance of each of the
benchmarks (or on average).&lt;/p&gt;

&lt;p&gt;In the end, for some benchmarks we nearly doubled the performance compared to
last year. I won’t explain each of the changes here, but I encourage the
reader to watch the talk, which is already available.&lt;/p&gt;

&lt;iframe width=&quot;560&quot; height=&quot;315&quot; src=&quot;https://www.youtube.com/embed/HRts_l6BvsM?si=KINSP2mn0lPxUgDW&quot; title=&quot;Improvements to the Raspberry Pi GPU driver stack&quot; frameborder=&quot;0&quot; allow=&quot;fullscreen&quot; referrerpolicy=&quot;strict-origin-when-cross-origin&quot;&gt;&lt;/iframe&gt;

&lt;p&gt;For those that prefer to check the slides instead of the full video, you can
view them here:&lt;/p&gt;

&lt;iframe width=&quot;100%&quot; height=&quot;515&quot; src=&quot;https://people.igalia.com/jasuarez/slides/xdc-2025/&quot; title=&quot;Improvements to the Raspberry Pi GPU driver stack&quot; referrerpolicy=&quot;strict-origin&quot; allow=&quot;fullscreen&quot;&gt;&lt;/iframe&gt;

&lt;p&gt;Outside of the technical track, the venue’s location provided some excellent
downtime opportunities to have lunch at different nearby places. I have to
highlight one that I really enjoyed: &lt;a href=&quot;https://www.anskitchen.at&quot;&gt;An’s Kitchen Karlsplatz&lt;/a&gt;. This cozy
Vietnamese street food spot quickly became one of my favourite places, and I
went there a couple of times.&lt;/p&gt;

&lt;p&gt;On the last day, I also had the opportunity to visit some of the most
recommended sightseeing spots in Vienna. Of course, one needs more than
half a day for a proper visit, but it was enough to spark an interest in
returning to the city for a full visit.&lt;/p&gt;

&lt;p&gt;Meanwhile, I would like to thank all the conference organizers, as well as all
the attendees, and I look forward to seeing them again.&lt;/p&gt;

</description>
        <pubDate>Mon, 24 Nov 2025 00:00:00 +0100</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2025/11/24/raspberry-xdc-2025/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2025/11/24/raspberry-xdc-2025/</guid>
        
        <category>graphics</category>
        
        <category>raspberrypi</category>
        
        
        <category>Freedesktop</category>
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>Downloading personal data from web services</title>
        <description>&lt;p&gt;This post is a basic compilation of how to get your personal data from different social networks and web services. I mainly wrote it for myself, as I usually make a backup of that data once or twice per year, so having all the steps in a single post is handy. I hope it is useful for others too.&lt;/p&gt;

&lt;p&gt;The services here are the ones I use. My plan is to update the post with new instructions if they change, or with new services I start using (and, the other way around, I’ll drop the services that I don’t use anymore).&lt;/p&gt;

&lt;!-- ## Facebook --&gt;

&lt;!-- This service provides an easy way to export all your data. --&gt;

&lt;!-- - Go to [&quot;Settings&quot;](https://www.facebook.com/settings). --&gt;
&lt;!-- - In [&quot;Your Facebook information&quot;](https://www.facebook.com/settings?tab=your_facebook_information), click on [&quot;Download a copy&quot;](https://www.facebook.com/dyi/?referrer=yfi_settings) of your Facebook data. --&gt;
&lt;!-- - You can configure which data you want to download, dates, and so on. --&gt;
&lt;!-- - Once done, click on &quot;Start My Archive&quot;. --&gt;

&lt;!-- It will take some time to have the data avaialable. When it is ready, an email is sent with the link to download it. --&gt;

&lt;h2 id=&quot;feedly&quot;&gt;Feedly&lt;/h2&gt;

&lt;p&gt;This service provides a direct way to export our data:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Go to &lt;a href=&quot;https://feedly.com/i/account&quot;&gt;“Settings”&lt;/a&gt; and then select &lt;a href=&quot;https://feedly.com/i/account/privacy&quot;&gt;“Privacy and Personal data”&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Scroll down to “Control Your Data and Take Your Data With You”, and click on “Generate your personal data archive”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;After some minutes, a compressed file will be offered with all your data.&lt;/p&gt;

&lt;!-- ## Flickr --&gt;

&lt;!-- Flickr doesn&apos;t provide a way to export all your data. So we need to use an script instead. --&gt;

&lt;!-- I recommend [flickrmirrorer](https://github.com/markdoliner/flickrmirrorer.git). --&gt;

&lt;!-- ~~~ --&gt;
&lt;!-- $ virtualenv ~/ve --&gt;
&lt;!-- $ source ~/ve/bin/activate --&gt;
&lt;!-- $ pip install python-dateutil flickrapi --&gt;
&lt;!-- $ git clone https://github.com/markdoliner/flickrmirrorer.git --&gt;
&lt;!-- ~~~ --&gt;

&lt;!-- To perform the backup, just run --&gt;

&lt;!-- ~~~ --&gt;
&lt;!-- ~/flickrmirrorer.py &lt;output&gt; --&gt;
&lt;!-- ~~~ --&gt;

&lt;!-- As flickrmirrorer uses OAuth to authenticate, it opens a web browser where you can grant access permissions. Once you authorize it, an authorization token is displayed (9 digits). Copy it and paste in the same terminal you run flickrmirrorer, and press Enter. The backup is now performer. --&gt;

&lt;!-- ## Foursquare --&gt;

&lt;!-- - Ensure you have signed in Foursquare --&gt;
&lt;!-- - Go to https://foursquare.com/feeds --&gt;
&lt;!-- - You can export in several formats: RSS, KML, ICS or even to Google Calendar --&gt;

&lt;h2 id=&quot;github&quot;&gt;GitHub&lt;/h2&gt;

&lt;p&gt;This service also provides a way to export our data:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Go to &lt;a href=&quot;https://github.com/settings/admin&quot;&gt;“Settings/Account”&lt;/a&gt;, and in “Export account data” section, click on “Start export”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It will take some minutes to provide the data. The link to download it will be sent through an email.&lt;/p&gt;

&lt;!-- For GitHub we will use a script, `github-backup`, that allows to backup both the repositories and also the issues or comments. --&gt;

&lt;!-- ~~~ --&gt;
&lt;!-- $ virtualenv ~/ve --&gt;
&lt;!-- $ source ~/ve/bin/activate --&gt;
&lt;!-- $ pip install github-backup --&gt;
&lt;!-- ~~~ --&gt;

&lt;!-- As I&apos;m using two-factor authentication in GitHub, I need to setup a [personal token](https://github.com/settings/tokens) to use with the script. Just generate a new one and copy it. It can be removed after backup is done. --&gt;

&lt;!-- Now, let&apos;s do the backup: --&gt;

&lt;!-- ~~~ --&gt;
&lt;!-- github-backup -u &lt;username&gt; -t &lt;token&gt;  --all -o &lt;output&gt;  &lt;user|organization&gt; --&gt;
&lt;!-- ~~~ --&gt;

&lt;!-- &lt;username&gt; and &lt;token&gt; is for authentication purposes. &lt;user|organization&gt; is the user or organization owning the repositories you want to backup. If you are backuping your own repos then it will be the same as &lt;username&gt;. But could be you have created or are part of other organizations. In this case you would use such name. --&gt;

&lt;h2 id=&quot;goodreads&quot;&gt;Goodreads&lt;/h2&gt;

&lt;p&gt;To get a copy of your Goodreads data:&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Go to &lt;a href=&quot;https://www.goodreads.com/user/edit?ref=nav_profile_settings&quot;&gt;“Account settings”&lt;/a&gt;, and &lt;a href=&quot;https://www.goodreads.com/user/edit?tab=settings&quot;&gt;“Settings”&lt;/a&gt; tab.&lt;/li&gt;
  &lt;li&gt;In the Privacy section, click on &lt;a href=&quot;https://www.goodreads.com/dsar/user&quot;&gt;“Submit a download request”&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Now click on “Submit request”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Again, a link to download the data will be sent to your email.&lt;/p&gt;

&lt;h2 id=&quot;google&quot;&gt;Google&lt;/h2&gt;

&lt;p&gt;Google allows you to export everything you have (GMail, YouTube, etc.). Note that this could be a lot of data.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Go to &lt;a href=&quot;https://myaccount.google.com/&quot;&gt;your account&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Go to &lt;a href=&quot;https://myaccount.google.com/data-and-privacy&quot;&gt;“Data &amp;amp; privacy”&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Scroll down until &lt;a href=&quot;https://takeout.google.com/&quot;&gt;“Download your data”&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Finally select “Create archive”.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Now you can control which data you want to export (by default, data from all Google services). Once you have chosen, press “Next”. Then you can select the destination file type, the maximum size (data will be split into files of at most that size), and where the file is delivered (by default, a link to download it is emailed). We just use the default options and press “Create archive”. Now just wait until you get it.&lt;/p&gt;

&lt;h2 id=&quot;gravatar&quot;&gt;Gravatar&lt;/h2&gt;

&lt;p&gt;This service offers an option to export your data.&lt;/p&gt;

&lt;p&gt;Just go to &lt;a href=&quot;https://en.gravatar.com/account/export-data/&quot;&gt;“Export Data”&lt;/a&gt; option and click on “Send me my data”.&lt;/p&gt;

&lt;h2 id=&quot;lastfm&quot;&gt;LastFM&lt;/h2&gt;

&lt;p&gt;Unfortunately, they removed the option they had to export data. So we need to use an external application to export, at least, part of the user data.&lt;/p&gt;

&lt;p&gt;There’s a Python 2 script, &lt;a href=&quot;https://gist.github.com/bitmorse/5201491&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;lastexport.py&lt;/code&gt;&lt;/a&gt;; download it and run it as:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;python lastexport.py -u &amp;lt;username&amp;gt; -o &amp;lt;output&amp;gt; [-t &amp;lt;type&amp;gt;]
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;By default it exports scrobbles, but we can set &amp;lt;type&amp;gt; to &quot;scrobbles&quot;, &quot;loved&quot; or &quot;banned&quot; (this last one doesn&apos;t work anymore). Output is in CSV.&lt;/p&gt;

&lt;h2 id=&quot;libravatar&quot;&gt;Libravatar&lt;/h2&gt;

&lt;p&gt;It is very easy to download the data.&lt;/p&gt;

&lt;p&gt;Once logged in, click on &lt;a href=&quot;https://www.libravatar.org/accounts/export/&quot;&gt;“Download your libravatar data”&lt;/a&gt; in the main menu. It will provide your data as a compressed XML file.&lt;/p&gt;

&lt;h2 id=&quot;linkedin&quot;&gt;LinkedIn&lt;/h2&gt;

&lt;p&gt;Fortunately, this service provides a way to export all your data.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Go to &lt;a href=&quot;https://www.linkedin.com/mypreferences/d/categories/account&quot;&gt;“Settings &amp;amp; Privacy”&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;In “Data privacy”, select &lt;a href=&quot;https://www.linkedin.com/mypreferences/d/download-my-data&quot;&gt;“Get a copy of your data”&lt;/a&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Select what to download, and press “Request archive”. An email will be sent with the link to download the data.&lt;/p&gt;

&lt;!-- ## ResearchGate --&gt;

&lt;!-- Unfortunately, I didn&apos;t found a way to export data from this service, except exporting your profile as CV. --&gt;

&lt;!-- - Go to your profile --&gt;

&lt;!-- - In the right column, at the end of the page, a button entitled &quot;Export your profile as a CV&quot; allows to export it (in Microsoft Word format). --&gt;

&lt;h2 id=&quot;mastodon&quot;&gt;Mastodon&lt;/h2&gt;

&lt;p&gt;This service allows you to download your data too.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Click on &lt;a href=&quot;https://floss.social/settings/preferences&quot;&gt;“Preferences”&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;Then click on &lt;a href=&quot;https://floss.social/settings/export&quot;&gt;“Import and export”&lt;/a&gt;, which by default selects “Data export”.&lt;/li&gt;
  &lt;li&gt;On the right side, you can request a file with your data.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;slideshare&quot;&gt;SlideShare&lt;/h2&gt;

&lt;p&gt;Go to your Account, and select Export.&lt;/p&gt;

&lt;p&gt;It provides two downloads: a JSON file with your account data, and a CSV with links to all your documents and presentations.&lt;/p&gt;

&lt;!-- We will use a script, [slideshare-backup](https://github.com/jasuarez/slideshare-backup.git). --&gt;

&lt;!-- ~~~ --&gt;
&lt;!-- $ virtualenv ~/ve --&gt;
&lt;!-- $ source ~/ve/bin/activate --&gt;
&lt;!-- $ pip install slideshare --&gt;
&lt;!-- $ git clone https://github.com/jasuarez/slideshare-backup.git --&gt;
&lt;!-- $ python ~/slideshare-backup/slideshare-backup -o &lt;output&gt; &lt;account&gt; --&gt;
&lt;!-- ~~~ --&gt;

&lt;h2 id=&quot;telegram&quot;&gt;Telegram&lt;/h2&gt;

&lt;p&gt;Surprisingly, it is possible to get all your Telegram data, but this requires using the &lt;a href=&quot;https://desktop.telegram.org/&quot;&gt;Telegram Desktop&lt;/a&gt; application.&lt;/p&gt;

&lt;p&gt;Once installed, go to “Settings”, then “Advanced” and at the end there is “Export Telegram data”. Select what information to export, and press “Export”.&lt;/p&gt;

&lt;p&gt;It will take some time to get the data ready.&lt;/p&gt;

&lt;h2 id=&quot;twitter&quot;&gt;Twitter&lt;/h2&gt;

&lt;p&gt;Another service that provides an easy way to export all your data.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Go to account “Settings”&lt;/li&gt;
  &lt;li&gt;Select “Request your archive” in “Account”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;An email will be sent with a link to download the file.&lt;/p&gt;

&lt;!-- ## WordPress --&gt;

&lt;!-- This services provides an easy way to make backups. --&gt;

&lt;!-- - Go to admin page (https://blogs.igalia.com/jasuarez/wp-admin) --&gt;
&lt;!-- - Tools &gt; Export . --&gt;
&lt;!--   - Check &quot;All Content&quot; . This creates an XML that will contain all of your posts, pages, comments, custom fields, terms, navigation menus, and custom posts. But we can also only select posts, page or media --&gt;
&lt;!--   - Press &quot;Download Export File&quot; To download everytihing --&gt;

&lt;!-- Another option is through a backup, which is a plugin (https://es.wordpress.org/plugins/wp-db-backup) --&gt;

&lt;!-- - Tools &gt; Backup. It shows a list of the internal database tables that will be saved. By default, only the core ones will be saved. But only tables can also be selected. Usually these non-core tables belong to plugins, or to old WordPress versions. --&gt;

&lt;!-- - Once we selected the tables we want to backup, we can choose to download the backup, or send through email. And press &quot;Backup now!&quot; to perform de backup. --&gt;

&lt;h2 id=&quot;xiaomi&quot;&gt;Xiaomi&lt;/h2&gt;

&lt;p&gt;Another service that provides a way to download your data.&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;Go to your &lt;a href=&quot;https://account.xiaomi.com&quot;&gt;Xiaomi Account&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;In the &lt;a href=&quot;https://account.xiaomi.com/fe/service/account/privacy&quot;&gt;“Privacy”&lt;/a&gt; section, click on “Manage your data”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;It provides different options to download your data. It takes time to get the data ready (even some days or weeks), but a link to the data will be sent to your email.&lt;/p&gt;

</description>
        <pubDate>Wed, 12 Jan 2022 00:00:00 +0100</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2022/01/12/social-backup/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2022/01/12/social-backup/</guid>
        
        <category>backup</category>
        
        
      </item>
    
      <item>
        <title>Implementing Performance Counters in V3D driver</title>
        <description>&lt;p&gt;Let me talk here about how we implemented the support for performance counters
in the &lt;a href=&quot;https://www.mesa3d.org/&quot;&gt;Mesa&lt;/a&gt; &lt;a href=&quot;https://docs.mesa3d.org/drivers/v3d.html&quot;&gt;V3D driver&lt;/a&gt;, the OpenGL driver used by the &lt;a href=&quot;https://www.raspberrypi.org/products/raspberry-pi-4-model-b/&quot;&gt;Raspberry
Pi 4&lt;/a&gt;. For reference, the implementation is very similar to the one
already available (not done by me, by the way) for the &lt;a href=&quot;https://docs.mesa3d.org/drivers/vc4.html&quot;&gt;VC4&lt;/a&gt;, the OpenGL
driver for the &lt;a href=&quot;https://www.raspberrypi.org/products/raspberry-pi-3-model-b-plus/&quot;&gt;Raspberry Pi 3&lt;/a&gt; and prior devices, also part of Mesa. If
you are already familiar with how this is implemented in VC4, then this will
mostly be a refresher.&lt;/p&gt;

&lt;p&gt;First of all, what are these performance counters? Most processors
nowadays contain some hardware facilities to get measurements about what is
happening inside the processor. And of course graphics processors aren’t
different. In this case, the graphics chips used by Raspberry Pi devices
(manufactured by Broadcom) can record a bunch of different graphics-related
parameters: how many quads are passing or failing depth/stencil tests, how many
clock cycles are spent on vertex/fragment shading, hits/misses in the GPU
cache, and many other values. In fact, with the V3D driver it is possible to
measure around 87 different parameters, up to 32 of them simultaneously.
Quite a few fewer in VC4, though. But still a lot.&lt;/p&gt;

&lt;p&gt;On a hardware level, using these counters is just a matter of writing and
reading some GPU registers. First, write the registers to select what we want
to measure, then a few more to start measuring, and finally read other
registers containing the results. But of course, much like we don’t expect
users to write GPU assembly code, we don’t expect users to write registers in
the GPU directly. Moreover, even Mesa drivers such as V3D can’t interact
directly with the hardware; rather, this is done through the kernel, which is
the only one that can use the hardware directly, via its DRM subsystem.
In the case of V3D (and the same applies to VC4, and in general to any other
driver), we have a driver in user-space (whether the OpenGL driver, V3D, or the
Vulkan driver, V3DV), and a kernel driver in the kernel-space, unsurprisingly
also called V3D. The user-space driver is in charge of translating all the
commands and options created with the OpenGL API or other APIs into batches of
commands to be executed by the GPU, which are submitted to the kernel driver as
DRM jobs. The kernel does the proper actions to send these to the GPU to
execute them, including touching the proper registers. Thus, if we want to
implement support for the performance counters, we need to modify the code in
two places: the kernel and the (user-space) driver.&lt;/p&gt;

&lt;h2 id=&quot;implementation-in-the-kernel&quot;&gt;Implementation in the kernel&lt;/h2&gt;

&lt;p&gt;Here we need to think about how to deal with the GPU and the registers to make
the performance counters work, as well as the API we provide to user-space to
use them. As mentioned before, the approach we are following here is the same
as the one used in the VC4 driver: &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_perfmon.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662&quot;&gt;performance counters monitors&lt;/a&gt;.
That is, the user-space driver &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/include/uapi/drm/v3d_drm.h?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n53&quot;&gt;creates&lt;/a&gt; one or more monitors,
specifying for each monitor &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/include/uapi/drm/v3d_drm.h?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n281&quot;&gt;what counters it is interested
in&lt;/a&gt; (up to 32 simultaneously, the hardware limit). The kernel
returns a &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/include/uapi/drm/v3d_drm.h?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n375&quot;&gt;unique identifier&lt;/a&gt; for each monitor, which can be used
later to do the measurement, &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/include/uapi/drm/v3d_drm.h?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n57&quot;&gt;query the results&lt;/a&gt;, and finally
&lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/include/uapi/drm/v3d_drm.h?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n55&quot;&gt;destroy it&lt;/a&gt; when done.&lt;/p&gt;
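
&lt;p&gt;This create/measure/query/destroy lifecycle can be sketched with a toy model (plain Python standing in for the real DRM ioctls; the class and method names here are illustrative, not the actual UAPI):&lt;/p&gt;

```python
# Toy model of the perfmon lifecycle the kernel exposes to user-space.
# The real interface is a set of DRM ioctls on the V3D device; the names
# here are illustrative only.
MAX_COUNTERS = 32  # hardware limit: up to 32 counters per monitor

class PerfmonManager:
    def __init__(self):
        self.next_id = 1
        self.monitors = {}

    def create(self, counters):
        # A monitor selects which counters to measure; values start at zero.
        assert len(counters) in range(1, MAX_COUNTERS + 1)
        ident = self.next_id          # unique id returned to user-space
        self.next_id += 1
        self.monitors[ident] = {c: 0 for c in counters}
        return ident

    def get_values(self, ident):
        # Query the values accumulated so far for this monitor.
        return dict(self.monitors[ident])

    def destroy(self, ident):
        del self.monitors[ident]

# Usage: create a monitor, query it, destroy it when done.
mgr = PerfmonManager()
mon = mgr.create(["cycles", "cache-hits"])
assert mgr.get_values(mon) == {"cycles": 0, "cache-hits": 0}
mgr.destroy(mon)
```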

&lt;p&gt;In this case, there isn’t an explicit start/stop of the measurement. Rather, every
time the driver wants to measure a job, it includes &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/include/uapi/drm/v3d_drm.h?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n141&quot;&gt;the&lt;/a&gt;
&lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/include/uapi/drm/v3d_drm.h?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n278&quot;&gt;identifier&lt;/a&gt; of the monitor it wants to use for that job, if
any. Before submitting a job to the GPU, the kernel &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_sched.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n133&quot;&gt;checks&lt;/a&gt; if
the job has a monitor identifier attached. If so, then it needs to check if the
previous job executed by the GPU was also using the same monitor identifier, in
which case it doesn’t need to do anything other than send the job to the GPU,
as the performance counters required are already enabled. If the &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_sched.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n69&quot;&gt;monitor is
different&lt;/a&gt;, then it needs first to &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_perfmon.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n74&quot;&gt;read the current
counter values&lt;/a&gt; (through proper GPU registers), adding them
to the current monitor, &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_perfmon.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n76&quot;&gt;stop the measurement&lt;/a&gt;, &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_perfmon.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n35&quot;&gt;configure
the counters&lt;/a&gt; for the new monitor, &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_perfmon.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n53&quot;&gt;start the measurement
again&lt;/a&gt;, and finally &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/tree/drivers/gpu/drm/v3d/v3d_sched.c?id=26a4dc29b74a137f45665089f6d3d633fcc9b662#n147&quot;&gt;submit&lt;/a&gt; the new job to
the GPU. In this process, if it turns out there wasn’t a monitor under
execution before, then it only needs to execute the last steps.&lt;/p&gt;
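
&lt;p&gt;The switching logic described above can be sketched as a small simulation (Python pseudocode, not the actual kernel code; the bookkeeping is simplified to a single accumulated value per monitor):&lt;/p&gt;

```python
# Simplified simulation of the scheduling decision: counters accumulate
# into whichever monitor is active, and are flushed and reconfigured only
# when a job carries a different monitor id than the previous one.

class GpuSim:
    def __init__(self):
        self.active = None     # monitor id currently programmed, if any
        self.hw_count = 0      # what the hardware counter registers hold
        self.monitors = {}     # monitor id mapped to accumulated value

    def new_monitor(self, ident):
        self.monitors[ident] = 0

    def run_job(self, perfmon_id, cycles):
        if perfmon_id != self.active:
            if self.active is not None:
                # read the current values, add them to the old monitor,
                # and stop the measurement
                self.monitors[self.active] += self.hw_count
            if perfmon_id is not None:
                # configure the counters for the new monitor and restart
                self.hw_count = 0
            self.active = perfmon_id
        # the job executes; hardware counters tick only while measuring
        if self.active is not None:
            self.hw_count += cycles

    def read(self, ident):
        total = self.monitors[ident]
        if ident == self.active:
            total += self.hw_count   # include not-yet-flushed values
        return total

# Two applications with different monitors, interleaved: their counter
# values don't get mixed up.
sim = GpuSim()
sim.new_monitor(1)
sim.new_monitor(2)
sim.run_job(1, 100)
sim.run_job(2, 50)
sim.run_job(1, 10)
assert sim.read(1) == 110
assert sim.read(2) == 50
```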

&lt;p&gt;The reason to do all this is that multiple applications can be executing at the
same time, some using (different) performance counters, and most of them
probably not using performance counters at all. But the performance counter
values of one application shouldn’t affect any other application, so we need to
make sure we don’t mix up the counters between applications. Keeping the values
in their respective monitors helps to accomplish this. There is still a small
requirement in the user-space driver to help with accomplishing this, but in
general, this is how we avoid the mixing.&lt;/p&gt;

&lt;p&gt;If you want to take a look at the full implementation, it is available in a
&lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/commit/?id=26a4dc29b74a137f456&quot;&gt;single commit&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;implementation-in-the-driver&quot;&gt;Implementation in the driver&lt;/h2&gt;

&lt;p&gt;Once we have a way to create and manage the monitors, using them in the driver
is quite easy: as mentioned before, we only need to &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L222&quot;&gt;create a
monitor&lt;/a&gt; with the counters we are interested in and
&lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_job.c#L507&quot;&gt;attach it&lt;/a&gt; to the job to be submitted to the kernel. In order to
make things easier, we keep a &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_context.h#L307&quot;&gt;mirror-like version&lt;/a&gt; of the
monitor inside the driver.&lt;/p&gt;

&lt;p&gt;This approach is adequate when you are developing the driver, and you can add
code directly on it to check performance. But what about the final user, who is
writing an OpenGL application and wants to check how to improve its
performance, or check any bottleneck on it? We want the user to have a way to
use OpenGL for this.&lt;/p&gt;

&lt;p&gt;Fortunately, there is in fact a way to do this through OpenGL: the
&lt;a href=&quot;https://www.khronos.org/registry/OpenGL/extensions/AMD/AMD_performance_monitor.txt&quot;&gt;GL_AMD_performance_monitor&lt;/a&gt; extension. This OpenGL
extension provides an API to query what counters the hardware supports, to
create monitors, to start and stop them, and to retrieve the values. It looks
very similar to what we have described so far, except for an important
difference: the user needs to start and stop the monitors explicitly. We will
explain later why this is necessary. But the key point here is that when we
start a monitor, this means that from that moment on, until stopping it, any
job created and submitted to the kernel will have the identifier of that
monitor attached. This implies that only &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L209&quot;&gt;one monitor&lt;/a&gt; can
be enabled in the application at the same time. But this isn’t a problem, as
this restriction is part of the extension.&lt;/p&gt;
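
&lt;p&gt;The begin/end semantics can be sketched like this (a toy model of the behaviour just described, not the real GL entry points):&lt;/p&gt;

```python
# Toy model of the GL_AMD_performance_monitor begin/end semantics: while
# a monitor is active, every job submitted to the kernel gets tagged with
# its identifier, and only one monitor may be active at a time.

class Context:
    def __init__(self):
        self.active_monitor = None
        self.submitted = []          # (job, perfmon id) pairs

    def begin_monitor(self, ident):
        # only one monitor can be enabled at the same time
        assert self.active_monitor is None
        self.active_monitor = ident

    def end_monitor(self, ident):
        assert self.active_monitor == ident
        self.active_monitor = None

    def submit_job(self, job):
        # every submitted job carries the currently active monitor, if any
        self.submitted.append((job, self.active_monitor))

ctx = Context()
ctx.submit_job("clear")      # not measured
ctx.begin_monitor(7)
ctx.submit_job("draw")       # tagged with monitor 7
ctx.end_monitor(7)
ctx.submit_job("present")    # not measured
assert ctx.submitted == [("clear", None), ("draw", 7), ("present", None)]
```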

&lt;p&gt;Our driver does not implement this API directly, but through
&lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query.c&quot;&gt;“queries”&lt;/a&gt;, which are used then by the Gallium subsystem in Mesa
to implement the extension. For reference, the V3D driver (as well as the VC4)
is implemented as part of the Gallium subsystem. The &lt;a href=&quot;https://docs.mesa3d.org/gallium/index.html&quot;&gt;Gallium&lt;/a&gt; part
basically handles all the hardware-independent OpenGL functionality, and just
requires the driver hook functions to be implemented by the driver. If the
driver implements the proper functions, then Gallium exposes the right
extension (in this case, the GL_AMD_performance_monitor extension).&lt;/p&gt;

&lt;p&gt;For our case, it requires the driver to implement functions to return which
&lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L160&quot;&gt;counters are available&lt;/a&gt;, to &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L310&quot;&gt;create&lt;/a&gt; or
&lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L183&quot;&gt;destroy&lt;/a&gt; a query (in this case, the query is the same as the
monitor), &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L202&quot;&gt;start&lt;/a&gt; and &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L244&quot;&gt;stop&lt;/a&gt; the query, and once it is
finished, &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L272&quot;&gt;to get the results
back&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;At this point, I would like to explain a bit better what it implies to stop the
monitor and get the results back. As explained earlier, stopping the monitor or
query means that from that moment on, any new job submitted to the kernel (and
thus to the GPU) won’t contain a performance monitor identifier attached, and
hence won’t be measured. But it is important to know that the driver submits
jobs to the kernel at its own pace, and these aren’t executed
immediately; the GPU needs time to execute the jobs, so the kernel puts the
arriving jobs in a queue, to be submitted to the GPU. This means that when the user
stops the monitor, there could still be jobs in the queue that haven’t been
executed yet and are thus pending to be measured.&lt;/p&gt;

&lt;p&gt;And how do we know that the jobs have been executed by the GPU? The hook
function that implements getting the query results has a &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L272&quot;&gt;“wait”
parameter&lt;/a&gt;, which tells whether the function &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L282&quot;&gt;needs to wait&lt;/a&gt;
until all the pending jobs still to be measured have been executed. If it must
not wait but there are pending jobs, it just returns, informing the caller of
this fact. This allows the caller to do other work in the meantime and query
again later, instead of blocking until all the jobs have been executed. This is
implemented through sync objects. Every time a job is sent to the kernel, there
is a &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_context.h#L516&quot;&gt;sync
object&lt;/a&gt; that is used to signal when the job has finished
executing; this is mainly used as a way to synchronize jobs. In our
case, when the user finalizes the query we &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/blob/685281278ebd39114c3007e76443eaaa66cf833/src/gallium/drivers/v3d/v3d_query_perfcnt.c#L263&quot;&gt;save this fence&lt;/a&gt; for
the last submitted job, and we use it to know when this last job has been
executed.&lt;/p&gt;

&lt;p&gt;There are quite a few details I’m not covering here. If you are interested
though, you can take a look at the &lt;a href=&quot;https://gitlab.freedesktop.org/mesa/mesa/-/merge_requests/10666&quot;&gt;merge request&lt;/a&gt;.&lt;/p&gt;

&lt;h2 id=&quot;gallium-hud&quot;&gt;Gallium HUD&lt;/h2&gt;

&lt;p&gt;So far we have seen how the performance counters are implemented, and how to
use them. In all these cases, it requires writing code to create the
monitor/query, start/stop it, and query back the results, either in the
driver itself or in the application through the GL_AMD_performance_monitor
extension&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;.&lt;/p&gt;
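&lt;p&gt;To give an idea of what this looks like from the application side, here is a
minimal sketch of the GL_AMD_performance_monitor flow; the group and counter
indices are illustrative, error checking is omitted, and a current GL context
exposing the extension is assumed:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;GLuint monitor;
GLuint group = 0, counter = 0;  /* illustrative: first group, first counter */
GLuint available = 0;

glGenPerfMonitorsAMD(1, &amp;amp;monitor);
glSelectPerfMonitorCountersAMD(monitor, GL_TRUE, group, 1, &amp;amp;counter);

glBeginPerfMonitorAMD(monitor);
/* ... draw the work you want to measure ... */
glEndPerfMonitorAMD(monitor);

/* Poll for availability instead of blocking, as described above */
while (!available)
  glGetPerfMonitorCounterDataAMD(monitor, GL_PERFMON_RESULT_AVAILABLE_AMD,
                                 sizeof(available), &amp;amp;available, NULL);

/* Then fetch GL_PERFMON_RESULT_SIZE_AMD and GL_PERFMON_RESULT_AMD,
 * and finally glDeletePerfMonitorsAMD(1, &amp;amp;monitor); */
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;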

&lt;p&gt;But what if we want to get some general measurements without adding code to the
application or the driver? Fortunately, there is an &lt;a href=&quot;https://docs.mesa3d.org/envvars.html&quot;&gt;environment
variable&lt;/a&gt;, &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;GALLIUM_HUD&lt;/code&gt;, that, when set correctly, will show some graphs on top of the
application with the measured counters.&lt;/p&gt;

&lt;p&gt;Using it is very easy: set it to &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;help&lt;/code&gt; to learn how to use it, as well as to
get a list of the counters available for the current hardware.&lt;/p&gt;
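&lt;p&gt;For instance, to list the counters with any GL application (here &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;glxgears&lt;/code&gt;, just as a handy example):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ env GALLIUM_HUD=help glxgears
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;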

&lt;p&gt;As an example:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ env GALLIUM_HUD=L2T-CLE-reads,TLB-quads-passing-z-and-stencil-test,QPU-total-active-clk-cycles-vertex-coord-shading scorched3d
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;You will see:&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://blogs.igalia.com/jasuarez/assets/post_images/2021-09-01-v3d-gallium-hud.png&quot; alt=&quot;Performance Counters in Scorched 3D&quot; class=&quot;center-block&quot; /&gt;&lt;/p&gt;

&lt;p&gt;Bear in mind that to be able to use this you will need a kernel that supports
performance counters for V3D. At the time of writing, no kernel has been
released yet with this support. If you don’t want to wait for it, you can
download the &lt;a href=&quot;https://cgit.freedesktop.org/drm/drm/commit/?id=26a4dc29b74a137f456&quot;&gt;patch&lt;/a&gt;, apply it to your &lt;a href=&quot;https://github.com/raspberrypi/linux&quot;&gt;Raspberry Pi
kernel&lt;/a&gt; (it has been tested on the 5.12 branch), and &lt;a href=&quot;https://www.raspberrypi.org/documentation/computers/linux_kernel.html#building&quot;&gt;build and
install it&lt;/a&gt;.&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot;&gt;
      &lt;p&gt;All this applies to OpenGL; if your application uses
  Vulkan, there are similar extensions, which are not yet implemented
  in our V3DV driver at the time of writing this post. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Wed, 01 Sep 2021 00:00:00 +0200</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2021/09/01/v3d-perfcounters/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2021/09/01/v3d-perfcounters/</guid>
        
        <category>graphics</category>
        
        <category>raspberrypi</category>
        
        
        <category>Freedesktop</category>
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>Grilo, Travis CI and Containers</title>
<description>&lt;p&gt;Good news! Finally, we are using containers in &lt;a href=&quot;https://travis-ci.org&quot;&gt;Travis CI&lt;/a&gt;
for &lt;a href=&quot;https://wiki.gnome.org/Projects/Grilo&quot;&gt;Grilo&lt;/a&gt;! This is something I had been trying for a while, and we have
finally achieved it. I must say that a &lt;a href=&quot;https://www.bassi.io/articles/2017/02/11/epoxy&quot;&gt;post Bassi wrote&lt;/a&gt; was the trigger for
getting into this. So all my kudos to him!&lt;/p&gt;

&lt;p&gt;In this post I’ll explain the history behind using Travis CI for Grilo continuous integration.&lt;/p&gt;

&lt;h2 id=&quot;the-origin&quot;&gt;The origin&lt;/h2&gt;

&lt;p&gt;It all started one day when, exploring how GitHub integrates with other services,
I discovered &lt;a href=&quot;https://travis-ci.org&quot;&gt;Travis CI&lt;/a&gt;. As you may know, Travis is a continuous
integration service that checks every commit of a project in GitHub, and for
each one it starts a testing process. Roughly, it starts a “virtual machine”&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;
running Ubuntu&lt;sup id=&quot;fnref:2&quot;&gt;&lt;a href=&quot;#fn:2&quot; class=&quot;footnote&quot; rel=&quot;footnote&quot; role=&quot;doc-noteref&quot;&gt;2&lt;/a&gt;&lt;/sup&gt;, clones the repository at that commit under test, and runs a
set of commands defined in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.travis.yml&lt;/code&gt; file, located in the same project
GitHub repository. Besides the steps to execute the tests, that file contains
the instructions on how to build the project, as well as which dependencies are
required.&lt;/p&gt;

&lt;p&gt;Note that before Travis, instead of a continuous integration system in Grilo we
had a ‘discontinuous’ one: running the checks manually, from time to time. So a
commit could introduce a bug, and we wouldn’t realize it until we ran the next
check, which could happen way later. Thus, when I found Travis, I thought it
would be a good idea to use it.&lt;/p&gt;

&lt;p&gt;Setting up &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.travis.yml&lt;/code&gt; for Grilo was quite easy: in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;before_install&lt;/code&gt;
section we just use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;apt-get&lt;/code&gt; to install all requirements: &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;libglib2.0-dev&lt;/code&gt;,
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;libxml2-dev&lt;/code&gt;, and so on. And then, in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;script&lt;/code&gt; section we run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;autogen.sh&lt;/code&gt;
and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;make&lt;/code&gt;. If nothing fails, we consider the test successful. We do not run
any specific tests because we don’t have any in Grilo.&lt;/p&gt;
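&lt;p&gt;As a rough sketch (the package list is shortened, and the details differed
over time; the real files are linked below), such a &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.travis.yml&lt;/code&gt; looked something like:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;language: c

before_install:
  - sudo apt-get update -qq
  - sudo apt-get install -y libglib2.0-dev libxml2-dev

script:
  - ./autogen.sh
  - make
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;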

&lt;p&gt;For the plugins, the same steps: install the dependencies, configure and build
the plugins. In this case, we also run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;make check&lt;/code&gt;, so the tests always run. Again,
if nothing fails Travis gives us a green light. Otherwise, a red one. The status
is shown on the &lt;a href=&quot;https://wiki.gnome.org/Projects/Grilo&quot;&gt;main web page&lt;/a&gt;. Also, if the tests fail, an email is sent
to the commit author.&lt;/p&gt;

&lt;p&gt;Now, this has a small problem when testing the plugins: they require Grilo, and
we were relying on the package provided by Ubuntu (it is listed in the
dependencies). But what happens if the current commit uses a feature that was
added in Grilo upstream, but not released yet? One option could be cloning,
building and installing Grilo core before the plugins, and then compiling the
plugins against this version. This means that for each commit in the plugins we
would need to build two projects, adding a lot of complexity to the Travis
file. So we decided to go with a different approach: just create a Grilo package
with the required unreleased Grilo core version (only for testing), and put it
in a &lt;a href=&quot;https://launchpad.net/~grilo-team/+archive/ubuntu/travis&quot;&gt;PPA&lt;/a&gt;. Then we can add that PPA in our &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.travis.yml&lt;/code&gt; file and
use that version instead.&lt;/p&gt;
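&lt;p&gt;Adding the PPA boils down to a couple of extra lines in the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;before_install&lt;/code&gt; section, something like:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;before_install:
  - sudo add-apt-repository -y ppa:grilo-team/travis
  - sudo apt-get update -qq
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;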

&lt;p&gt;A similar problem happens with Grilo itself: sometimes we require a specific
version of a package that is not available in the Ubuntu version used by Travis
(Ubuntu 12.04). So we need to backport it from a more recent Ubuntu version, and
add it to the same PPA.&lt;/p&gt;

&lt;p&gt;Summing up, our &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;.travis.yml&lt;/code&gt; files just add the PPA, install the required
dependencies, and build and test everything. You can take a look at
the &lt;a href=&quot;https://git.gnome.org/browse/grilo/tree/.travis.yml?id=ce1fa94cc8759616a6aacfe94c09ffbf3432f7c0&quot;&gt;core&lt;/a&gt; and &lt;a href=&quot;https://git.gnome.org/browse/grilo-plugins/tree/.travis.yml?id=f93e959f0243f5207dd23bbed21b8be20dfa76b4&quot;&gt;plugins&lt;/a&gt;
files.&lt;/p&gt;

&lt;h2 id=&quot;travis-and-the-peter-pan-syndrome&quot;&gt;Travis and the Peter Pan syndrome&lt;/h2&gt;

&lt;p&gt;Time passed, and we kept adding more features, new plugins, fixing problems, adding
new requirements or bumping up the required versions… but Travis continued
using Ubuntu 12.04. My first thought was &lt;em&gt;“OK, maybe Travis wants to rely only
on LTS releases”&lt;/em&gt;. So we needed to wait until the next LTS was released, and
meanwhile backport everything we needed. Needless to say, doing this became
more and more complicated as time passed. Sometimes backporting a single
dependency requires backporting a lot of other dependencies, which can end up in
a bloody nightmare. &lt;em&gt;“Only for a while, until the new LTS is released”&lt;/em&gt;, I
repeated to myself.&lt;/p&gt;

&lt;p&gt;And good news! Ubuntu 14.04, the new LTS, is released. But you know what? Travis
is not updated, and still uses the old LTS! What the hell!&lt;/p&gt;

&lt;p&gt;Moreover, two years after this release, Ubuntu 16.04 LTS is also released,
and Travis still uses 12.04!&lt;/p&gt;

&lt;p&gt;At that point, backporting was so complex that I basically gave up. And
continuous integration was essentially broken.&lt;/p&gt;

&lt;h2 id=&quot;travis-and-the-containers&quot;&gt;Travis and the containers&lt;/h2&gt;

&lt;p&gt;And we stayed in this broken state until I read that Travis was adding support for
containers. &lt;em&gt;“This is what we need”&lt;/em&gt;. But the truth is that even though I
knew it would fix all the problems, I wasn’t very sure how to use the new feature. I
tried several approaches, but I wasn’t happy with any of them.&lt;/p&gt;

&lt;p&gt;Until &lt;a href=&quot;https://twitter.com/ebassi&quot;&gt;Emmanuele Bassi&lt;/a&gt; published
a &lt;a href=&quot;https://www.bassi.io/articles/2017/02/11/epoxy&quot;&gt;post about using Meson in Epoxy&lt;/a&gt;. That post included an
explanation of how to use Docker containers in Travis, which cleared up all the doubts
I had, and allowed me to finally move to containers. So again, thank you,
Emmanuele!&lt;/p&gt;

&lt;p&gt;What’s the idea? First, we created a &lt;a href=&quot;https://hub.docker.com/r/grilofw/grilo&quot;&gt;Docker container&lt;/a&gt; that
has all the requirements to build Grilo and the plugins preinstalled. We tagged
this image as &lt;em&gt;base&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;When Travis is going to test Grilo, we instruct it to build a new container,
based on &lt;em&gt;base&lt;/em&gt;, that builds and installs Grilo. If everything goes fine, then
our continuous integration is successful, and Travis gives a green light. Otherwise
it gives a red one. Exactly as it happened in the old approach.&lt;/p&gt;

&lt;p&gt;But we don’t stop here. If everything goes fine, we push the new container to the
Docker registry, tagging it as &lt;em&gt;core&lt;/em&gt;. Why? Because this is the image we will
use for building the plugins.&lt;/p&gt;
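&lt;p&gt;In Docker terms, the idea can be sketched like this (a simplified
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;Dockerfile&lt;/code&gt; and commands, not the exact ones we use; the real files are linked later in this post):&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;# Dockerfile: build Grilo on top of the prebuilt &quot;base&quot; image
FROM grilofw/grilo:base
ADD . /grilo
RUN cd /grilo &amp;amp;&amp;amp; ./autogen.sh &amp;amp;&amp;amp; make &amp;amp;&amp;amp; make install

# And if the build succeeds, push the result as the &quot;core&quot; image:
#   docker build -t grilofw/grilo:core .
#   docker push grilofw/grilo:core
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;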

&lt;p&gt;And in the case of the plugins we do exactly the same as for the core. But this time,
instead of relying on the &lt;em&gt;base&lt;/em&gt; image, we rely on the &lt;em&gt;core&lt;/em&gt; one. This way, we
always use an image that has an up-to-date version of Grilo, so we don’t need
to package it when introducing new features. Only if either Grilo or the plugins
require a &lt;strong&gt;new dependency&lt;/strong&gt; do we need to build a new &lt;em&gt;base&lt;/em&gt; image and push
it. That’s all.&lt;/p&gt;

&lt;p&gt;Also, as a plus, instead of discarding the container that contains the plugins,
we push it to Docker too, tagged as &lt;em&gt;latest&lt;/em&gt;. So anyone can just pull it with Docker
to have a container in which to run and test Grilo and all the plugins.&lt;/p&gt;

&lt;p&gt;If interested, you can take a look at the &lt;a href=&quot;https://git.gnome.org/browse/grilo/tree/.travis.yml?id=fcdcd29b1bc6aec03f57dac39b7b5a7df60c8cae&quot;&gt;core&lt;/a&gt;
and &lt;a href=&quot;https://git.gnome.org/browse/grilo-plugins/tree/.travis.yml?id=8a8f1a829cc222230ca16aa0a5f522dfb394225d&quot;&gt;plugins&lt;/a&gt; files to check what this looks like.&lt;/p&gt;

&lt;p&gt;Oh! Last but not least. This also helped us to test building both
with &lt;a href=&quot;https://en.wikipedia.org/wiki/GNU_Build_System&quot;&gt;Autotools&lt;/a&gt; and with &lt;a href=&quot;http://mesonbuild.com&quot;&gt;Meson&lt;/a&gt;, both supported in Grilo. Which
is really awesome.&lt;/p&gt;

&lt;p&gt;Summing up, moving to containers provides a lot of flexibility, and makes things
quite a bit easier.&lt;/p&gt;

&lt;p&gt;Please, leave any comment or question either in &lt;a href=&quot;https://www.facebook.com/jsuarezr/posts/1633824283311177&quot;&gt;Facebook&lt;/a&gt; or &lt;a href=&quot;https://plus.google.com/+jasuarez/posts/df1d8wufjZj&quot;&gt;Google+&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;span style=&quot;color:green&quot;&gt;&lt;em&gt;UPDATE (26/11/2017):&lt;/em&gt; comments are available
again.&lt;/span&gt;&lt;/p&gt;

&lt;div class=&quot;footnotes&quot; role=&quot;doc-endnotes&quot;&gt;
  &lt;ol&gt;
    &lt;li id=&quot;fn:1&quot;&gt;
      &lt;p&gt;Let’s call Virtual Machine, container, whatever. In this context it doesn’t matter. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
    &lt;li id=&quot;fn:2&quot;&gt;
      &lt;p&gt;Ubuntu 12.04 LTS, to be exact. &lt;a href=&quot;#fnref:2&quot; class=&quot;reversefootnote&quot; role=&quot;doc-backlink&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
    &lt;/li&gt;
  &lt;/ol&gt;
&lt;/div&gt;
</description>
        <pubDate>Thu, 09 Mar 2017 00:00:00 +0100</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2017/03/09/grilo-travis-containers/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2017/03/09/grilo-travis-containers/</guid>
        
        <category>grilo</category>
        
        <category>multimedia</category>
        
        
        <category>GNOME</category>
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>New Year, New Blog!</title>
<description>&lt;p&gt;A new year has come! And with the new year, the usual resolutions: be a better
person, do more exercise, blog more, … :smile:&lt;/p&gt;

&lt;p&gt;More than two years without blogging. A lot of time. So let’s start with the last
one.&lt;/p&gt;

&lt;p&gt;But I also wanted to do a clean restart, and entirely reboot my blog. This is
something I had been thinking about during the last months, especially after &lt;a href=&quot;https://blogs.igalia.com/mrego/&quot;&gt;Rego&lt;/a&gt; moved
his blog to &lt;a href=&quot;http://jekyllrb.com&quot;&gt;Jekyll&lt;/a&gt;. I really like how clean and simple it looks.&lt;/p&gt;

&lt;p&gt;I have (or rather, had) my blog hosted on a WordPress server. &lt;a href=&quot;https://wordpress.org&quot;&gt;WordPress&lt;/a&gt; is a
very popular and also, why not, very good system to create and manage website
content and blogs. It is entirely an online service, where you create your
content, and it is served to the world. And there are tons of plugins that
allow you to do practically whatever you need.&lt;/p&gt;

&lt;p&gt;But to be honest, it seems too much of a cathedral for me. I just wanted something
simpler, that serves the content I have, and nothing else. Especially after
reading many times about bugs and different security problems it has had
(fortunately, most of them fixed quickly). Nevertheless, I’m very lucky in this
regard, because the WordPress server I was using is hosted by &lt;a href=&quot;http://www.igalia.com&quot;&gt;Igalia&lt;/a&gt;, and their
sysadmins are awesome professionals who keep everything updated. But still, I
can’t avoid the feeling that there is still a risk, and that it implies our
sysadmins spending time to maintain it.&lt;/p&gt;

&lt;p&gt;Besides that, there were other reasons that made me consider a static blog
system (again, these reasons are entirely for my personal case).&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
    &lt;p&gt;As said, it is too big just for keeping a simple blog. Using a static blog seems
more suitable.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I’m a FLOSS person, and as such, I really like to know what’s going on under
the hood. And in this case it is something I don’t really know. Yes, WordPress
is free software, and the source code is available out there. What I mean is
that I can’t go to the server and freely change things, because I’m not the
sysadmin. If I want to use a modified version of a plugin, I can’t do
it. Basically, I see it as a service like others in the cloud, that you just
use and full stop. On the other hand, static blogging is different: you have
the source code, which you can inspect or modify (like I did for this blog),
and which runs on your own host. And once the final content is generated, it can
be served by any webserver. No need to install anything at all on the server.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;I’m a developer, and as such, I like the process of &lt;em&gt;writing&lt;/em&gt; something
in plain text, &lt;em&gt;compiling&lt;/em&gt; it, and getting a &lt;em&gt;final result&lt;/em&gt; I can use. Which is
something that perfectly matches the way static blogging works: you
write your posts in plain text (usually in &lt;a href=&quot;http://daringfireball.net/projects/markdown&quot;&gt;MarkDown&lt;/a&gt; or &lt;a href=&quot;http://docutils.sourceforge.net/rst.html&quot;&gt;reStructuredText&lt;/a&gt;),
you run a &lt;a href=&quot;https://www.staticgen.com&quot;&gt;generator&lt;/a&gt; which transforms the posts into HTML + CSS, and you get
everything inside a directory. You only need to copy the generated content to
the proper webserver. Nothing else.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;A very important one for me, which I think triggered the change from WordPress:
I really like &lt;a href=&quot;https://git-scm.com&quot;&gt;Git&lt;/a&gt;. And I really missed having my posts under Git. But
now, as posts are just plain text files, I can easily handle them with Git: I
can push, amend, branch, and even accept fixes through pull requests!&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Probably there are more reasons that made me switch from WordPress. But those
above are enough.&lt;/p&gt;

&lt;p&gt;So the next question was: which one? There are a lot of
different &lt;a href=&quot;https://www.staticgen.com&quot;&gt;static site generators&lt;/a&gt;. A long time ago I had made some shy
attempts with &lt;a href=&quot;https://blog.getpelican.com&quot;&gt;Pelican&lt;/a&gt; first, and with &lt;a href=&quot;http://jekyllrb.com&quot;&gt;Jekyll&lt;/a&gt; later. As Jekyll is the most
popular one, I decided to go with it. It has a lot of plugins that cover all my
needs, and a very big community. For sure it is not the fastest one, but I don’t
mind spending a few minutes generating content if required. The good thing is
that in the future I can move to a different generator if needed, and just reuse
the same posts.&lt;/p&gt;

&lt;p&gt;Once I decided to use Jekyll, a crucial question came up: which theme? Themes
define how your content looks. For sure, I wanted something simple,
like &lt;a href=&quot;https://blogs.igalia.com/mrego/&quot;&gt;Rego’s blog&lt;/a&gt;, but not the same theme. I’m not a designer, so doing
it myself from scratch was discarded. I could buy a theme from a professional
designer, but I think it is too early to do that at this point. Maybe in the
future. Thus, I spent several days trying different free themes, checking how
they look, until I found one that suited what I
wanted: the &lt;a href=&quot;https://github.com/dirkfabisch/mediator&quot;&gt;Mediator theme&lt;/a&gt;. I used it as a starting point, fixing some
problems (most of those fixes were merged into the original theme) and doing some
modifications to adapt it to my own wishes, and voilà! What you see here is the
final result.&lt;/p&gt;

&lt;p&gt;What’s next? Very likely, search for a way of allowing comments. This is not
handled natively in Jekyll, but usually through third-party services, like
&lt;a href=&quot;https://disqus.com&quot;&gt;Disqus&lt;/a&gt; or &lt;a href=&quot;http://www.discourse.org&quot;&gt;Discourse&lt;/a&gt;. I could use any of them, but again, I would be in the
same situation as with WordPress.&lt;/p&gt;

&lt;p&gt;So for now, I’ll leave a couple of links in &lt;a href=&quot;https://www.facebook.com/jsuarezr/posts/1573891925971080&quot;&gt;Facebook&lt;/a&gt; and &lt;a href=&quot;https://plus.google.com/+jasuarez/posts/e2v3YD3mGbL&quot;&gt;Google+&lt;/a&gt; where
people can leave comments.&lt;/p&gt;

&lt;p&gt;Happy new year!&lt;/p&gt;

&lt;p&gt;&lt;span style=&quot;color:green&quot;&gt;&lt;em&gt;UPDATE (26/11/2017):&lt;/em&gt; comments are available
again.&lt;/span&gt;&lt;/p&gt;

</description>
        <pubDate>Thu, 12 Jan 2017 00:00:00 +0100</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2017/01/12/new-year-new-blog/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2017/01/12/new-year-new-blog/</guid>
        
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>Highlights in Grilo 0.2.11 (and Plugins 0.2.13)</title>
        <description>&lt;p&gt;Hello, readers!&lt;/p&gt;

&lt;p&gt;Some weeks ago we released a new version of &lt;a href=&quot;https://wiki.gnome.org/Projects/Grilo&quot;&gt;Grilo&lt;/a&gt; and the Plugins set (yes, it
sounds like a 70’s music group :smile:). You can read the
announcement &lt;a href=&quot;https://mail.gnome.org/archives/grilo-list/2014-August/msg00000.html&quot;&gt;here&lt;/a&gt; and &lt;a href=&quot;https://mail.gnome.org/archives/grilo-list/2014-August/msg00001.html&quot;&gt;here&lt;/a&gt;. If you are more
curious about all the detailed changes done, you can take a look at the
Changelog &lt;a href=&quot;https://download.gnome.org/sources/grilo/0.2/grilo-0.2.11.changes&quot;&gt;here&lt;/a&gt; and &lt;a href=&quot;https://download.gnome.org/sources/grilo-plugins/0.2/grilo-plugins-0.2.13.changes&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;But even when you can read that information in the above links, it is always a
pleasure if someone highlights the main changes. So let’s go!&lt;/p&gt;

&lt;h2 id=&quot;launch-tool&quot;&gt;Launch Tool&lt;/h2&gt;

&lt;p&gt;Regarding the core system, among the typical bug fixes, I would highlight a new
tool: &lt;strong&gt;grl-launch&lt;/strong&gt;. This tool, like others, got its inspiration
from &lt;a href=&quot;http://gstreamer.freedesktop.org&quot;&gt;GStreamer&lt;/a&gt;’s &lt;a href=&quot;http://docs.gstreamer.com/display/GstSDK/gst-launch&quot;&gt;gst-launch&lt;/a&gt;. So far, when you wanted to
do some operation in Grilo, like performing a search in YouTube or getting the
title of a video on disk, the recommended way was using the Grilo Test UI. This is a
basic application that allows you to perform the typical operations in Grilo,
like browsing or searching, everything from a graphical interface. The
problem is that this tool is not flexible enough, so you can’t control all the
details you could require. It is also useful to visually check the results,
but not to export them to be managed with another tool.&lt;/p&gt;

&lt;p&gt;So while the Test UI is still very useful, to cover the other cases we have
grl-launch. It is a command-line tool that allows you to perform most of
the operations available in Grilo, with a great degree of control: you can browse,
search, resolve details of a Grilo media element, and so on, controlling how
many elements to skip or return, the metadata keys (title, author, album, …) to
retrieve, the flags to use, etc.&lt;/p&gt;

&lt;p&gt;And on top of that, the results can be exported directly to a &lt;a href=&quot;http://en.wikipedia.org/wiki/Comma-separated_values&quot;&gt;CSV&lt;/a&gt; file so it
can be loaded later in a spreadsheet.&lt;/p&gt;

&lt;p&gt;As an example, getting the first 10 trailers from &lt;a href=&quot;http://trailers.apple.com&quot;&gt;Apple’s iTunes Movie&lt;/a&gt;
Trailers site:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;$ grl-launch-0.2 browse -c 10 -k title,url grl-apple-trailers
23 Blast,http://trailers.apple.com/movies/independent/23blast/23blast-tlr_h480p.mov
A Most Wanted Man,http://trailers.apple.com/movies/independent/amostwantedman/amostwantedman-tlr1_h480p.mov
ABC&apos;s of Death 2,http://trailers.apple.com/movies/magnolia_pictures/abcsofdeath2/abcsofdeath2-tlr3_h480p.mov
About Alex,http://trailers.apple.com/movies/independent/aboutalex/aboutalex-tlr1b_h480p.mov
Addicted,http://trailers.apple.com/movies/lionsgate/addicted/addicted-tlr1_h480p.mov
&quot;Alexander and the Terrible, Horrible, No Good, Very Bad Day&quot;,http://trailers.apple.com/movies/disney/alexanderterribleday/alexanderterribleday-tlr1_h480p.mov
Annabelle,http://trailers.apple.com/movies/wb/annabelle/annabelle-tlr1_h480p.mov
Annie,http://trailers.apple.com/movies/sony_pictures/annie/annie-tlr2_h480p.mov
Are You Here,http://trailers.apple.com/movies/independent/areyouhere/areyouhere-tlr1_h480p.mov
As Above / So Below,http://trailers.apple.com/movies/universal/asabovesobelow/asabovesobelow-tlr1_h480p.mov
10 results
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;As said, if you redirect the output to a file and import it as CSV into a
spreadsheet program, it will be easier to read.&lt;/p&gt;

&lt;h2 id=&quot;dleynaupnp-plugin&quot;&gt;dLeyna/UPnP plugin&lt;/h2&gt;

&lt;p&gt;Regarding the plugins, here is where the fun takes place. Almost all plugins
were touched in some way or other, in most cases to fix bugs. But there
are other changes I’d like to highlight, and among them, UPnP is the one that
underwent the biggest changes.&lt;/p&gt;

&lt;p&gt;Well, strictly speaking, there is no UPnP plugin any more. Rather, it was replaced
by the new dLeyna plugin, written mainly by &lt;a href=&quot;http://nerd.ocracy.org/em&quot;&gt;Emanuele Aina&lt;/a&gt;. From a user’s point of
view, there shouldn’t be big differences, as this new plugin also provides
access to UPnP/DLNA sources. So where are the differences?&lt;/p&gt;

&lt;p&gt;First off, let’s specify what &lt;a href=&quot;https://01.org/dleyna&quot;&gt;dLeyna&lt;/a&gt; is. So far, if you wanted to interact with
a UPnP source, either you needed to deal with the protocol, or use some low-level
library, like &lt;a href=&quot;https://wiki.gnome.org/Projects/GUPnP&quot;&gt;&lt;em&gt;gupnp&lt;/em&gt;&lt;/a&gt;. This is what the UPnP plugin was doing. It is still
a rather low-level API, but higher and better than dealing with the raw
protocol.&lt;/p&gt;

&lt;p&gt;On the other hand, dLeyna, written by
the &lt;a href=&quot;https://01.org&quot;&gt;Intel Open Source Technology Center&lt;/a&gt;, wraps the UPnP sources with a
D-Bus layer. Actually, not only sources, but also UPnP media renderers and
controllers, though in our case we are only interested in the UPnP
sources. Thanks to dLeyna, you no longer need to interact with low-level
UPnP, but with a higher-level D-Bus service, similar to the way we interact with
other services in GNOME or on other platforms. This makes it easier to browse or
search UPnP sources, and allows us to add new features. dLeyna also hides some
details specific to each UPnP server that are of no interest to us, but that we
would need to deal with if using a lower-level API. The truth is that
though UPnP is quite well specified, implementations don’t follow it
100%: there are always slight differences that create nasty bugs. In this case,
dLeyna acts (or should act) as a protection, dealing with those
differences itself.&lt;/p&gt;

&lt;p&gt;And what is needed to use this new plugin? Basically, having the dleyna-service
D-Bus service installed. When the plugin starts, it wakes up the service, which
exposes all the available UPnP servers in the network, and the plugin in turn
exposes them as Grilo sources. Everything works as it did with the previous
UPnP plugin.&lt;/p&gt;

&lt;p&gt;In any case, I still keep a &lt;a href=&quot;https://github.com/jasuarez/grilo-upnp-plugin&quot;&gt;copy of the old UPnP plugin&lt;/a&gt; for
reference, in case someone wants to use it or take a look. It is in
“unmaintained” mode, so try to use the new dLeyna plugin instead.&lt;/p&gt;

&lt;h2 id=&quot;lua-factory-plugin&quot;&gt;Lua Factory plugin&lt;/h2&gt;

&lt;p&gt;There aren’t big changes here, except fixes. But I want to mention it because
it is where most of the activity is happening. I must thank &lt;a href=&quot;http://www.hadess.net&quot;&gt;Bastien&lt;/a&gt; and &lt;a href=&quot;http://www.victortoso.com&quot;&gt;Victor&lt;/a&gt; for
the work they are doing here. Just to refresh: this plugin allows executing
sources written in &lt;a href=&quot;http://www.lua.org&quot;&gt;Lua&lt;/a&gt;. That is, instead of writing your sources in GObject/C,
you can use Lua; the Lua Factory plugin will load and run them. Writing plugins
in Lua is a pleasure, as it allows you to focus on solving the real problems and
leave the boilerplate details to the factory. Honestly, if you are considering
writing a new source, I would really think about writing it in Lua.&lt;/p&gt;

&lt;p&gt;And that’s all! It is a longer post than usual, but it is nice to explain what’s
going on in Grilo. And remember, if you are considering using Grilo in your
product, don’t hesitate to &lt;a href=&quot;http://www.igalia.com/contact&quot;&gt;contact us&lt;/a&gt;.&lt;/p&gt;

</description>
        <pubDate>Mon, 29 Sep 2014 00:00:00 +0200</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2014/09/29/highlights-in-grilo-0-2-11/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2014/09/29/highlights-in-grilo-0-2-11/</guid>
        
        <category>grilo</category>
        
        <category>multimedia</category>
        
        
        <category>GNOME</category>
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>Another year, another GUADEC</title>
        <description>&lt;p&gt;It’s 2014, and like previous years:&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;https://2014.guadec.org&quot;&gt;&lt;img src=&quot;https://blogs.igalia.com/jasuarez/assets/post_images/2014-07-24-guadec2014.png&quot; alt=&quot;GUADEC 2014&quot; class=&quot;center-block&quot; /&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This time I won’t give any talk; I’ll just relax,
enjoy the &lt;a href=&quot;http://www.igalia.com/nc/igalia-247/news/item/meet-us-at-guadec-2014-strasbourg-july-26-august-1&quot;&gt;talks from others&lt;/a&gt;, and enjoy &lt;a href=&quot;http://www.strasbourg.eu&quot;&gt;Strasbourg&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;And, more importantly, meet the hackers you interact with frequently, and
maybe share some beers.&lt;/p&gt;

&lt;p&gt;So if you go there and want to have a nice chat with me, or talk about
the &lt;a href=&quot;https://wiki.gnome.org/Projects/Grilo&quot;&gt;Grilo&lt;/a&gt; project, don’t hesitate to do so. &lt;a href=&quot;http://www.igalia.com&quot;&gt;Igalia&lt;/a&gt;, which is kindly sponsoring
my attendance, will have a booth there during the core days, so you can likely
find me around, or ask anyone there for me.&lt;/p&gt;

&lt;p&gt;Enjoy!&lt;/p&gt;

</description>
        <pubDate>Thu, 24 Jul 2014 00:00:00 +0200</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2014/07/24/another-year-another-guadec/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2014/07/24/another-year-another-guadec/</guid>
        
        <category>guadec</category>
        
        
        <category>GNOME</category>
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>Yum Search Extended</title>
        <description>&lt;p&gt;Hi again! Let me tell you something. I’ve been a &lt;a href=&quot;https://fedoraproject.org&quot;&gt;Fedora&lt;/a&gt; user for several
releases now, probably since Fedora 13 or 14.&lt;/p&gt;

&lt;p&gt;Before that, I was using &lt;a href=&quot;https://www.ubuntu.com&quot;&gt;Ubuntu&lt;/a&gt;, but decided to switch to Fedora for several
reasons that are not worth explaining here. In any case, after switching to
Fedora there was something I missed quite a lot:
the &lt;a href=&quot;https://wiki.debian.org/Aptitude&quot;&gt;aptitude package manager&lt;/a&gt;. aptitude is a deb package manager,
similar to &lt;a href=&quot;https://wiki.debian.org/Apt&quot;&gt;apt&lt;/a&gt;. What I really like about aptitude is its flexibility when
searching for packages.&lt;/p&gt;

&lt;p&gt;While apt and &lt;a href=&quot;http://yum.baseurl.org&quot;&gt;yum&lt;/a&gt; let you specify the search term, they just return all the
packages matching the search text; they don’t let you choose where to search. Do
you want to get only packages that are not installed? Or do you just remember
that the package had &lt;em&gt;python&lt;/em&gt; in the name, and part of its description? With aptitude
this is not a problem, as it allows you to specify such search expressions.&lt;/p&gt;

&lt;p&gt;Though searching in yum is not as flexible, as far as I know, it has a nice
feature: it supports &lt;a href=&quot;http://yum.baseurl.org/wiki/WritingYumPlugins&quot;&gt;plugins&lt;/a&gt; that implement new functionality. So several
months ago I wrote a plugin to mimic aptitude’s search
flexibility: &lt;a href=&quot;https://github.com/jasuarez/yum-plugin-searchex&quot;&gt;&lt;strong&gt;yum searchex (search extended)&lt;/strong&gt;&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;It is worth saying that I didn’t want to imitate the full aptitude
functionality; only those features that I really missed from Ubuntu.&lt;/p&gt;

&lt;p&gt;The basic idea is to specify, for each term, where to search. This is done by
prefixing the text with &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~&lt;/code&gt; and a letter that expresses where to search. In some
cases, no search text is needed. For instance, to search only in the
list of installed packages, we would use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;~i&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;The full list of the available options can be found in
the &lt;a href=&quot;https://github.com/jasuarez/yum-plugin-searchex/blob/master/README.md&quot;&gt;project forge&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;As an example is worth a thousand words, let’s search for a package that
we know contains &lt;em&gt;python&lt;/em&gt; in its name, is not installed, and, as we
remember, has something to do with KDE:&lt;/p&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;yum searchex ~apython~dKDE
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Hope this plugin is as useful for you as it is for me!&lt;/p&gt;

</description>
        <pubDate>Thu, 06 Mar 2014 00:00:00 +0100</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2014/03/06/yum-search-extended/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2014/03/06/yum-search-extended/</guid>
        
        <category>fedora</category>
        
        <category>yum</category>
        
        
        <category>GNOME</category>
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>See you at GUADEC 2013!</title>
        <description>&lt;p&gt;&lt;a href=&quot;https://www.guadec.org&quot;&gt;GUADEC&lt;/a&gt; 2013 is around the corner.&lt;/p&gt;

&lt;p&gt;&lt;img src=&quot;https://blogs.igalia.com/jasuarez/assets/post_images/2013-07-22-guadec2013.png&quot; alt=&quot;GUADEC 2013&quot; class=&quot;center-block&quot; /&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href=&quot;http://www.igalia.com&quot;&gt;Igalia&lt;/a&gt; is kindly sponsoring my attendance, as well as that of other mates, at this
wonderful conference, where every year I meet good friends and make new ones.&lt;/p&gt;

&lt;p&gt;I’ll be there from August 1st to 5th, both included. On Saturday 3rd I’ll give
a talk about &lt;a href=&quot;https://wiki.gnome.org/Grilo&quot;&gt;Grilo&lt;/a&gt;. If you are using Grilo, or are willing to, join us at the
talk. Of course, I always welcome any question, so if you see me and want to ask
anything, don’t hesitate to approach me.&lt;/p&gt;

&lt;p&gt;Also, I expect to attend the &lt;a href=&quot;https://wiki.gnome.org/Hackfests/Music2013&quot;&gt;gnome-music BoF&lt;/a&gt; on the 5th. gnome-music is one of the
programs I collaborate with that heavily uses Grilo. I really suggest giving it a
try. It’s so nice!&lt;/p&gt;

&lt;p&gt;Besides all the above, Igalia will have a booth during the whole event, where we
will be showing some of the cool things we do. I still don’t know where exactly
it will be located, but if you see us, come by!&lt;/p&gt;

</description>
        <pubDate>Mon, 22 Jul 2013 00:00:00 +0200</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2013/07/22/see-you-at-guadec-2013/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2013/07/22/see-you-at-guadec-2013/</guid>
        
        <category>grilo</category>
        
        <category>guadec</category>
        
        
        <category>GNOME</category>
        
        <category>Igalia</category>
        
      </item>
    
      <item>
        <title>Grilo 0.2.8 released</title>
        <description>&lt;p&gt;I did a new &lt;a href=&quot;https://mail.gnome.org/archives/grilo-list/2013-May/msg00017.html&quot;&gt;release of grilo plugins&lt;/a&gt; only one week after
the &lt;a href=&quot;https://mail.gnome.org/archives/grilo-list/2013-May/msg00012.html&quot;&gt;previous release&lt;/a&gt; because it includes a &lt;a href=&quot;https://bugzilla.gnome.org/show_bug.cgi?id=700517&quot;&gt;patch&lt;/a&gt; that people from
&lt;a href=&quot;https://live.gnome.org/GnomePhotos&quot;&gt;gnome-photos&lt;/a&gt; would like to see in the next &lt;a href=&quot;http://www.gnome.org&quot;&gt;GNOME&lt;/a&gt; 3.9 pre-release.&lt;/p&gt;

&lt;p&gt;Besides that patch, this new release also includes a new plugin to get content
from the &lt;a href=&quot;http://magnatune.com&quot;&gt;Magnatune&lt;/a&gt; service. Credits go to &lt;a href=&quot;http://www.victortoso.com&quot;&gt;Victor Toso&lt;/a&gt;, who did a great job
on it.&lt;/p&gt;

&lt;p&gt;Happy weekend from the &lt;a href=&quot;http://www.igalia.com&quot;&gt;Igalia&lt;/a&gt; headquarters!&lt;/p&gt;

</description>
        <pubDate>Sat, 25 May 2013 00:00:00 +0200</pubDate>
        <link>https://blogs.igalia.com/jasuarez/2013/05/25/grilo-plugins-0-2-8-released/</link>
        <guid isPermaLink="true">https://blogs.igalia.com/jasuarez/2013/05/25/grilo-plugins-0-2-8-released/</guid>
        
        <category>grilo</category>
        
        <category>multimedia</category>
        
        
        <category>GNOME</category>
        
        <category>Igalia</category>
        
      </item>
    
  </channel>
</rss>
