{"id":36,"date":"2007-02-13T11:26:36","date_gmt":"2007-02-13T10:26:36","guid":{"rendered":"http:\/\/blogs.igalia.com\/itoral\/?p=36"},"modified":"2007-02-13T11:26:36","modified_gmt":"2007-02-13T10:26:36","slug":"howto-integrate-unit-tests-with-buildbot","status":"publish","type":"post","link":"https:\/\/blogs.igalia.com\/itoral\/2007\/02\/13\/howto-integrate-unit-tests-with-buildbot\/","title":{"rendered":"Howto: Integrate unit tests with buildbot"},"content":{"rendered":"<p>After <a href=\"http:\/\/blogs.igalia.com\/itoral?p=29\">my latest post about unit tests integration with buildbot<\/a> some people asked me how I&#8217;ve done it, and got interested in setting up something like that for their own projects. I hope this post helps with this. <\/p>\n<p>Here we go:<\/p>\n<p><b>1. Setup Automake unit tests for your project<\/b><\/p>\n<p>The first step is to provide a set of <a target=\"_blank\" title=\"Automake support for tests\" href=\"http:\/\/www.gnu.org\/software\/automake\/manual\/html_node\/Tests.html\">Automake-compliant unit tests<\/a> for the project. In my case, I used simple tests, so I just needed to define something like this in my <em>Makefile.am<\/em>:<\/p>\n<pre><code>\nif HAVE_CHECK\n    TESTS = gtkbutton \\\n            gtkentry \\\n            ...\n    TESTS_ENVIRONMENT = CK_FORK=yes CK_VERBOSITY=verbose\nelse\n    TESTS =\nendif\n<\/code><\/pre>\n<p>By defining the <em>TESTS<\/em> variable I&#8217;m instructing <em>Automake<\/em> to create a <em>&#8220;check&#8221;<\/em> target for make that will build and execute each of the listed programs. For each test program, <em>Automake<\/em> will assume that an exit code of 0 means success, while any other exit code means failure. Once a test is executed, Automake prints to stdout a message like:<\/p>\n<p><code>PASS: [test]<\/code><br \/>\n<code>FAIL: [test]<\/code><\/p>\n<p>as appropriate. 
This is important, because I&#8217;ll use this later on, when setting up <em>buildbot<\/em>, to detect the passed and failed tests.<\/p>\n<p>The <em>HAVE_CHECK<\/em> conditional should be set at configure time, after checking that we have all the tools we need to build and run the tests, so we can disable the tests in case the system does not provide the necessary stuff. We&#8217;ll see how to do this in the next section.<\/p>\n<p><b>2. Adding unit tests to your project<\/b><\/p>\n<p>Ok, now we have to provide an implementation for all those tests. You can use any unit testing framework to implement them (of course, you can also decide not to use any framework at all; it is up to you). I have used <a href=\"http:\/\/check.sourceforge.net\">Check<\/a> for my tests, so I&#8217;ll explain how to set up autotools to use it properly. In case you use a different tool, a similar setup will be needed.<\/p>\n<p>First, you need to define in your Makefile.am all the stuff needed to build these test programs. For example, in my case, I added something like:<\/p>\n<pre><code>\nDEPS = \\\n    $(top_builddir)\/gdk-pixbuf\/libgdk_pixbuf-$(GTK_API_VERSION).la \\\n    $(top_builddir)\/gdk\/$(gdktargetlib) \\\n    $(top_builddir)\/gtk\/$(gtktargetlib)\n\nINCLUDES = \\\n    -I$(top_srcdir) \\\n    -I$(top_builddir)\/gdk \\\n    -I$(top_srcdir)\/gdk \\\n    -I$(top_srcdir)\/gtk \\\n    -DGDK_PIXBUF_DISABLE_DEPRECATED \\\n    -DGDK_DISABLE_DEPRECATED \\\n    -DGTK_DISABLE_DEPRECATED \\\n    $(GTK_DEBUG_FLAGS) \\\n    $(GTK_DEP_CFLAGS) \\\n    $(CHECK_CFLAGS)\n\nLDADD = \\\n    $(CHECK_LIBS) \\\n    $(top_builddir)\/gdk-pixbuf\/libgdk_pixbuf-$(GTK_API_VERSION).la \\\n    $(top_builddir)\/gdk\/$(gdktargetlib) \\\n    $(top_builddir)\/gtk\/$(gtktargetlib)\n\ngtkbutton_SOURCES = \\\n    check-main.c \\\n    check-utils.c \\\n    check-gtkbutton.c\n\n...\n<\/code><\/pre>\n<p>$(CHECK_LIBS) and $(CHECK_CFLAGS) are provided by <em>Check<\/em> at configure time, providing the necessary libs and flags to compile <em>Check<\/em>-based tests. <\/p>\n<p>There is only one thing missing: detecting whether the system has <em>Check<\/em> installed. As I said before, we have to check this at configure time, so add these lines to your <em>configure.in<\/em> script:<\/p>\n<pre><code>\nAM_PATH_CHECK([0.9.2-4],[have_check=\"yes\"],\n    [AC_MSG_WARN([Check not found; cannot run unit tests!])\n     have_check=\"no\"])\nAM_CONDITIONAL(HAVE_CHECK, test x\"$have_check\" = \"xyes\")\n<\/code><\/pre>\n<p>The <em>AM_PATH_CHECK<\/em> macro is provided by <em>Check<\/em> (you might want to add it to your acinclude.m4 file), and is used here to ensure that an appropriate version of Check is installed, setting <em>HAVE_CHECK<\/em> to true if it is (and thus enabling the build of the <em>Check<\/em>-based tests defined in <em>Makefile.am<\/em>).<\/p>\n<p>Now, re-run your <em>autogen.sh<\/em> and <em>configure<\/em> scripts. 
If all goes well, you should be able to run <em>&#8220;make check&#8221;<\/em> to execute your tests:<\/p>\n<pre><code>\niago@venus:[\/home\/iago\/tests\/gtk+\/ut]# make check\nmake  check-TESTS\nmake[1]: Entering directory `\/home\/iago\/tests\/gtk+\/ut'\nRunning suite(s): GtkButton\n[...]\n100%: Checks: 7, Failures: 0, Errors: 0\ncheck-gtkbutton.c:144:P:new_with_label:test_new_with_label_regular: Passed\ncheck-gtkbutton.c:202:P:new_with_mnemonic:test_new_with_mnemonic_regular: Passed\ncheck-gtkbutton.c:251:P:new_from_stock:test_new_from_stock_regular: Passed\ncheck-gtkbutton.c:290:P:set_get_label:test_set_get_label_regular: Passed\ncheck-gtkbutton.c:313:P:set_get_label:test_set_get_label_invalid: Passed\ncheck-gtkbutton.c:349:P:pressed_released:test_pressed_released_clicked_regular: Passed\ncheck-gtkbutton.c:359:P:pressed_released:test_pressed_released_clicked_invalid: Passed\n<font color=\"blue\">PASS: gtkbutton<\/font>\nRunning suite(s): GtkEntry\n[...]\n<\/code><\/pre>\n<p>Do you see the blue line? That&#8217;s <em>Automake<\/em> output. The lines above it are <em>Check<\/em> output stating the result for each unit test executed.<\/p>\n<p><b>3. Setting up buildbot to build and test your project<\/b><\/p>\n<p>Next, you need to install buildbot and configure it to build your project(s). I&#8217;ll assume you&#8217;ve already done this, but if you haven&#8217;t yet, you can follow chapter 2 of this manual:<\/p>\n<p><a target=\"_blank\" title=\"Buildbot manual\" href=\"http:\/\/buildbot.sourceforge.net\/manual-0.7.5.html\">http:\/\/buildbot.sourceforge.net\/manual-0.7.5.html<\/a><\/p>\n<p>It is very easy, really.<\/p>\n<p>Once the above is done, we need to add the build step that will take care of testing. To do this, in the master setup of your project, edit the <em>master.cfg<\/em> file. 
Go to the <em>Builders<\/em> section, where you configured the build phases of your project; it might look more or less like this:<\/p>\n<pre><code>\nf = factory.BuildFactory()\nf.addStep(SVN, svnurl=projecturl)\nf.addStep(step.ShellCommand, command=[\"make\", \"all\"])\nf.addStep(step.ShellCommand, command=[\"make\", \"install\"])\n<\/code><\/pre>\n<p>Now, let&#8217;s add a new step which will take care of the testing:<\/p>\n<pre><code>\nf.addStep(step.ShellCommand, command=[\"make\", \"check\"])\n<\/code><\/pre>\n<p>You can restart <em>buildbot<\/em> now to see how it works. Once <em>buildbot<\/em> finishes building the project, you can see that all you get for the phase that takes care of the tests is a plain text log with the <em>&#8220;make check&#8221;<\/em> command&#8217;s stdout. Let&#8217;s now see how we can get a better report.<\/p>\n<p><b>4. Adding the tests HTML report<\/b><\/p>\n<p>To get the HTML report I showed in my latest post, I created a new customized build step class inheriting from steps.shell.ShellCommand, which is a base class for shell-based commands. 
This new class will be specialized for &#8220;make check&#8221; commands: <\/p>\n<pre><code>\nclass TestCommand(steps.shell.ShellCommand):\n    failedTestsCount = 0\n    passedTestsCount = 0\n    testsResults = []\n\n    def __init__(self, stage=None, module=None, moduleset=None, **kwargs):\n        steps.shell.ShellCommand.__init__(self, description=\"Testing\",\n                                                      descriptionDone=\"Tests\", \n                                                      command=[\"make\", \"check\"], **kwargs)\n        self.failedTestsCount = 0\n        self.passedTestsCount = 0\n        self.testsResults = []\n        testFailuresObserver = UnitTestsObserver ()\n        self.addLogObserver('stdio', testFailuresObserver)\n\n    def createSummary(self, log):\n        if self.failedTestsCount &gt; 0 or self.passedTestsCount &gt; 0:\n            self.addHTMLLog ('tests summary', self.createTestsSummary())\n\n    def getText(self, cmd, results):\n        text = steps.shell.ShellCommand.getText(self, cmd, results)\n        if self.failedTestsCount &gt; 0 or self.passedTestsCount &gt; 0:\n            text.append(\"tests failed: \" + str(self.failedTestsCount))\n            text.append(\"tests passed: \" + str(self.passedTestsCount))\n        return text\n\n    def evaluateCommand(self, cmd):\n        if self.failedTestsCount &gt; 0:\n            return WARNINGS\n        else:\n            return SUCCESS\n\n    def createTestsSummary (self):\n            # Create a string with your html report and return it\n            ...\n<\/code><\/pre>\n<p>The most interesting stuff is in the <em>__init__<\/em> method, where we create an observer <em>(UnitTestsObserver)<\/em> for the stdout log. This means that each time a new line is output to stdout, that observer will be notified, so it can process it. <\/p>\n<p>The <em>getText<\/em> method provides the text that is shown in the phase box of the <em>Waterfall<\/em> view of the project. 
In this case it will show the number of passed and failed tests.<\/p>\n<p>The <em>createSummary<\/em> method is used to add additional information (for example, extra logs). In this case I use this method to link a new log with the HTML summary of the tests done.<\/p>\n<p>The <em>evaluateCommand<\/em> method is called when the <em>&#8220;make check&#8221;<\/em> command finishes, to decide the final status of the phase. In this case I set the status to &#8220;WARNING&#8221; (orange color in the Waterfall view) when there are failed tests, or SUCCESS otherwise. I could have set it to FAILURE instead, but I decided not to flag the whole build as FAILED just because some tests failed.<\/p>\n<p>Finally, the <em>createTestsSummary<\/em> method is used to generate the HTML with the tests summary that is linked in <em>createSummary<\/em>. In this method you must create and return a string with the HTML page contents.<\/p>\n<p>Ok, so as we&#8217;ve seen, the main piece here is the log observer, which is responsible for parsing and extracting all the interesting information from stdout, in order to provide the data we need to generate the results (passed and failed tests). 
Let&#8217;s see how I implemented it:<\/p>\n<pre><code>\nimport re\n\nclass UnitTestsObserver(buildstep.LogLineObserver):\n    regroupfailed = []\n    regrouppassed = []\n    reunittest = []\n    unittests = []\n\n    def __init__(self):\n        buildstep.LogLineObserver.__init__(self)\n        if len(self.regroupfailed) == 0:\n            self.regroupfailed.append((re.compile('^(FAIL:) (.*)$'), 1))\n        if len(self.regrouppassed) == 0:\n            self.regrouppassed.append((re.compile('^(PASS:) (.*)$'), 1))\n        if len(self.reunittest) == 0:\n            self.reunittest.append((re.compile('^([^:]*):([^:]*):([^:]*):([^:]*):([^:]*):([^:]*).*$'), 4, 5))\n\n    def outLineReceived(self, line):\n        matched = False\n        for r in self.regroupfailed:\n            result = r[0].search(line)\n            if result:\n                self.step.failedTestsCount += 1\n                self.step.testsResults.append((result.groups()[r[1]].strip(), False, self.unittests))\n                self.unittests = []\n                matched = True\n        if not matched:\n            for r in self.regrouppassed:\n                result = r[0].search(line)\n                if result:\n                    self.step.passedTestsCount += 1\n                    self.step.testsResults.append((result.groups()[r[1]].strip(), True, self.unittests))\n                    self.unittests = []\n                    matched = True\n        if not matched:\n            for r in self.reunittest:\n                result = r[0].search(line)\n                if result:\n                    err_msg = result.groups()[r[2]].strip()\n                    if err_msg == \"Passed\":\n                        self.unittests.append((result.groups()[r[1]].strip(), True, err_msg))\n                    else:\n                        self.unittests.append((result.groups()[r[1]].strip(), False, err_msg))\n                    matched = True\n<\/code><\/pre>\n<p><em>regroupfailed<\/em> and <em>regrouppassed<\/em> are 
lists of regular expressions that match failed and passed tests. In my case, because I&#8217;m using <em>Automake<\/em>, I know that failed tests output a <tt>FAIL: [testname]<\/tt> line to stdout while passed tests output <tt>PASS: [testname]<\/tt>, so I added regular expressions to match these cases. This provides integration with <em>Automake<\/em>. <em>reunittest<\/em> is a list of regular expressions that match <em>Check<\/em>&#8216;s output for each unit test executed. When <em>Check<\/em> is used in verbose mode it prints, for each unit test executed, a line like this one to stdout:<\/p>\n<p><code>check-gtkfilechooser.c:80:P:set_get_action:test_set_get_action_regular: Passed<\/code><\/p>\n<p>In this example, <em>test_set_get_action_regular<\/em> is the name of the unit test, and the last component is &#8220;Passed&#8221; if the test was successful or an error message otherwise. Thus, I added to the list a regular expression that matches such lines and extracts the interesting information from them.<\/p>\n<p>Because <em>Automake<\/em> does not print its output until all the unit tests of the test program are done, I do not know which test program the unit tests belong to until I get the <em>Automake<\/em> output. That&#8217;s why I keep the matched unit tests in the <em>unittests<\/em> variable until I match an <em>Automake<\/em> passed\/failed line (at that moment, I attach all the matched unit tests to that test program and reset the unittests variable).<\/p>\n<p>After processing the entire stdout log, the <em>testsResults<\/em> attribute of the <em>TestCommand<\/em> instance will provide a list with one element per test program executed. 
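For illustration only (these test program and unit test names are invented, not taken from a real run), such a testsResults list might look like the sketch below, and a minimal createTestsSummary could simply walk it to build the HTML page:

```python
# Illustrative only: a possible shape for the data the observer collects.
# Each element: (test program name, passed?, list of unit test results).
tests_results = [
    ("gtkbutton", True, [
        ("test_new_with_label_regular", True, "Passed"),
        ("test_set_get_label_invalid", True, "Passed"),
    ]),
    ("gtkentry", False, [
        ("test_insert_text_regular", False, "Assertion failed"),
    ]),
]

def create_tests_summary(results):
    """A minimal sketch of what a createTestsSummary method could return."""
    parts = ["<html><body>"]
    for prog_name, prog_passed, unit_tests in results:
        status = "PASS" if prog_passed else "FAIL"
        parts.append("<h2>%s: %s</h2><ul>" % (status, prog_name))
        for ut_name, ut_passed, ut_msg in unit_tests:
            # For each unit test, show its name and "Passed" or the error.
            parts.append("<li>%s: %s</li>" % (ut_name, ut_msg))
        parts.append("</ul>")
    parts.append("</body></html>")
    return "".join(parts)

print(create_tests_summary(tests_results))
```

This is just a sketch of the idea; the real report can of course be as elaborate as you want.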
If we name one of those elements as &#8216;t&#8217;, then:<\/p>\n<ul>\n<li>t[0] is the name of the test program.\n<li>t[1] is True if it is a passed test or False otherwise.\n<li>t[2] will be a list with all unit tests executed for that test program.\n<\/ul>\n<p>If we name an element of t[2] as &#8216;u&#8217;, then:<\/p>\n<ul>\n<li>u[0] is the name of the unit test.\n<li>u[1] is True if it is a passed unit test or False otherwise.\n<li>u[2] is the string &#8220;Passed&#8221; for passed tests or the error message for failed tests.\n<\/ul>\n<p>This is all the information that we need to write the HTML report in the <em>createTestsSummary<\/em> method.<\/p>\n<p>Ok, we are almost there; now we just need to replace the testing step we added before in our <em>master.cfg<\/em> file with:<\/p>\n<pre><code>\nf.addStep(TestCommand)\n<\/code><\/pre>\n<p>In summary, I&#8217;ve built a custom build step inheriting from <em>steps.shell.ShellCommand<\/em>. This custom step will just execute a <em>&#8220;make check&#8221;<\/em> command. I also overrode some methods to customize the information reported once the command is finished. I used the <em>createSummary<\/em> method to link an HTML log with a personalized tests summary. To gather all the information these methods need (information about the passed and failed tests), I added an observer to the stdout log that parses each line the command writes to stdout, looking for <em>Automake<\/em> or <em>Check<\/em> messages and storing the relevant information for later use.<\/p>\n<p>&#8230; and that&#8217;s all. I really hope this helps you. If you have any suggestion for improving this, I&#8217;ll be very glad to know!<\/p>\n<p><em>Final note: because I use jhbuild to build gtk+ and its dependencies from buildbot, the code above is not exactly the same as what I&#8217;m using, so it is possible that I missed something or made mistakes. 
If you find any mistake, please, let me know and I&#8217;ll fix it.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>After my latest post about unit tests integration with buildbot some people asked me how I&#8217;ve done it, and got interested in setting up something like that for their own projects. I hope this post helps with this. Here we go: 1. Setup Automake unit tests for your project The first step is to provide &hellip; <a href=\"https:\/\/blogs.igalia.com\/itoral\/2007\/02\/13\/howto-integrate-unit-tests-with-buildbot\/\" class=\"more-link\">Continue reading<span class=\"screen-reader-text\"> &#8220;Howto: Integrate unit tests with buildbot&#8221;<\/span><\/a><\/p>\n","protected":false},"author":16,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-36","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/posts\/36","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/users\/16"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/comments?post=36"}],"version-history":[{"count":0,"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/posts\/36\/revisions"}],"wp:attachment":[{"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/media?parent=36"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/categories?post=36"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.igalia.com\/itoral\/wp-json\/wp\/v2\/tags?post=36"}],"curies":[{"name":"w
p","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}