LibrePlan makes it easier to know the project status

If you follow the LibrePlan project closely, you will know that we are working on the features that will be included in LibrePlan 1.3, which is expected to be released next April (you can look at the roadmap here).

Among the things included in the roadmap, we considered it very interesting to make the tool more intelligent by providing a set of indicators informing about the status of the project. At present, of course, you can also learn the status of a project by examining the planning and extracting the reports existing in LibrePlan. However, we thought we could go one step further.

We realized that, although monitoring and controlling the project plan can be done by the project manager quite quickly and easily, there is a user role, different from the project manager, that is also very interested in the status of the company's projects. This user can be defined as an employee holding a senior position in the organization hierarchy. For instance, the CEO of a company is a good prototype of this sort of profile.

This profile has some characteristics that make it different from the project manager role:

  • The CEO is a user with less project management knowledge than the project manager and, therefore, has more difficulty analyzing the project Gantt chart, interpreting the progress measurements correctly, or applying project management techniques like EVM (Earned Value Management) and the Monte Carlo simulation implemented in LibrePlan.
  • The CEO is a user whose main duties are not related to project management and who, because of this, has less time available to follow the day-to-day of the projects open in the company.
  • Although the CEO has both less project management knowledge and less time to devote to it, he is interested in knowing how well or badly a project is going, in order to make executive decisions if required.

So, taking into account the above points, we assessed that a set of metrics, usually called KPIs (Key Performance Indicators), could be very useful for this kind of senior employee. Project management KPIs measure how well a project is performing against its goals and objectives, i.e., finishing on time and at the expected cost.

KPIs are perfect for CEO users because they have three properties that match the needs and usage pattern of these executive users:

  1. They sum up information. They gather planning data and, through calculations, provide a panoramic view of the situation of a project according to the specific goal they are designed to measure.
  2. They are easy to understand. They do not require much project management background to be read. Besides, in LibrePlan they can be combined to provide a single verdict about a project.
  3. They are fast. The user does not need to spend much time with the project plan to get a view of the status of the project.

I would also like to highlight that, although they are very important for senior employees, the KPIs are also very helpful for project managers and everyone taking part in the planning, because they save time and provide a good picture of the status of the project at any moment.

The KPIs will be displayed in LibrePlan 1.3 in a screen of the project planning that will be called the dashboard. With this name we are drawing an analogy with the physical dashboards present in complex machines like, for example, a plane, where the pilots have a flight deck with a bunch of sensors monitoring every single aspect of the flight. In the same way, in the LibrePlan dashboard, the person in charge of the planning will be able to look at a set of numerical data and charts that will help bring the project to fruition.

We have been studying which KPIs to implement for the first version of the dashboard, and we have followed two principles in the research: first, to cover the relevant aspects of the status of a project and, second, to maximize the value added to the program.

Once this investigation process concluded, the result was the identification of four dimensions and a set of KPIs per dimension. Besides, according to these four dimensions, we designed the layout of the dashboard divided into four areas, each one containing the KPIs belonging to it.

The dimensions and KPIs are the following:

Progress

This dimension measures the progress of the project, i.e., work already done versus work remaining to close the project. KPIs:

  • Global progress chart. It will sum up the current global progress of the project and will show the theoretical value the project progress should have if everything went as expected.
  • Task status chart. It will show the number of tasks finished, ready to start, blocked by a previous dependent task, etc.

Time

This area will show how well the project is performing with respect to deadlines and other time commitments. KPIs:

  • Task completion delay histogram. It will show a histogram with the number of days the tasks of the project are finishing ahead of or after the planned end date.
  • Deadline violation KPI. A pie chart with the tasks that have missed their deadline, the tasks that have met it, and the tasks without a configured deadline.
  • Margin with project deadline. The number of days the project finishes before or after the configured project deadline.

Resources

This dimension will analyze the resources allocated to the project. KPIs:

  • Estimation accuracy histogram. It will be a histogram with the deviation between the hours planned and the hours finally devoted by the company's resources to the tasks of the project.
  • Overtime ratio. It will show how much overtime the resources allocated to the project are doing.
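To make these two metrics more concrete, here is a small sketch of one plausible way to compute them. The formulas are my own illustrative interpretation, not necessarily the exact ones implemented in LibrePlan, and the figures are made up:

```python
# Plausible formulas for the two resource KPIs above; illustrative only,
# they may differ from LibrePlan's actual implementation.

def estimation_deviation(planned_hours, actual_hours):
    """Relative deviation of actual effort versus the estimate, in percent.
    Positive values mean the task needed more hours than planned."""
    return 100.0 * (actual_hours - planned_hours) / planned_hours

def overtime_ratio(overtime_hours, total_worked_hours):
    """Fraction of the worked hours that were overtime."""
    return overtime_hours / total_worked_hours

# A task planned for 80 hours that finally took 100:
print(estimation_deviation(80, 100))   # 25.0 (% over the estimate)
# 20 overtime hours out of 200 worked hours:
print(overtime_ratio(20, 200))         # 0.1
```

The estimation accuracy histogram would then simply bucket the deviation values of all the tasks of the project.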

Cost

This area will include some metrics belonging to the EVM technique. These metrics are functions of time and, in this area, they will be shown calculated at the current date. KPIs:

  • Cost Variance. It will be the difference between the BCWP (Budgeted Cost of Work Performed) and the ACWP (Actual Cost of Work Performed). It says how much we are losing or gaining with respect to the planned cost.
  • Cost Performance Index. It informs about the value currently being earned per unit of cost spent.
  • Estimate at Completion (EAC). It is a projection of what the final project cost will be at completion.
  • Variance at Completion. It is a projection of the estimated gain or loss at completion time.
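To see how these metrics relate to each other, here is a small numeric sketch using the standard EVM formulas (the figures are made up, and LibrePlan's EAC projection may use a different variant than the simple BAC/CPI one shown here):

```python
# Standard EVM metrics computed at the current date.
# BAC (Budget At Completion), BCWP and ACWP values are made-up examples.
BAC = 1000.0    # total budgeted cost of the project
BCWP = 400.0    # budgeted cost of work performed (earned value)
ACWP = 500.0    # actual cost of work performed

CV = BCWP - ACWP     # Cost Variance: negative means over budget
CPI = BCWP / ACWP    # Cost Performance Index: < 1 means over budget
EAC = BAC / CPI      # a common Estimate At Completion projection
VAC = BAC - EAC      # Variance At Completion

print(CV, CPI, EAC, VAC)   # -100.0 0.8 1250.0 -250.0
```

In this example the project has earned 400 of value while spending 500, so it is running over budget and, if the trend holds, it will finish 250 over the original budget.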

And finally, as a picture is worth a thousand words, although the dashboard is a work in progress, I would like to include here a snapshot of some of the KPIs mentioned above that the LibrePlan team is currently implementing.

KPI snapshot
KPIs development (work in progress)

Besides, as we usually do, if you want to share with us your ideas or requests about which KPIs you miss or things you regard as important for the future, just let us know using the communication channels available in LibrePlan.

LibrePlan visits Brazil

Igalia, the company I belong to and which supports my work on LibrePlan, is taking part in a trade mission to Brazil between September 25th and October 2nd. The mission will focus on the city of São Paulo, which is the most important financial center in the country and one of the biggest cities in the world. It has a population of 11 million people and, including the metropolitan area, it reaches 20 million, numbers which are really amazing.

São Paulo city view at night

I will be the person representing my company on this trip, which will take me to the southern hemisphere for the first time in my life. There I plan to check for myself whether it is a myth that water swirls counter-clockwise in toilets and sinks, contrary to what happens in the northern hemisphere.

Aside from satisfying my personal curiosity ;), my purpose during my time there will be to present and explain LibrePlan to everybody interested in the project. I would be really happy if some Brazilian free software firms and other software technology providers got involved in our community. We want LibrePlan to be the reference free software planning tool and to have as many companies and private individuals as possible using, installing, collaborating on and taking care of the program.

During the next week I will be preparing and finalizing the timetable of meetings with interested contacts. So, if you are reading this post, you are in São Paulo or nearby, and you want to know more about LibrePlan, please contact me by sending an e-mail to jmoran {at} igalia {dot} com. We can meet up and talk.

Vemo-nos lá! (See you there!)

Bringing Functional tests to NavalPlan (LibrePlan)

Chasing Quality

One of the maxims we try to follow in NavalPlan (LibrePlan) is to build a project with good quality.

Quality in software refers to two different notions:

  • Functional quality. It is the degree to which software satisfies its specifications. The better the software complies with them, the higher its quality.
  • Structural quality. It relates to all the non-functional requirements that can be stated about a program. For instance, how good the development cycle is, how maintainable the source code is, what performance is achieved, etc.

With that said, and taking this classification into account, I would like to introduce automated web tests and relate them to this taxonomy.

In the first place, I will define what they are for those of you not familiar with them. In short, we can say that automated web tests are black-box tests in which the interface of a web application is tested automatically. In other words, they are a type of test in which a program plays the role of a real user and interacts with web pages through a browser to ensure that the behavior of a web application is the expected one.

In the second place, as I said, I would like to link them with quality. In general, we can say that they provide structural quality because, on average, a web application with functional tests has higher quality than one without them. They help to detect failures and regressions and, therefore, in the end, the likelihood of having bugs is smaller.

Sahi Web Tests

In the NavalPlan team we have been looking for the best alternative for automated web tests. Apart from the general reason cited above of achieving higher quality, we try to address the combined effect of having a feature-rich application and a small testing team. When these two factors come together, the likelihood of regressions is high, and so is the cost of a comprehensive manual test of the application. Therefore, with a good set of web tests we would improve both robustness and productivity, allowing us to plan less testing time.

After evaluating several alternatives, the technology we chose is Sahi, and the reasons which supported our decision are the following:

  1. In NavalPlan we use the web interface framework ZK. This framework dynamically generates the id attribute of the HTML elements which make up the web pages. This makes it difficult to develop automated tests, because the id is one of the easiest ways to locate HTML elements in the DOM, and some of the most common testing frameworks, like Selenium, rely basically on them. However, as ids are dynamic in ZK, i.e., each time a page is rendered they are different, it is impossible to make tests repeatable with a technology based on ids. Luckily, Sahi overcomes this situation because it has a powerful accessor API which helps to locate HTML elements using concepts like indexes, human DOM relationships such as near or parent, CSS classes, etc.
  2. Sahi is browser independent. This means the automated tests can be executed in several browsers. This is great, because a RIA application like NavalPlan uses the latest HTML technologies, and some of them might not be fully supported in a particular browser. We can run the tests in all of them, and this is a big advantage for us.
  3. Tests are programmed in JavaScript, which in my opinion is a great idea. To start with, because JS is the language used by browsers since the very beginning and is a standard with a good API to interact with the DOM. Another good feature is that, because the tests are written in a programming language, we have programming tools like functions, data types, control structures… which gives you the highest flexibility to build tests as complex as you need. Some other testing technologies rely on configuration files, like XML files, and this greatly limits the possibility of getting off the path the web test framework developers initially envisioned.

Now I will focus on the things I would like to be different in Sahi. Among them, I would highlight that there is a proprietary product (Sahi Pro) built on top of the open source Sahi. Sahi Pro provides the more advanced features, and I really miss having some of them in the open source product, for example, a better reporting system. It would be nice if they offered an open source license for the Sahi Pro product to be used with free software products like NavalPlan. It is a way to promote both quality in open source and open source itself without damaging the commercial interests of a company supporting a free software product.

Where are we?

We started developing Sahi tests last month and, at present, we have tests for some of the simpler use cases, which are CRUD use cases related to administrative operations. If you feel like having a look at how they work, I encourage you to deploy NavalPlan, download the git repository, and read the README file in the script/functional-tests folder, where the instructions to run them are described.

Additionally, they say that tests are successful when they detect errors and, in this sense, we can proudly 😉 say that right now we have reported some new bugs in Bugzilla thanks to the functional Sahi tests developed so far.

Where are we going?

Our roadmap for web tests will consist of increasing coverage and tackling more complex interface operations in the near future. After that, a final desired scenario will consist of having a platform in which:

  1. We develop a Maven plug-in, or write a configuration, to be able to run the tests as part of the Maven test phase of the build process.
  2. In NavalPlan we use CI, and the continuous integration server we have is Hudson. It would be great to integrate Sahi test execution into the Hudson build cycle and to have the test results published in the Hudson interface so they are easy to find.

I attend London DDD exchange 2011

On June 10th, I will be attending the DDD exchange 2011 in London. DDD exchange is an event to learn and share experiences about using Domain-Driven Design, and this is its 4th annual edition. There I hope to meet people interested in this topic and to get first-hand information from professionals applying this way of designing software. So, if you are coming, I'll see you there!

I very much liked the concept of DDD, which I learnt by reading the book of the same title, Domain-Driven Design, written by Eric Evans, who, by the way, will give a keynote at the exchange. To sum up very briefly, DDD is based on the following three points, extracted from the Wikipedia article:

  • Placing the project’s primary focus on the core domain and domain logic
  • Basing complex designs on a model
  • Initiating a creative collaboration between technical and domain experts to iteratively cut ever closer to the conceptual heart of the problem.

Domain Driven Design

I have been taking part in NavalPlan from the beginning, in project management and analysis roles, and in the team we use some of the practices and ideas of DDD. My opinion is that the result of this experience has been good, and I would recommend using DDD in applications which require a fair amount of business logic, like NavalPlan.

Finally, any trip is an occasion to meet people living in other regions who share interests with you. So, if you are in London on June 10th and want to know more about NavalPlan, please let me know 🙂

NavalPlan class diagrams

Tomorrow a NavalPlan development course for AGASOL companies will start in Santiago de Compostela. The aim of the course is to grow our community by getting more people involved in the project.

Basically, the course will be divided into two parts:

  • User part. The main functionalities of the application will be explained.
  • Development part. Example use cases will be developed, covering the different technologies used in the project and examining the architecture from top to bottom.

In order to introduce the development part, I wrote several class diagrams with the main entities of the application. I composed them using the tool Linguine Maps, about which I talked in my previous blog post, NavalPlan Domain Model diagram.

I prepared some slides with these UML class diagrams and uploaded them to the files section of the project on SourceForge.net. I think they can be useful not only for the course but also as technical documentation for everyone interested in NavalPlan.

Click here to download the material.

NavalPlan Domain Model diagram

NavalPlan is a Java application built using object-oriented programming, whose data are stored in a relational database (currently PostgreSQL and MySQL are the supported and tested RDBMSs). To map objects from the object model to relational tables, the Hibernate ORM is used, a well-known and widely deployed framework on the Java platform.

In the business layer, the Domain Model architecture pattern is applied, which briefly consists of having rich business objects that encapsulate in the same class both the data and the behavior related to them. I think it was a good decision and I am happy with it. Among other advantages, it allows complex behavior to be organized and business logic to be reused easily.
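A minimal sketch of the pattern may help: a rich business object keeps its data and the rules that operate on them together, instead of exposing bare getters and setters and putting the logic in external services. The class and rules below are a hypothetical illustration, not code taken from the NavalPlan source:

```python
# Minimal Domain Model sketch: data and behavior live in the same class.
# The Task entity and its rules are hypothetical, not NavalPlan code.
class Task:
    def __init__(self, name, total_hours):
        if total_hours <= 0:
            raise ValueError("a task must have a positive workload")
        self.name = name
        self.total_hours = total_hours
        self.worked_hours = 0

    def report_work(self, hours):
        # Business rule enforced by the entity itself
        if hours < 0 or self.worked_hours + hours > self.total_hours:
            raise ValueError("invalid amount of work reported")
        self.worked_hours += hours

    def progress(self):
        # Derived behavior computed from the entity's own data
        return self.worked_hours / self.total_hours

task = Task("Write specs", 40)
task.report_work(10)
print(task.progress())   # 0.25
```

Because invariants like "you cannot report more work than the task holds" live inside the entity, every caller gets them for free, which is the reuse advantage mentioned above.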

The NavalPlan domain model is large and complex, and one of the problems is maintaining it from a documentation point of view, because keeping UML diagrams updated is time consuming.

To address the problem above, and to have some nice diagrams to teach NavalPlan to new developers, I have been playing today with a tool to infer UML class diagrams from the Hibernate mapping files. With a tool like this, a lot of time can be saved!

The tool I found is Linguine Maps. You have to develop a program to use it but, after resolving some configuration issues, I got quite a good result. Therefore, we will use it in NavalPlan from now on.

With Linguine Maps you can configure several things and create diagrams with just the classes you are interested in. As an example, and to get a whole picture of NavalPlan, I generated a chart with all the classes. I know that it may remind you of an Indian war with so many arrows, but I think a good poster could be made from it 😉



Procedures to measure project progress (I)

The purpose of this post is to start a series of blog entries to share my thoughts about ways to measure progress in project planning and to explain methods for doing it.

Delimiting the problem

The field I want to talk about is progress measurement in projects represented by Gantt charts. These projects consist of a set of activities with logical dependencies among them, which are carried out by resources, that can be people or machines. Besides, resources can be over-allocated.

The aim of the planning is to meet the project deadline, to keep the cost lower than the budgeted money, and to assess whether it is possible to carry out the project with the available company resources. This job is complex and is usually aided by planning, monitoring and controlling project management tools.

In my opinion, when measuring progress it is important to distinguish two levels:

  • Task level. It is the most common analysis scope and consists of measuring progress inside tasks. Although there are several possibilities, to put it simply, it will be assumed that it consists of specifying the work already finished in the task in which the measure is being taken.
  • Project level. At this level the project is considered as a whole, with all its tasks, to know how it is going globally. It is a scope which is not as well studied as the task level and, therefore, less known as well. I will also use the term global progress to refer to this type of progress.

Having said that, I will focus on the measurement of progress at project level. In this area, the key is to answer the following two questions: is the project delayed or ahead of its planning at the present time? And by how much?

The project manager needs the answers to those questions because they allow him to make decisions. For instance, if there is a certain accumulated delay in a project, the project manager could devote more resources to it in order to recover the delay and finish on time.

Progress at project level by a weighted addition of tasks

This is the method I will explain in this post, and it is one of the methods that may first come to mind when you think about this problem. I will explain it by answering two questions which are mandatory when defining a method to measure progress at project level.

Which tasks contribute to the project progress?

In the method I propose, all the tasks are considered. The rationale is that all the tasks which make up a project are important and, therefore, all of them must contribute to the global progress.

What is the contribution of each task?

The point here is to decide the way in which each task influences the global progress. Two strategies can be considered:

To use the average

The principle which supports this option is equality. It states that the contribution is the same for all tasks.

To use a weighted average

It implies breaking the equality principle and setting that the contribution to the global progress is not the same for each activity. There are many ways to establish the weights and, regarding the global progress calculation, the point is to identify the task feature(s) that make tasks different. So, what are these features? Well, from my point of view, the feature which distinguishes tasks concerning progress is the amount of work a task consists of. This is so because having tasks with different amounts of hours means that, for the same progress value at task level, the amount of remaining work to be completed per task is different.

The bad effect of ignoring the different amount of remaining work of each task can easily be seen in the following example: if we have a project with two tasks, one of 100 work hours and another of 10 work hours, and we record a progress of 10% in the first task and 90% in the second one, we get the following global progress value using the average: (10 + 90) / 2 = 50%. Half of the project is already done according to the global progress, but the remaining work to finish is much more than 50%.

To correct this behavior, the method I propose is to add the progress per task, weighting the progress value of each task by the percentage of hours that task represents with regard to the project's total hours. With the average calculation, the project manager thinks the project is going very well when reality is not so good; using the weighted average instead, he sees a value which approximates the project state better: a global progress of 10*(100/110) + 90*(10/110) = 17.27%.
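The two strategies can be sketched in a few lines, using the example figures from the text:

```python
# Global progress of a project from per-task (progress %, hours) pairs.
# Data from the two-task example above: 10% of 100 h and 90% of 10 h.
tasks = [(10, 100), (90, 10)]

def average_progress(tasks):
    # Equality principle: every task contributes the same
    return sum(p for p, _ in tasks) / len(tasks)

def weighted_progress(tasks):
    # Each task weighted by its share of the project's total hours
    total_hours = sum(h for _, h in tasks)
    return sum(p * h / total_hours for p, h in tasks)

print(average_progress(tasks))             # 50.0
print(round(weighted_progress(tasks), 2))  # 17.27
```

The weighted value of 17.27% reflects much better that most of the big task's work is still pending.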

Finally, I would not like to end without stressing that in NavalPlan users can measure global progress by choosing among several options, and the method explained here is one of the alternatives.

A taxonomy problem in project management

Project management, according to PMI, is the application of knowledge, skills, tools and techniques to project tasks to meet the project requirements. It is made up of a set of activities which can be grouped into five areas:

  1. Initiating
  2. Planning
  3. Executing
  4. Monitoring and Controlling
  5. Closing

Different applications can be used to help carry out the activities above, and they are usually called project management applications.

Throughout my experience, however, I have found that the previous name is not the most suitable one, because it causes two undesirable situations.

The first one relates to the fact that the inclusion level is too broad. It is too general to say that a program is a project management application.

The second situation happens when the use of the term causes confusion: people taking part in the communication may attribute different meanings to that software category. There are two reasons which explain this fact, from my point of view:

  • I do not know of programs which cover all the process areas and, in case they exist, they are a minority and not widely spread.
  • Project managers use a set of programs which cover just some of the mentioned areas (but not all of them).

Therefore, if we take both things into account, we find scenarios where people use the term project management to mean qualitatively different applications, and that is the cause of the confusion.

The solution I suggest to avoid these problems is to use the process area name(s) to categorize applications, adding this prefix to the project management term. Besides, many times this last part will be implicit and unnecessary. As an example, we can say that NavalPlan covers areas (2) and (4) and, thus, would be a planning, monitoring and controlling tool.

Finally, I would like to end with a last observation about applications which have among their features:

  • Bug tracking
  • Time tracking
  • Wiki
  • Calendar
  • Document management
  • etc.

These applications are very common and I do not know of a specific term for them, apart from project management. They are used for coordination, resource collaboration, etc., during the execution of certain types of projects.

Having said that, the idea I want to share is that, according to my proposal, I would call them executing project tools, area (3).

I will go in depth into the categorization of project management tools in a later blog entry.