I haven't posted for some time; I was quite busy setting up some CI / metrics tools on a new project.

This raised some questions / comments for me.

1) Complexity of the tools

When you read about it (even in my posts), it all sounds simple… I had forgotten a bit about the 'open community' and its bad aspects.

Let me illustrate a bit. I am working in a team using mainly Java / PHP.

So, we are using standard tools:

  • Source revision control: SVN
  • IDE: Eclipse
  • Build: Maven
  • CI: Hudson

So, up to now, everything looks fine. I've been using some static code analysis tools in order to assess code quality:

JDepend, PMD, FindBugs, JavaNCSS, Cobertura, Sonar, Coverity, …

So, nothing really exotic or fancy; most of them are 'classical' static code analysis tools.

So, doing my 'Yoda' show, I encouraged the team to use them as early as possible in their development process, i.e. in the IDE.

I plugged them into their pom file (the Maven configuration file), updated my Hudson installation to show the trends, …
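To give an idea of the wiring involved, here is a minimal sketch of what the reporting section of a pom might look like. The groupIds and artifactIds are the usual ones for these plugins, but the versions are purely illustrative; as you'll see below, finding versions that actually work together is the hard part.

```xml
<!-- Illustrative pom.xml fragment; plugin versions are assumptions,
     shown only as an example of this kind of configuration. -->
<reporting>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>2.5</version>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>2.3.1</version>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.4</version>
    </plugin>
  </plugins>
</reporting>
```

Each of these entries pulls in its own tool version, its own dependencies, and needs a matching Hudson plugin to display the trend.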

AND… you know what… 6 tools, 16 versions needed, because of incompatibilities between Maven, Hudson, PMD, …

1 tool, 4 plugins needed… most of the time just to parse an XML file, or to draw a simple curve from linear data…

What a mess! Where is the simplicity? The ease of use? Why couple a CI tool with build-specific plugins? Why mix build tools with source control tools?

Why not keep the principle of loose coupling between build, CI, and static code analysis tools?

Writing code is important, but application design should not be forgotten.

2) The second point is a comment: it's expensive to fix issues…

How can someone argue that fixing an issue at development time… 5 to 10 minutes… is more expensive than discovering the issue in production (at something like $1 a minute?), which requires more development, some QA, some deployment…
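The back-of-envelope math makes the point. Here is a small sketch in Java; all the figures (the $1/minute rate from above, and the rework, QA, and deployment times) are assumptions for illustration, not measured data.

```java
// Back-of-envelope comparison: fixing a bug at development time
// versus after it reaches production. All figures are illustrative
// assumptions, not measured data.
public class BugCostEstimate {
    public static void main(String[] args) {
        double costPerMinute = 1.0; // assumed ~$1 per person-minute

        // Fixing during development: the developer spots it right away.
        double devFixMinutes = 10; // 5 to 10 minutes; take the upper bound
        double devCost = devFixMinutes * costPerMinute;

        // Fixing in production: re-development, QA, and deployment.
        double reworkMinutes = 120; // assumed: reproduce, diagnose, fix
        double qaMinutes = 60;      // assumed: regression testing
        double deployMinutes = 30;  // assumed: packaging and release
        double prodCost = (reworkMinutes + qaMinutes + deployMinutes)
                * costPerMinute;

        System.out.printf("dev fix: $%.0f, production fix: $%.0f (%.0fx)%n",
                devCost, prodCost, prodCost / devCost);
        // → dev fix: $10, production fix: $210 (21x)
    }
}
```

Even with these modest assumptions, the production fix costs an order of magnitude more, and that is before counting the impact on users.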

Of course, the bug may never be seen in production… but for how long? Don't forget Murphy's law: if you can have a problem, you will have it!