I’ve seen a lot of posts and articles recently saying distributed CI is the future. Sorry guys, it’s the present.
When we look at the evolution of tools, techniques, and requirements, we see distributed tools having a lot of success. So, do build masters over-design their build frameworks, or do they already work in a distributed way?
In this article I’ll summarize why I think CI is already distributed.
What’s distributed today?
- Teams … yep, co-located teams have become more and more uncommon. Industry requirements and specialized knowledge make co-location of brains more and more difficult, even for giants.
- Source repository: as soon as your team is distributed, you need to distribute your source repository. Of course, network speed and huge servers may give you the feeling you don’t need a distributed source repository. But do you really have distributed administration? Does the team really use the power of your source repository? Do you use pre/post-commit hooks? How often do they commit?
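As an illustration of the hooks question above, here is a minimal commit-msg hook sketch in Python. The issue-tag rule and the `PROJ-123` prefix are hypothetical examples, not a policy from any particular tool:

```python
#!/usr/bin/env python3
"""Minimal commit-msg hook sketch (the issue-tag rule is a hypothetical example)."""
import re
import sys

def check_message(msg: str) -> bool:
    """Accept messages that start with an issue tag like 'PROJ-123: ...'."""
    return re.match(r"^[A-Z]+-\d+: .+", msg) is not None

if __name__ == "__main__" and len(sys.argv) > 1:
    # Git passes the path of the file containing the commit message.
    with open(sys.argv[1]) as f:
        if not check_message(f.read().strip()):
            sys.stderr.write("commit message must start with an issue tag, e.g. PROJ-123: fix build\n")
            sys.exit(1)
```

Dropped into `.git/hooks/commit-msg`, a script like this lets each distributed team enforce its own conventions locally, without a central gatekeeper.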
- Dependencies … of course, if the teams are not co-located, your build process should be able to generate your application partially and download up-to-date versions (or the versions you want) of the required libraries. Shared directories, and that kind of stupid stuff, are difficult to sync across the world. The Java community has understood that, with artifact repositories you can set up inside your enterprise. Indeed, some languages provide similar functionality (Perl, PHP, Ruby, Python, …), but how difficult is it to set up a proxy/mirror locally (to keep the process under control)?
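The local-mirror idea above can be sketched in a few lines. The repository contents and URLs here are hypothetical; the point is only the lookup order, local mirror first, upstream second:

```python
"""Sketch of a dependency resolver that prefers a local mirror (all names hypothetical)."""

# Hypothetical repositories: artifact name -> version -> download URL.
LOCAL_MIRROR = {"libfoo": {"1.2": "http://mirror.example.local/libfoo-1.2.jar"}}
UPSTREAM = {
    "libfoo": {"1.2": "http://repo.example.com/libfoo-1.2.jar",
               "1.3": "http://repo.example.com/libfoo-1.3.jar"},
}

def resolve(name: str, version: str) -> str:
    """Return the URL to fetch, trying the local mirror before upstream."""
    for repo in (LOCAL_MIRROR, UPSTREAM):
        url = repo.get(name, {}).get(version)
        if url:
            return url
    raise LookupError(f"{name} {version} not found in any repository")
```

This is exactly what an enterprise artifact repository does for you: the version you already mirrored is served locally, and only the missing ones go out to the world.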
- The build process should allow dynamic download of dependencies. Will European, US, and Indian teams download dependencies from the same location? Of course not! Imagine in Visual Studio the ability to define a binaries repository for all your projects, able to handle binaries not generated by Visual Studio, or artifacts not generated by VS ….
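Picking the closest binaries repository per build site, as described above, amounts to a simple lookup with a fallback. The site codes and URLs are made up for illustration:

```python
"""Sketch: pick the closest binaries repository per build site (hypothetical URLs)."""

REPOSITORIES = {
    "eu": "http://repo-eu.example.com/binaries",
    "us": "http://repo-us.example.com/binaries",
    "in": "http://repo-in.example.com/binaries",
}
DEFAULT_REPOSITORY = "http://repo.example.com/binaries"

def repository_for(site: str) -> str:
    """Return the repository URL for a build site, falling back to the default."""
    return REPOSITORIES.get(site, DEFAULT_REPOSITORY)
```

The build script stays identical everywhere; only one setting (the site) changes per location.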
- The programming languages are more and more diverse. Once upon a time, a project was C or C++, Borland, or even Java. Now some parts are in Java, some in PHP, some in Ruby, …. Even if you have only one programming language (C# as an example), you may have to handle different versions in your application (.NET 1.0 up to .NET 4.0, as an example). On a large project, it may not make sense to keep all your libraries at the latest level of your programming language. So it’s common today to have at least 3 programming languages that you want to test on at least 5 major OSes (and if you want to test 32/64-bit, you double that number).
- Your clients become more and more heterogeneous, and so your test platform is more and more complex. If you write an application, you have to test it on at least 5 major OSes, 5 major languages, and 5 major web browsers: more than 125 combinations. Of course, even if you minimize and combine some tests, you still have at least 5 test platforms.
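The matrix arithmetic in the two points above is easy to check. The concrete OS, language, and browser names are only placeholders; what matters is the multiplication:

```python
"""Back-of-the-envelope check of the test-matrix explosion (example names only)."""
from itertools import product

oses = ["Windows", "Linux", "macOS", "Solaris", "FreeBSD"]   # 5 example OSes
languages = ["Java", "PHP", "Ruby", "C#", "Python"]          # 5 example languages
browsers = ["IE", "Firefox", "Chrome", "Safari", "Opera"]    # 5 example browsers

matrix = list(product(oses, languages, browsers))
full_count = len(matrix)          # 5 * 5 * 5 = 125 combinations
with_arch = full_count * 2        # 250 once you double for 32/64-bit
```

No single build server can cover 125-plus combinations; a farm is the only way out.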
- The two previous points push continuous integration frameworks to scale easily and to distribute builds on a farm of servers, and that at the job level. Indeed, your libraries and applications don’t all have the same constraints.
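Job-level distribution, as described above, boils down to matching each job's constraints against what each farm node offers. The node names and capability tags here are invented for the sketch:

```python
"""Sketch: dispatch build jobs to farm nodes whose capabilities match (hypothetical model)."""

# Each node advertises what it can build; each job declares what it needs.
NODES = {
    "node-linux": {"linux", "java", "ruby"},
    "node-win":   {"windows", "dotnet"},
}

def dispatch(job_requirements: set) -> str:
    """Return the first node satisfying all of a job's requirements."""
    for name, capabilities in NODES.items():
        if job_requirements <= capabilities:  # subset test: every need is covered
            return name
    raise RuntimeError(f"no node can run {job_requirements}")
```

A real CI master also balances load and queues jobs, but the core idea is this capability match, per job, not per project.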
- Static code analysis and syntax checkers. As your teams are distributed and don’t have the same knowledge, all your programmers cannot follow the same rules at the same time. So everything should be configurable per library or per project.
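Per-project configurability usually means a global baseline that each project can override. The rule names and thresholds below are hypothetical; the merge pattern is the point:

```python
"""Sketch: per-project static-analysis rules on top of a global baseline (hypothetical rules)."""

GLOBAL_RULES = {"max_line_length": 120, "forbid_tabs": True, "max_complexity": 15}

def rules_for(project_overrides: dict) -> dict:
    """Merge a project's overrides on top of the global baseline."""
    return {**GLOBAL_RULES, **project_overrides}

legacy_c = rules_for({"max_complexity": 40})  # a legacy C library gets some slack
new_ruby = rules_for({})                      # a brand-new project keeps the defaults
```

Each team tightens or loosens its own rules without touching anyone else's build.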
- Reporting tools: as all this stuff is distributed and asynchronous, reporting tools should take that into account. Does it make sense to compare the code coverage of a new project in Ruby with a legacy one in C? Does it make sense to compare an experienced team in Java with a team of beginners?
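One way to avoid the unfair cross-project comparisons above is to report each project against its own history, only the trend, not the absolute number. A tiny sketch of that idea, with made-up coverage figures:

```python
"""Sketch: report coverage trend per project instead of comparing projects (figures invented)."""

def coverage_trend(history: list) -> float:
    """Return the change between the last two coverage measurements."""
    if len(history) < 2:
        return 0.0
    return round(history[-1] - history[-2], 2)

young_ruby = coverage_trend([40.0, 45.5])   # improving: that's the useful signal
legacy_c = coverage_trend([80.0])           # one data point: no trend yet
```

A +5 point trend on a young Ruby project and a stable 80% on legacy C are both good news; comparing 45.5% to 80% directly is not.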
So, if your build master takes all these elements into account instead of using centralized tools, you may end up with a distributed platform, allowing teams to work at their own speed, with their own constraints, their own knowledge, ….