Thursday, 5 September 2013

Strive for continuous improvement, instead of perfection.


Kim Collins was perhaps thinking more about physical improvement, but his advice holds well for software.

A lot has been written about the problems around software engineers wanting to rewrite a codebase because of "legacy" issues. Experience has taught me that refactoring is generally a better solution than rewriting, because you always have something that works and can be released if necessary.

Although that observation applies to a project as a whole, sometimes the technical debt in component modules means they need reimplementing. Within NetSurf we have historically had problems when such a change was made, because of the large number of supported platforms and configurations.

History

A year ago I implemented a Continuous Integration (CI) solution for NetSurf which, combined with our switch to Git for revision control, has transformed our process, making several important refactors and rewrites possible while letting us remain confident about overall project stability.

I know it has been a year because the VPS hosting bill from Mythic turned up, and we remain a very happy customer. We have taken the opportunity to extend the infrastructure with additional build systems, which is still within the NetSurf project's means.

Over the last twelve months the CI system has attempted over 100,000 builds, including the project's libraries and browser. Each commit triggers a build attempt for eight platforms, in multiple configurations with multiple compilers. Because of this, the response time to a commit is dependent on the slowest build slave (the Mac mini OS X Leopard system).

Currently this means a browser build, not including the support libraries, completes in around 450 seconds. The eleven support libraries each take between 30 and 330 seconds. This gives a reasonable response time for most operations. The worst case involves changing the core buildsystem, which causes everything to be rebuilt from scratch, taking some 40 minutes.

The CI system has gained capability since it was first set up; there are now jobs that:
  • Perform and publish the results of a static analysis of each component using the llvm/clang project's scan-build tool (sketched below).
  • Run coverage reports on modules which support gcov.
  • Build and install full toolchains for the cross-compiled target platforms.
  • Run components' automated tests on native platforms.
  • Generate release sources and packages on a correctly tagged git commit.
  • Generate and publish the automated Doxygen documentation.
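
As a rough illustration, the static analysis and coverage jobs reduce to shell steps along these lines (the output path, make targets and flags here are illustrative sketches, not our exact job definitions):

    #!/bin/sh
    # Hypothetical sketch of the analysis steps such a job might run;
    # the report directory and make targets are placeholders.

    # Static analysis: wrap the normal build in clang's scan-build and
    # keep the HTML report somewhere the web server can publish it.
    scan-build -o /srv/ci/scan-reports make clean all

    # Coverage: rebuild with gcov instrumentation, run the test suite,
    # then collect per-file coverage data.
    make clean
    make CFLAGS="--coverage" LDFLAGS="--coverage" test
    gcov src/*.c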

Downsides

It has not all been positive though: the administration required to keep the builds running has been more than expected, and it has highlighted just how costly supporting all our platforms is. When I say costly I do not just mean the physical requirements of providing build slaves but, more importantly, the time required.

Some examples include:
  • Procuring the hardware, installing the operating system and configuring the build environment for the OS X slaves.
  • Getting the toolchains built and installed for cross compilation.
  • Dealing with software upgrades and updates on the systems.
  • Solving version issues between interacting parts; especially limiting is the lack of Java 1.6 on PPC OS X, which prevents Jenkins updates.

This administration is not interesting to me and consumes time which could otherwise be spent improving the browser. However, the benefits of having the system are considered by the development team to outweigh the disadvantages.

The highlighting of the costs of supporting so many platforms has led us to re-evaluate their future viability. The PPC Mac OS X port is certainly in the gravest danger of being imminently dropped; when the build slave's drive failed, the port was only saved because it turned out to have actual users.

There is also the question of the BeOS platform, which we are currently unable to build with the CI system at all: it cannot be targeted for cross compilation, and cannot run a sufficiently complete Java implementation to host a Jenkins slave.

An unexpected side effect of publishing every CI build has been that many non-developer users are directly downloading and using these builds. In some cases we get messages to the mailing list about a specific build while the rest of the job is still ongoing.

Despite the prominent warning on the download area and clear explanation on the mailing lists, we still get complaints and opinions about what we should be "allowing" in terms of stability and features in these builds. For anyone else considering allowing general access to CI builds, I would recommend a very clear statement of intent and a prepared policy for when users ignore the statement.

Tools

Using Jenkins has also been a learning experience. It is generally great, but there are some issues I have which, while not insurmountable, are troubling:
Configuration and history cannot easily be stored in a revision control system.
This means our system has to be restored from a backup in case of failure and I cannot simply redeploy it from scratch.
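
In practice this means taking regular snapshots of the Jenkins home directory. A minimal sketch, assuming a standard JENKINS_HOME layout (the paths are invented for illustration):

    #!/bin/sh
    # Hypothetical backup of the Jenkins state directory. Excluding
    # workspaces keeps the archive small while still preserving the
    # job configuration and build history.
    tar --exclude='jobs/*/workspace*' \
        -czf /srv/backup/jenkins-$(date +%Y%m%d).tar.gz \
        -C /var/lib/jenkins .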

Job filtering, especially for matrix jobs with many combinations, is unnecessarily complicated.
This requires the use of a single-line text "combination filter", a Groovy expression limiting which combinations are built. An interface allowing the user to select graphically from a grid, similar to the output tables showing success, would be preferable. Such a tool could even generate the textual combination filter, if that's easier.

This is especially problematic for the main browser job, which has axes for label (the platforms that can compile the build), JavaScript enablement, compiler and frontend (the windowing toolkit, if you prefer; e.g. the linux label can build both the gtk and framebuffer frontends). The filter for this job is several kilobytes of text which, due to the first issue, has to be cut and pasted by hand.
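
For a flavour of the problem, here is a tiny invented fragment in the filter's expression syntax (the axis names match the job's axes but the values are made up for illustration; the real filter enumerates far more combinations):

    // Skip a compiler that is unavailable on one slave, and only
    // build JavaScript-enabled configurations for selected frontends.
    !(label == "ppc-osx" && compiler == "clang") &&
    (javascript == "no" || frontend != "framebuffer")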

Handling of simple makefile-based projects is rudimentary.
This has been worked around mainly by creating shell scripts to perform the builds, as sketched below. These scripts are checked into the repositories so they are readily modified. Initially we had the build steps as text in each job, but that quickly became unmanageable.
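
A minimal sketch of such a wrapper, assuming the job passes the matrix axis values in as environment variables (the variable names and make invocation are illustrative, not our actual scripts):

    #!/bin/sh
    # Hypothetical build wrapper kept in the repository, so it can be
    # changed without touching the Jenkins job configuration.
    set -e

    # Axis values supplied by the CI job; defaults for local testing.
    TARGET=${TARGET:-gtk}
    CC=${CC:-gcc}

    make clean
    make TARGET="${TARGET}" CC="${CC}"
    make TARGET="${TARGET}" CC="${CC}" test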

Output parsing is limited.
Fortunately several plugins are available which mitigate this issue, but I cannot help feeling that they ought to be integrated by default.

Website output is not readily modifiable.
Instead of, say, providing a default CSS file which all generated content uses for its styling, someone with knowledge of Java must write a plugin to change any part of the look and feel of the Jenkins tool. I understand this helps all Jenkins instances look like the same program, but it means integrating Jenkins into the rest of our project's web site is not straightforward.

Conclusion

In conclusion, I think a CI system is an invaluable tool for almost any non-trivial software project, but the implementation costs with current tools should not be underestimated.

Comments

Could you clarify your requirements for storing Jenkins configuration in an RCS? I assume you're aware of the SCM Sync plugin and it's not what you want?
