AI-generated (Stable Diffusion) image of "cyclone writing with a pen".

The sporadic blog of David J A Cooper. I write sci-fi, teach software engineering, and occasionally say related (or not related) things.

Check out Nova Sapiens: The Believers.

Open source science

Slashdot notes an article from the Guardian: “If you’re going to do good science, release the computer code too”. The author, Darrel Ince, is a Professor of Computing at The Open University. You might recognise something of the mayhem that is the climate change debate in the title.

Both the public release of scientific software and the defect content thereof are worthwhile topics for discussion. Unfortunately, Ince seems to go for over-the-top rhetoric without having a great deal of evidence to support his position.

For instance, Ince cites an article by Professor Les Hatton (whom I also cite, on account of his recent study on software inspection checklists). Hatton’s article here was on defects in scientific software. The unwary reader might get the impression that Hatton was specifically targeting recent climate modelling software, since that’s the theme of Ince’s article. However, Hatton discusses studies conducted between 1990 and 1994, in different scientific disciplines. The results might still be applicable, but it’s odd that Ince would choose to cite such an old article as his only source. There are much newer and more relevant papers; for instance:

S. M. Easterbrook and T. C. Johns (2009), Engineering the Software for Understanding Climate Change, Computing in Science and Engineering.

I stumbled across this article within ten minutes of searching. While Hatton takes a broad sample of software from across disciplines, Easterbrook and Johns delve into the processes employed specifically in the development of climate modelling software. Hatton reports defect densities of around 8 or 12 per KLOC (thousand lines of code), while Easterbrook and Johns suggest 0.03 defects per KLOC for the current version of the climate modelling software under analysis. Quite a difference – more than two orders of magnitude, for those counting.
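For those counting along, the arithmetic is easy enough to check. A quick sketch (the two densities are the figures quoted above; everything else is just division):

```python
import math

# Defect densities (defects per KLOC) as quoted above from the two papers.
hatton = 8.0        # lower end of Hatton's reported 8-12 per KLOC
easterbrook = 0.03  # Easterbrook & Johns, current climate model version

ratio = hatton / easterbrook
print(f"ratio: {ratio:.0f}x")                           # ~267x
print(f"orders of magnitude: {math.log10(ratio):.1f}")  # ~2.4
```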

Based on Hatton’s findings on defects in scientific software, Ince says:

This is hugely worrying when you realise that just one error — just one — will usually invalidate a computer program.

This is a profoundly strange thing for a Professor of Computing to say. It’s certainly true that one single error can invalidate a computer program, but whether it usually does this is not so obvious. There is no theory to support this proclamation, nor any empirical study (at least, none cited). Non-scientific programs are littered with bugs, and yet they are not useless. Easterbrook and Johns report that many defects, before being fixed, had been “treated as acceptable model imperfections in previous releases”, clearly not the sort of defects that would invalidate the model. After all, models never correspond perfectly to empirical observations anyway, especially in such complex systems as climate.
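To make that concrete, here’s a contrived sketch of my own (not from either paper) of a defect that biases a numerical result without invalidating it: a trapezoidal integrator that mistakenly drops its final panel. The error it introduces is real and measurable, but tiny next to the uncertainties a physical model already carries.

```python
import math

def integrate(f, a, b, n=10_000, drop_last_panel=False):
    """Trapezoidal rule; drop_last_panel simulates an off-by-one defect."""
    h = (b - a) / n
    panels = n - 1 if drop_last_panel else n  # the "bug": one panel short
    total = 0.0
    for i in range(panels):
        x0, x1 = a + i * h, a + (i + 1) * h
        total += 0.5 * (f(x0) + f(x1)) * h
    return total

exact = 2.0  # the integral of sin(x) over [0, pi]
correct = integrate(math.sin, 0.0, math.pi)
buggy = integrate(math.sin, 0.0, math.pi, drop_last_panel=True)

print(abs(correct - exact))  # ~2e-8 (ordinary discretization error)
print(abs(buggy - exact))    # ~7e-8: worse, but hardly an invalidated result
```

The defective version is wrong in a strict sense, yet both answers sit far inside the error bars any real model would quote. That is exactly the category of defect Easterbrook and Johns describe as “acceptable model imperfections”.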

Ince claims, as a running theme, that:

Many climate scientists have refused to publish their computer programs.

His only example of this is Mann, who by Ince’s own admission did eventually release his code. The climate modelling software examined by Easterbrook and Johns is available under licence to other researchers, and RealClimate lists several more publicly available climate modelling programs. I am left wondering what Ince is actually complaining about.

Finally, Ince seems to have a rather brutal view of what constitutes acceptable scientific behaviour:

So, if you are publishing research articles that use computer programs, if you want to claim that you are engaging in science, the programs are in your possession and you will not release them then I would not regard you as a scientist; I would also regard any papers based on the software as null and void.

This is quite a militant position, and does not sound like a scientist speaking. If Ince himself is to be believed (in that published climate research is often based on unreleased code), then the reviewers who recommended those papers for publication clearly didn’t think as Ince does – that the code must be released.

Ince may be convinced that scientific software must be publicly auditable. However, scientific validity ultimately derives from methodological rigour and the reproducibility of results, not from the availability of source code. Releasing the code may be a good idea, but it is not necessary to ensure confidence in the science. Other independent researchers should be able to confirm or contradict your results without your source code, because you should have explained all the important details in your published papers. (In the event that your results are not reproducible due to a software defect, releasing the source code may help to pinpoint the problem, but that’s after the problem has been noticed.)

There was a time before computing power was widely available, when model calculations were evaluated manually. How on Earth did science cope back then, when there was no software to release?



2 responses to “Open source science”

  1. […] Update: The Guardian never published my letter, but I did find a few other rebuttals to Ince’s article in various blogs. Davec’s is my favourite! […]

  2. Steve Easterbrook

    Dave,
    Thanks for writing that – it’s a very nice rebuttal of Ince’s letter. I wrote to the Guardian in response to the Ince piece, but they never published my letter. It’s here:
    http://www.easterbrook.ca/steve/?p=1388

    I like the tone of your rebuttal better!