
Why you can ignore reviews of scientific code by commercial software developers

tl;dr: Many scientists write code that is crappy stylistically, but which is nevertheless scientifically correct (following rigorous checking/validation of outputs etc). Professional commercial software developers are well-qualified to review code style, but most don’t have a clue about checking scientific validity or what counts as good scientific practice. Criticisms of the Imperial Covid-Sim model from some of the latter are overstated at best.

Update (2020-06-02): The CODECHECK project has independently reproduced the results of one of the key reports (“Report 9”) that was based on the Imperial code, addressing some of the objections raised in the spurious “reviews” that are the subject of this article.

I’ve been watching with increasing horror as the most credible providers of scientific evidence and advice surrounding the Coronavirus outbreak have come under attack by various politically-motivated parties — ranging from the UK’s famously partisan newspapers and their allied politicians, to curious “grassroots” organisations that have sprung up overnight, to armies of numerically-handled Twitter accounts. While there will surely be cause for a sturdy review of the UK’s SAGE (Scientific Advisory Group for Emergencies) system at some point very soon, it seems clear that a number of misinformation campaigns are in full flow that are trying to undermine and discredit this important source of independent, public-interest scientific advice in order to advance particular causes. Needless to say, this could be very dangerous — discrediting the very people we most need to listen to could produce very dire results.

The strategies being used to undermine SAGE advisers will be familiar to anyone who has worked in fields related to climate change or vaccination in recent decades. I will focus on one in particular here — the use of “experts” in other fields to cast doubt on the soundness of the actual experts in the field itself. In particular, this is an attempt to explain what’s so problematic about articles like this one, which are being used as ammunition for disingenuous political pieces like this one [paywall]. Both articles are clearly written with a particular political viewpoint in mind, but they have a germ of credibility in that the critique is supposedly coming from an expert in something that seems relevant. In this case, the (anonymous) expert claims to be a professional software developer with 30 years’ experience, including working at a well-regarded software company. They are credentialled, so they must be credible, right? Who better to review the Imperial group’s epidemiology code than a software developer?

Most of what I’m going to say is a less succinct restatement of an old article by John D. Cook, a maths/statistics/computing consultant. Cook explains how scientists use their code as an “exoskeleton” — a constantly-evolving tool to help themselves (and perhaps a small group around them) answer particular questions — rather than as an engineered “product” intended to solve a pre-specified problem for a separate set of users who will likely never see or modify the code themselves. While both scientists and software developers may write code for a living — perhaps even using the same programming language and similar development tools — that doesn’t mean they are trying to achieve similar aims with their code. A software developer will care more about maintainability and end-user experience than a scientific coder, who will likely prize flexibility and control instead. Importantly, this means that programming patterns and norms that work for one may not work for the other — exhortations to keep code simple, to remove cruft, to limit the number of parameters and settings, might actually interfere with the intended applications of a scientific code for example.

Software development heuristics that don’t apply to scientific code

The key flaw in the “Lockdown Sceptics” article is that it judges the code by software engineering heuristics that simply don’t apply here. As someone who has spent a good fraction of my scientific career writing and working with modelling codes of different types, let me first try to set out the desirable properties of a high-quality scientific code. For most modelling applications, these will be:

  • Scientific correctness: A mathematically and logically correct representation of the model(s) that are being studied, as well as a correct handling and interpretation of any input data. This is distinct from “correctly fitting observations” — while finding a best-fit model to some data might be one aim of a code, the ability to explore counterfactuals is also important. A code may implement a model that is a terrible fit to the data, but can still be “high quality” in the sense that it correctly implements a counterfactual model.
  • Flexibility: The ability to add, adjust, turn on/off different effects, try different assumptions etc. These codes are normally exploratory, and will be used for studying a number of different questions over time, including many that will only arise long after the initial development. Large numbers of parameters and copious if statements are the norm.
  • Performance: Sufficient speed and/or precision to allow the scientist to answer questions satisfactorily. Repeatability and numerical precision fall under this category, as well as raw computational performance. (It is also common for scientific codes to have settings that allow the user to trade off speed vs accuracy, depending on the application.)

Note that I have in mind the kinds of codes used by specialist groups that are usually seeking to model certain classes of phenomena. There are other types of scientific code intended for more general use with goals that hew closer to what software engineers are generally trying to achieve. The Imperial code does not fall into this second category however.

What things are missing from this list that would be high priority for a professional software developer? Here are a few:

  • Maintainability: Most scientific codes aren’t developed with future maintainers in mind. As per John Cook, they are more likely to be developed as “exoskeletons” by and for a particular scientist, growing organically over time as new questions come up. Maintainability is a nice-to-have, especially if others will use the code, but it has little bearing on the code’s scientific quality. Some scientifically valuable codes are really annoying to modify!
  • Documentation: Providing code and end-user documentation is a good practice, but it’s not essential for scientific codes. Different fields have different norms surrounding whether code is open sourced on publication or simply available on request, for example, and plenty of scientists do a bad job of including comments or writing nice-to-read code. This is because the code is rarely the end product in itself — it is normally just a means to run some particular mathematical model that is then presented (and defended and dissected) in a journal article. The methods, assumptions, and consistency and accuracy of the results are what matter — the code itself can be an ugly mess as long as it’s scientifically correct in this sense.
  • User-proofing/error checking: To a software developer, a well-engineered code shouldn’t require any knowledge of internal implementation details on the part of the end user. The code should check user inputs for validity, and, to the greatest extent possible, prevent them from doing things that are wrong, or invalid, or that will produce nonsense results, for the widest possible range of inputs. Some level of error-checking is nice to have in scientific codes too, but in many cases the code is presented “as-is” — users are expected to determine for themselves what correct and valid inputs are, through an understanding of the internals and the scientific principles behind them. In fact, a code may even, intentionally, produce an output that is known to be “wrong” in some ways and “right” in others — e.g. the amplitude of a curve is wrong, but its shape is right. In essence, the user is assumed to understand all of the (known/intended) limitations of the code and its outputs. This will generally be the case if you run the code yourself and are an expert in your particular field.
  • Formal testing: Software developers know the value of a test suite: Write unit tests for everything; throw lots of invalid inputs at the code to check it doesn’t fall over; use continuous integration or similar to routinely test for regressions. This is good practice that can often catch bugs. Setting up such infrastructure is still not the norm in scientific code development, however. So how do scientists deal with regressions and so on? The answer is that most use ad hoc methods to check for issues. When a new result first comes out of the code, we tend to study it to death. Does it make sense? If I change this setting, does it respond as expected? Can it reproduce idealised/previous results? Does it agree with this alternative but equivalent approach? Again, this is a key part of the scientific process. We also output meaningful intermediate results of the code as a matter of course. Remember that we are generally dealing with quantities that correspond to something in the real world — you can make sure you aren’t propagating a negative number of deaths in the middle of your calculation, for example. While these checks could also be handled by unit tests, most scientists just end up with their own weird set of ad hoc test outputs and print statements. It’s ugly, and not infallible, but it tends to work well given the intensive nature of our result-testing behaviour and community cross-checking (a minimal sketch of this style of checking follows below).
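
To make the contrast concrete, here is a minimal sketch of the kind of ad hoc checking described above. It is written in Python around an entirely made-up toy model (nothing to do with the Imperial code; all names and numbers are hypothetical): re-run with a fixed seed to confirm repeatability, assert that intermediate quantities stay physically sensible, and nudge a parameter to check the output responds in the expected direction.

    # A toy stochastic SIR-style model, standing in for a real modelling code.
    import numpy as np

    def run_toy_epidemic(seed, n_days=100, beta=0.3, gamma=0.1, pop=1_000_000):
        rng = np.random.default_rng(seed)
        s, i, r = pop - 10, 10, 0
        daily_infections = []
        for _ in range(n_days):
            new_inf = rng.binomial(s, 1.0 - np.exp(-beta * i / pop))
            new_rec = rng.binomial(i, gamma)
            s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
            # "Meaningful intermediate outputs": nothing unphysical should propagate.
            assert min(s, i, r) >= 0, "negative compartment size -- something is wrong"
            daily_infections.append(new_inf)
        return np.array(daily_infections)

    # Repeatability: the same seed should give bit-identical output.
    assert np.array_equal(run_toy_epidemic(seed=42), run_toy_epidemic(seed=42))

    # "If I change this setting, does it respond as expected?" A higher
    # transmission rate should not produce a smaller epidemic.
    assert run_toy_epidemic(seed=1, beta=0.6).sum() >= run_toy_epidemic(seed=1, beta=0.3).sum()
    print("ad hoc checks passed")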

These last four points will horrify most software developers (and I know quite a few — I was active in the FOSS movement for a solid decade; buy my book etc etc). Skipping these things is terrible practice if you’re developing software for end-users. But for scientific software, it’s not so important. If you have users other than yourself, they will figure things out after a while (a favourite project for starting grad students!) or email you to ask. If you put in invalid inputs, your testing and other types of scientific examination of the results will generally uncover the error. And, really, who cares if your code is ugly and messy? As long as you are doing the right things to properly check the scientific results before publishing them, it doesn’t matter if you wrote it in bloody Perl with Russian comments — the quality of the scientific results is what matters, not the quality of the code itself. This is well understood throughout the scientific community.

In summary, most scientific modelling codes are expected to be used by user-developers with extensive internal knowledge of the code, the model, and the assumptions behind it, who routinely perform a wide variety of checks for correctness before doing anything with the results. In the right hands, you can have a lot of confidence that sensible, rigorous results are being obtained; however, these codes are not intended for non-expert users.

Specific misunderstandings in the “Lockdown Sceptics” article

I will caveat this section with the fact that I am an astrophysicist and not an epidemiologist, so I can’t critique the model assumptions, or even really the extent to which the model has been implemented well in the Imperial code. I can explain where I think the Lockdown Sceptics article has missed the point of this kind of code though.

Non-deterministic outputs: This is the most important one, as it could, in particular circumstances, be a valid criticism. The model implemented by this code is a stochastic model, and so is expected to produce outputs with some level of randomness (it is exploring a particular realisation of some probability distribution; running it many times will allow us to reconstruct that distribution, a method called Monte Carlo). Computers deal in “pseudo-randomness” though; given the same starting “seed”, they will produce the same random-looking sequence. A review by a competing group in Edinburgh found a bug that caused the code to produce different results for the same seed, which is generally not what you’d want to happen. As you can see at that link, a developer of the Imperial code acknowledged the bug and gave some explanation of its impact.

The key question here is whether the bug could have caused materially incorrect results in published papers or advice. Based on the response of the developer, I would expect not. They are clearly aware of similar types of behaviour happening before, which implies that they have run the code in ways that could pick up this kind of behaviour (i.e. they are running some reproducibility tests — standard scientific practice). The bug is not unknown to them. A particular workaround here appears to be re-running the model many times with different seeds, which is what you’d do with this code anyway, or using different settings that don’t seem to suffer from this bug. My guess is that the “false stochasticity” caused by this bug is simply inconsequential, or that it doesn’t occur with the way they normally run the code. They aren’t worried about it — not because this is a disaster they are trying to cover up, but because this is a routine bug that doesn’t really affect anything important.
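
To illustrate what is going on here, the following is a minimal sketch (in Python, with a made-up one-line model standing in for the real thing) of seeded pseudo-randomness and the “many seeds” workflow: the same seed reproduces the same numbers exactly, and the quantities that actually get reported come from the distribution over an ensemble of seeds, so any single odd realisation carries little weight.

    import numpy as np

    def toy_projection(seed, n_draws=1000):
        # Hypothetical stand-in for one stochastic model run; returns one summary number.
        rng = np.random.default_rng(seed)
        return rng.lognormal(mean=10.0, sigma=0.5, size=n_draws).sum()

    # Determinism: the same seed reproduces the same output exactly.
    assert toy_projection(seed=123) == toy_projection(seed=123)

    # Monte Carlo ensemble: run with many different seeds and quote the spread,
    # which is how a stochastic code like this is normally used anyway.
    results = np.array([toy_projection(seed=s) for s in range(200)])
    print(f"median = {np.median(results):.4g}, "
          f"95% interval = ({np.quantile(results, 0.025):.4g}, {np.quantile(results, 0.975):.4g})")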

Again, this is bread and butter for scientific programming. They have seen the issue before, and so are aware of this limitation of the code. Ideally they would have fixed the bug, yes, but with this sort of code we’re not normally trying to reach a state of near-perfection ready for a point release or some such, as with commercial software. Instead, the code is being used in a constantly evolving state. So perhaps, since they are aware of it, fixing it just isn’t a very high priority given how they are using the code. Indeed, why would they knowingly run the code in a way that triggers the bug and invalidates their results? From their behaviour (and the behaviour of the reporter from Edinburgh) alone, it’s pretty clear this is not a major, result-invalidating bug.

Undocumented equations: See above regarding the approach to documentation. It would definitely be much more user-friendly to document the equations, but does it mean that the code is bad? No. For all we know, there is a scruffy old LaTeX note explaining the equations, or they are in one of the early papers (either is common). This is totally normal — ugly, and not helpful for the non-expert trying to make sense of the code, but not an indicator of poor code quality.

Continuing development: As per the above, scientific codes of this kind generally evolve as they need to, rather than aiming for a particular release date or set of features. Continuing development is the norm, and things like bugfixes are applied as and when they crop up. Serious issues that affect previously published results would normally prompt an erratum (e.g. see this one of mine); some scientists are less good about issuing errata (or corrective follow-up papers) than others, especially for more minor issues, although covering up a really serious issue would be a career-ending ethical violation for most. As I hope I’m making clear from the above, the article’s charge of serious “quality problems” isn’t actually borne out though; the group are just (harmlessly!) violating norms that the reviewer is used to from a completely different field.

Some other misunderstandings from that article:

  • “the original program was ‘a single 15,000 line file that had been worked on for a decade’ (this is considered extremely poor practice)” — Not to a scientist it’s not! If anything, the fact that the group have been plugging away at this code for a decade, with an increasing number of collaborators, confronting it with more and more peer reviewers, and withstanding more and more comparisons from other groups, gives me more confidence in it. It certainly improves the chances that substantial bugs would have been found and resolved over time, or structural flaws noticed. Young, unproven codes are the dangerous ones! And while the large mono-file structure will surely be annoying to work with (and so is poor in that sense), it has no bearing on the actual scientific correctness of the code.
  • “A request for the original code was made 8 days ago but ignored, and it will probably take some kind of legal compulsion to make them release it. Clearly, Imperial are too embarrassed by the state of it ever to release it of their own free will, which is unacceptable given that it was paid for by the taxpayer and belongs to them.” — This is a tell-tale sign that this person isn’t a scientist. First, the motto of most academics is “Apologies for the late reply”! Waiting 8 days for a reply to a potentially complicated and labour-intensive request is nothing, especially as the group is obviously busy with more urgent matters. Second, there’s no saying that the taxpayer paid for most of the code (it could be funded by a charitable foundation like the Wellcome Trust for example), and the code will likely remain the IP of the author, but with Imperial retaining a perpetual license to it. Instead, the obligation for openness here comes from the publications that use the code. Most journals require that authors make code and data used to produce results in particular journal articles available on request. Note that some scientists are cagey about releasing their code fully publicly because they worry about competitors co-opting it (and not without good reason). I personally have made all of my scientific code available by default however, and it’s good that this group are making theirs fully public too. It’s the right thing to do in this scenario (and we should also recognise previously opened codes, such as the one from the LSHTM group also used by SAGE).
  • “What it’s doing is best described as ‘SimCity without the graphics'” — Fantastic! The original SimCity used a tremendously sophisticated model for what it did, and has even been used in teaching town planners (I remember reading the manual in the 90s).
  • “The people in the Imperial team would quickly do a lot better if placed in the context of a well run software company….the difference between ICL and the software industry is the latter has processes to detect and prevent mistakes” — Now really, this is teaching grandma to suck eggs. Remember that much of modern programming emerged from academic science. This is not to say that your average scientific programmer couldn’t stand to learn some cleaner coding practices, but to accuse scientists of not having processes to detect and prevent mistakes — ludicrous! The bedrock of the scientific process is in validation and self-correction of results, and we have plenty of highly effective tools in our arsenal to handle that, thank you very much. Now, I have found unit testing, continuous integration etc. to be useful in some of my more infrastructural projects, but they are conveniences rather than necessities. Practically every scientist I know spends most of their time checking their results rather than coding, and I sincerely doubt that the Imperial group is any different. If anything, in most fields there is a culture of “conspicuous correctness” — finding mistakes in the work of others is a highly prized activity (especially if they are direct competitors).
  • “Models that consume their own outputs as inputs is problem well known to the private sector – it can lead to rapid divergence and incorrect predictions” — This is a highly simplistic way of looking at things, and I suspect the author doesn’t know much about this kind of method. I don’t know specifically what the Imperial folks are doing here, but there are important classes of methods that use feedback loops of this kind, called iterative methods, which are common in solving complicated systems of coupled equations and are mathematically highly rigorous (see the sketch just after this list). I have used them in a statistical modelling context on occasion. Ferguson and co are from highly numerate backgrounds, and so I think it’s safe to assume they’re not missing obvious problems of this kind.
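
For anyone unfamiliar with the idea, here is a minimal, generic sketch of an iterative method (textbook Jacobi iteration, written in Python; it is a standard illustration and nothing to do with the Imperial code specifically): the output of each step is fed straight back in as the input to the next, and for a suitable problem this feedback loop converges rather than diverging.

    import numpy as np

    # Solve A x = b by Jacobi iteration: x_new = D^-1 (b - R x_old), with D the
    # diagonal of A and R = A - D. A is strictly diagonally dominant, so this converges.
    A = np.array([[4.0, 1.0, 0.0],
                  [1.0, 5.0, 2.0],
                  [0.0, 2.0, 6.0]])
    b = np.array([1.0, 2.0, 3.0])

    D = np.diag(np.diag(A))
    R = A - D
    x = np.zeros_like(b)                        # initial guess
    for iteration in range(100):
        x_new = np.linalg.solve(D, b - R @ x)   # previous output fed back in as input
        converged = np.linalg.norm(x_new - x) < 1e-12
        x = x_new
        if converged:
            break

    print(f"converged after {iteration} iterations; residual = {np.linalg.norm(A @ x - b):.2e}")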

I could go on, but hopefully this is enough to establish my case — the author of that article is out of their depth, and clearly unaware of many of the basics of numerical modelling or the way this kind of science is done (with great historical success across many fields, might I add). In fact, they are so far out that they don’t even realise how silly this all sounds to someone with even a cursory knowledge of this kind of thing — it is an almost perfect study in the Dunning-Kruger effect. How they reached the conclusion that scientists must be so incompetent that “all academic epidemiology [should] be defunded” and that “This sort of work is best done by the insurance sector” is truly remarkable — it takes a breathtaking arrogance to overlook the possibility that, just maybe, the failure is in their own understanding.

The peculiarly implausible nature of the accusations

What I have discussed above is a (by no means complete) explanation of how many, if not most, scientific modellers approach their job. I have no special insight into the workings of the Imperial group; this is more an attempt to explain some of the sociology, attitude, and norms of quantitative scientific modelling. Professional software developers will hate some of these norms because they are bad end-user software engineering — but this doesn’t actually matter, since scientific correctness of a code typically owes little to the state of its engineering, and we have a very different notion of who the end user is compared to a company like Google. Instead, we have our own well-worn methods of checking that our codes — our exoskeletons — are scientifically correct in the most pertinent ways, backed up by decades of experience and rabid, competitive cross-checking of results. This, really, is all that matters in terms of scientific code quality, whether you’re publishing prospective theories of particle physics or informing the public health response to an unprecedented pandemic.

Let’s not lose sight of the bigger picture here though. The point of the Lockdown Sceptics article is to challenge the validity of the Imperial code by painting it as shoddy, and presumably, therefore, to undermine the basis of particular actions that may have taken note of SAGE advice. For this to actually be the case, though, the Imperial group must (a) have evaded detection for over 10 years by a global community of competing experts; (b) have been almost criminally negligent as scientists, ignoring easily-discovered but consequential bugs; (c) have been almost criminally arrogant in supposing that their unchecked/flawed model should be used to inform such big decisions; and (d) the entire scientific advisory establishment must have been taken for a ride without any thought to question what it was being told. Boiling this all down, the author is calling several hundred eminent scientists — in the UK and elsewhere — complete idiots, while they, through maybe half an hour of cursory inspection, have found the “true, flawed” nature of the code through a series of noddy issues.

This is clearly very silly, as even an ounce of introspection would have revealed to the author of that article had they started out with innocent motives. Their “code review” is no such thing however — instead, it is a blatant hatchet job, of the kind we have come to expect from climate change deniers and other anti-science types. The article author’s expertise in software development (which I will recognise, despite their anonymity) is of entirely the wrong type to actually, meaningfully, review this codebase, and is clearly misapplied. You may as well ask a Java UI programmer to review security bugs in the Linux kernel. To then rejoice in the fact that this has been picked up by an obviously lop-sided wing of the press and used to push a potentially harmful agenda, against the best scientific evidence we have, is chilling.


Press complaint: Daily Mail vs. BICEP2 commentators

In March of this year, immediately following the jubilation surrounding the BICEP2 results, the Daily Mail published a bizarre opinion piece on two scientists that were interviewed about the experiment on BBC’s Newsnight programme. The gist of the article was that the Beeb was cynically polishing its “political correctness” credentials by inviting the scientists to the programme, because they were both non-white and non-male. More details about the debacle can be found in this Guardian article.

Now, I’m not much of a Daily Mail fan at the best of times, but this struck me as particularly egregious; not only were their facts wrong and their tone borderline racist and sexist (in my opinion, at least), but they also seemed to be mistaking science for some sort of all-white, all-boys club that women and people of other ethnic groups have no right to involve themselves with. This is damaging to all of us in science, not just those who were personally attacked – so I complained.

I just received word back on my complaint, which was sent to the Press Complaints Commission in the UK, who have the job of (sort of) regulating the press. Their response is reproduced below in full; my allegation of factual inaccuracy was upheld, but they declined to act on the allegation of inappropriate racial/gender commentary because I wasn’t one of the parties being discussed.

Commission’s decision in the case of

A man [me] v Daily Mail

The complainant expressed concern about an article which he considered to have been inaccurate and discriminatory, in breach of Clauses 1 (Accuracy) and 12 (Discrimination) of the Editors’ Code of Practice. The article was a comment piece, in which the columnist had critically noted Newsnight’s selection of “two women….to comment on [a] report about (white, male) American scientists who’ve detected the origins of the universe”.

Under the terms of Clause 1 (i) of the Code, newspapers must take care not to publish inaccurate information, and under Clause 1 (ii) a significant inaccuracy or misleading statement must be corrected promptly, and with “due prominence”.

The newspaper explained that its columnist’s focus on gender and ethnicity was designed to be nothing more than a “cheeky reference” to the BBC’s alleged political correctness. In the columnist’s view, the selection of Dr Maggie Aderin-Pocock and Dr Hiranya Peiris to comment on the BICEP2 (Background Imaging of Cosmic Extragalactic Polarisation) study was another such example of this institutional approach.

The complainant, however, noted the BICEP2 team were, in fact, a diverse, multi-ethnic, multi-national group which included women, something which the newspaper accepted. Furthermore, he said that white, male scientists had been interviewed on Newsnight as well, which undermined the columnist’s claim that Dr Maggie Aderin-Pocock and Dr Hiranya Peiris had been specifically selected. The suggestion that the BICEP2 team were all white and male was a basic error of fact and one which appropriate checks could have helped to prevent. There had been a clear failure to take care not to publish inaccurate information, and a corresponding breach of Clause 1 (i) of the Code.

The newspaper took a number of measures to address the situation: the managing editor wrote to both Dr Aderin-Pocock and Dr Peiris; a letter criticising the columnist’s argument was published the following day; its columnist later explicitly noted both scientists’ expertise and competence to comment on the study; and a correction was published promptly in the newspaper’s Corrections & Clarifications column which acknowledged that the BICEP2 study was “conducted by a diverse team of astronomers from around the world”, and which “apologis[ed] for any suggestion to the contrary”. The latter measure was sufficient to meet the newspaper’s obligation under Clause 1 (ii) of the Code, to correct significantly misleading information.

The columnist’s suggestion that Dr Aderin-Pocock and Dr Peiris were specifically selected for the Newsnight programme because of “political correctness” was clearly presented as his own comment and conjecture which, under Clause 1 (iii) and the principle of freedom of expression, he was entitled to share with readers. There was, therefore, no breach of the Code in publishing that suggestion. However, the subsequent correction of the factual inaccuracy regarding the BICEP2 team and the acknowledgment of both experts’ expertise will have allowed readers to assess the suggestion in a new light.

Under Clause 12 (Discrimination) (ii) of the Code, “details of an individual’s race, colour, religion, sexual orientation, physical or mental illness or disability must be avoided unless genuinely relevant to the story”. The complainant’s concerns under this Clause were twofold; he believed that the references to the gender and ethnic background of both Dr Aderin-Pocock and Dr Peiris, and the BICEP2 team members, were irrelevant in a column about a scientific study. While the terms of Clause 12 (ii) do not cover irrelevant references to gender, the Commission would need to have received a complaint from a member, or members, of the BICEP2 team, or Dr Aderin-Pocock or Dr Peiris, in order to consider the complaint under this Clause. In the absence of any such complaints, the Commission could not comment further.


Patrick Moore

Sir Patrick Moore passed away today, aged 89. Through his many, many books, and TV programmes like The Sky at Night, he became Britain’s foremost populariser of astronomy. It’s safe to say he’s inspired several generations of scientists, showing us the poignant beauty of the heavens and incredible excitement of space exploration, all in his own, unique way.

I have a particularly fond memory of him. When I was quite young, probably around 8 years old, my dad and uncle took me to see Patrick give a lecture at the Victoria Hall in Stoke. I seem to remember that much of it was about Mars, and how astronomers past had mistakenly seen canals and other marks of civilisation on its surface. I still have a copy of a little red book of his, “Into Space!”, that we bought on the night, lying around somewhere at home. We also took my copy of Philip’s Atlas of the Universe (another of his), which he graciously signed for me after the lecture. I can’t remember what he said to me, other than that he’d sprained his wrist, so could only manage to scrawl his initials on the first page! I recall being slightly put out by this, for some reason – perhaps because it looked like someone had randomly scrawled on the book, thus defacing it. (Anyone who knows me will have some appreciation of how cardinal a sin the defacement of a book, however slight, is in my eyes.)

Meeting Patrick was one of a number of important events in my development that happened at around the same time. My dad bought me a little black Tasco refractor, and managed to get a stunning view of Jupiter out of it, one that I couldn’t reproduce myself for many years. A couple of years earlier, he’d woken me up in the middle of the night, literally carrying me out of bed to see a lunar eclipse. My parents had also been indulging me by buying science books, which I absolutely lapped up. One of these was Patrick’s Atlas of the Universe, the one he signed, which I often dipped into. In 1999, there was also a solar eclipse in Britain, only partial in Stoke but reaching totality in Cornwall, which I remember Patrick doing the commentary for.

These events, along with many others of a similar nature over the years, have shaped me both intellectually and personally. Being an astrophysicist is a big part of who I am, and I’m forever grateful to all of the people like Patrick who set me out on this path all those years ago. I only hope I can repay the debt by inspiring others myself.


NAM Jodcast interview

Hmmm, don’t think I’ve mentioned this on here before, but I was interviewed by Close Personal Friend (TM) Christina Smith for the Jodcast a few months back, while I was at NAM. Listen agape as I, ahem, masterfully discuss, erm, whatever it is my research is about.

And no, I refuse to believe that I sound like that in real life.