Doomed to Repeat It

This post originally appeared on the Software Carpentry website.

Those who cannot learn from history are doomed to repeat it. (Santayana)

I spent a day and a half last week at a workshop on computational science education. There were lots of smart people in attendance, all very passionate about the need to teach scientists how best to use the awesome power of a fully functional mothership—sorry, of modern hardware and software—but as the discussion went on, I grew more and more pessimistic. I'm old enough to remember the first wave of excitement about computational science in the 1980s; dozens of university-level programs were set up (mostly by physicists), and everyone involved was confident that the revolution would be unstoppable.

So there I was, twenty years later, hearing people say almost exactly the same things. "A third kind of science": check. "Need to weave it into the curriculum, rather than tack it on at the end": check. "Encourage interdisciplinarity, and break down the fusty old departmental walls": check. "Revise the tenure process, or even eliminate it": yup, heard that one too. What I didn't hear was, "Here's why the first wave failed, and here's what we're going to do differently."

There was some mention that we'll either have to drop stuff from existing core curricula to make room for computing, or introduce five- or six-year programs, but those suggestions were muted: as soon as you say it out loud, you realize how hard it's going to be, and how unlikely it is to happen. Many participants also fell into the trap of identifying computational science with high-performance computing, or of thinking that the latter was intrinsic to the former. In fact, that's not the case: most computational results are produced on workstations by sequential code, and focusing on the needs of people working at that scale would pay much greater dividends than trying to get them to work on the bleeding edge of massively-parallel GPU-based functional 3D virtual reality splaff.

I was particularly disappointed by how little attention was paid to what I believe are the two biggest problems in computational science: making sure that programs actually do what they're supposed to, and getting them built without heroic effort. I've preached on these topics before, but it wasn't until this workshop that Brent Gorda, Andy Lumsdaine, and I summed it all up. The position paper we wrote for the workshop is included below; I'd be interested to hear what you think.


We have been teaching software development to computational scientists since 1997. Starting with a training course at Los Alamos National Laboratory, our work has grown into an open source course called Software Carpentry that has been used by companies, universities, and the US national labs, and has had over 100,000 visitors since going live in August 2006. Based on our experiences, we believe the following:

  1. People cannot think computationally until they are proficient at programming. (Non-trivial coding is also a prerequisite for thinking sensibly about software architecture, performance, and other issues, but that's a matter for another position paper...)
  2. Like other "knowing how" skills, such proficiency takes so much time to acquire that it cannot be squeezed into existing curricula without either displacing other significant content, or pushing out completion dates. (Saying "we will put a little into every course" is a fudge: five minutes per lecture adds up to three or four courses in a four-year degree, as the back-of-the-envelope calculation after this list shows, and that time has to come from somewhere.)
  3. Most universities are not willing to do either of these things. The goal stated in the workshop announcement, "Create better scientists not by increasing the number of required credits," is therefore unachievable.
  4. In contrast with experimentalists, most computational scientists care little about the reproducibility or reliability of their results. The main reason for this is that journal editors and reviewers rarely require evidence that programs have been validated, or that good practices have been followed—in fact, most would not know what to look for.
  5. Most scientists will not change their working habits unless the changes are presented in measured steps, each of which yields an immediate increase in productivity. In particular, the senior scientists who control research laboratories and university departments have seen so many bandwagons roll by over the years that they are unlikely to get excited about new ones unless the up-front costs are low, and the rewards are quickly visible.
  6. The most effective way to introduce new tools and techniques is over an extended period, in a staged fashion, in parallel with actual use—intensive short courses are much less effective (or compelling) than mentorship. Peer pressure helps: if training is offered repeatedly, and scientists see that it is making their colleagues more productive, they will be more likely to take it as well. Team learning also helps: non-trivial scientific programming is a team sport, and mentorship is one of the best training methods we know. In particular, a team whose members take on distinct roles while rallying around the science is the most successful strategy of all, but that model does not translate into a classroom setting.
  7. A mix of traditional and agile methodologies is more appropriate for the majority of scientific developers than either approach on its own: scientists are question-led (which encourages incremental development), but the need for high performance often mandates careful up-front design. Scientists cannot therefore simply adopt commercial software engineering practices, but will have to tailor them to their needs.
  8. Raising general proficiency levels is the only way to raise standards for computational work, and raising standards is necessary if we are to avoid some sort of "computational thalidomide" in the near future.
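
To make the "three or four courses" figure in point 2 concrete, here is a rough back-of-the-envelope check. The specific numbers (hour-long lectures, roughly 40 courses in a four-year degree) are assumptions for illustration, not figures from the position paper:

```latex
% Back-of-the-envelope check (assumed: one-hour lectures, ~40 courses per degree).
% Five minutes is 1/12 of an hour-long lecture, so adding it to every lecture
% consumes about 1/12 of the total lecture time in the degree:
\[
  \frac{5\ \text{min}}{60\ \text{min}} \times 40\ \text{courses} \approx 3.3\ \text{courses}
\]
```

With slightly longer degrees or more courses, the same ratio lands closer to four courses, which is where the "three or four" range comes from.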

Later: see also this rant by Victor Ng, and this piece by Kyle Wilson (no relation).

Dialogue & Discussion

Comments must follow our Code of Conduct.

Edit this page on Github