Measuring Science
This post originally appeared on the Software Carpentry website.
Julia Lane, the director of the Science of Science & Innovation Policy program at the National Science Foundation, wrote an article for Nature a couple of weeks ago titled "Let's make science metrics more scientific". As the summary at the start says:
- Existing metrics have known flaws
- A reliable, open, joined-up data infrastructure is needed
- Data should be collected on the full range of scientists' work
- Social scientists and economists should be involved
The same points could be made about evaluating software developers (or any other kind of knowledge worker). The devil, as always, is in the details, and unfortunately I have to start doing evaluations before those details are worked out. Several of the supporters of this course need me to demonstrate its impact on the productivity of the scientists who take it (so that they can in turn justify their contribution to their funders). It isn't enough to ask students who have completed the course whether they think they know more about programming than they used to: even setting aside the obvious problems of survivor bias and self-assessment, I would still have to demonstrate that making people better programmers also makes them better scientists. I believe it does, but belief is not evidence, and says nothing about how large the effect is.
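To see why survivor bias alone is disqualifying, here is a toy simulation (my illustration, not anything from Lane's article): even when a course has no effect at all, a survey of the people who finished it reports a positive one, because the strugglers have already dropped out.

```python
from math import exp
from random import gauss, random, seed

seed(1)

# Toy model: each student's true "benefit" is drawn from a distribution
# centred on zero, i.e. the course does nothing on average.
benefits = [gauss(0.0, 1.0) for _ in range(100_000)]

# Students who struggle (low benefit) are more likely to drop out, so
# the exit survey only ever sees a biased sample of completers.
completers = [b for b in benefits if random() < 1 / (1 + exp(-b))]

print(f"true mean benefit:     {sum(benefits) / len(benefits):+.3f}")
print(f"mean among completers: {sum(completers) / len(completers):+.3f}")
```

Self-assessment only compounds the problem, since the same completers are the ones most inclined to rate themselves generously.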
The best plan I've been able to come up with so far is to look at how scientists spend their time before and after taking the course, but that would require resources I don't have. If you're interested in studying scientists or software developers empirically, and would like some raw material, I'd like to hear from you.
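For concreteness, here is a minimal sketch of what the analysis might look like, assuming the study produced per-person logs of hours per week lost to software problems; every number below is made up for illustration.

```python
from statistics import mean, stdev

# Hypothetical time-use logs: hours per week each scientist spent
# fighting with software, before and after taking the course.
before = [12.0, 9.5, 15.0, 8.0, 11.0, 14.5, 10.0, 13.0]
after  = [ 9.0, 8.0, 11.5, 8.5,  9.0, 10.0,  9.5, 10.5]

# Paired comparison: look at each person's change, not the group means.
diffs = [b - a for b, a in zip(before, after)]
n = len(diffs)
d_bar = mean(diffs)   # average hours per week recovered
s_d = stdev(diffs)    # spread of the individual changes

# Paired t statistic, t = d_bar / (s_d / sqrt(n)), with n - 1 degrees
# of freedom; compare against a t table to judge significance.
t = d_bar / (s_d / n ** 0.5)

print(f"mean reduction: {d_bar:.2f} hours/week (paired t = {t:.2f}, df = {n - 1})")
```

The statistics are the cheap part; recruiting scientists and collecting honest, comparable logs over months is where the resources I don't have would go.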