Some Responses to Some Comments

This post originally appeared on the Software Carpentry website.

Several people have written useful comments on my recent "where are we going?" posts. It's exactly the kind of feedback I was after, so here are my answers.

Goal #1: helping thousands of people each year.

You propose two very broad ideas for what this would mean: a) community "co-learning" initiatives like Hacker Within, presumably using Software Carpentry content, or somehow organised by SWC? and b) more people contributing to SWC content, as well as supporting others online. In essence, the vision is for both offline and online co-learning communities to exist. It doesn't sound like you envision SWC as an authoritative source for instruction or community, but it is to be a hub of some sort, right?

Hm... I don't like the word "authoritative" (do you think of python.org that way?). Setting that aside, I'd be satisfied if we were either helping people run workshops or serving as a hub where people share learning materials. Of course, I'd be happier if we were both, and I think the two go hand in hand:

  1. Research has shown that blended learning is more effective than either offline or online learning on its own, so we should try to encourage that model.
  2. It's easier to run your first few workshops if you don't have to create all the material from scratch.
  3. If people are trying to meet local needs, they're going to be creating materials to meet those needs. Other people are likely to have the same or similar needs, so we ought to make it easy for them to find and recycle what others have done.

The merging problem.

You've identified the "merging problem" as central to ramping up SWC's reach...why is [it] so central?

It isn't as "central" to Software Carpentry as coming up with a way to tell if we're on the right track or increasing the project's bus factor. I emphasized it in that post because (a) it isn't as widely recognized as an impediment to developing and sharing learning resources as it should be, and (b) it's something that lends itself to technical solutions (which I, as an engineer, am always more comfortable with than purely social solutions).

...even if the merging problem turns out to be a relevant hindrance... [it is] so deep and ill-defined that...I have no reason to believe the payoff from tackling it will arrive in any practical amount of time.

Agreed. That said, there are things we can do to make it easier for people to contribute customizations and extensions. The most important is to use a mergeable format, like hand-written HTML, for our slides instead of PowerPoint. I moved away from HTML because it forces us to segregate text and graphics, whereas PowerPoint makes it easy to mingle them; I suspect that if we do switch, I'll decide a year from now that we should switch back (again).
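
As a minimal sketch of why format matters for merging, assuming our lessons live in a Git repository (the branch names below are hypothetical):

    # Two people customize the same lesson on separate branches.
    # A line-oriented text format such as hand-written HTML usually merges:
    git merge alices-html-slides   # Git combines non-overlapping line edits
    # A binary format such as .pptx cannot be merged line by line:
    git merge bobs-pptx-slides     # the whole file conflicts; keep one side

Since version control can do nothing smarter with a binary file than ask which whole version to keep, every customized PowerPoint deck is effectively a fork.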

Institutional support.

One hindrance you identify is that without institutional support, taking software classes will be hard for students/profs/etc. to justify doing. Does this suggest that another vision in 5 years is for there to be a certain level of institutional support for Software Carpentry?

Realistically, I don't expect more institutional support or recognition five years from now than we have today. I think that when our existing system of higher education implodes, it will do so faster than anyone thought possible (cf. the final days of the Soviet Union), but it would be foolish to count on that happening within five years. Conversely, I think today's model of scholarly publishing will last a lot longer than many optimists expect. That means journals and funding bodies will scrutinize scientific software as rarely five years from now as they do today, which in turn means that most scientists still won't have a compelling reason to up their game. I do hope, though, that in five years many (most?) of them will believe the writing is on the wall.

Goal #2: We know what we're doing is helping.

The 5 year vision is then...what? We have some justifiable and principled way of gauging the usefulness of these teachings, and that we actually are measuring and reporting on them?

Yes to "gauging" and "reporting", but I don't know about "measuring". We could show that it takes people (much) less time to write a script to analyze their data after we've shown them a few things. We might even be able to show that their scripts are (much) less likely to be buggy. But in many cases, our real aim is to change what they're trying to do in the first place. We don't want them to copy and paste faster; we want them to write a script that does the copying and pasting, then a Makefile that runs the right scripts in the right order whenever there's new data to process, then a cron job to poll for new data files, and so on (see the sketch below). Marian Petre, Jorge Aranda, and others finally made me understand that rigorous qualitative methods are a better way to tackle these kinds of questions; Mozilla's Geoffrey MacDougall has a good post about the fetishistic (ab)use of metrics in the public and non-profit sectors, and thinks that approaches like Most Significant Change are more useful.
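
As a minimal sketch of that kind of pipeline (all of the file names below are hypothetical placeholders):

    # Makefile: run the right scripts in the right order when the data changes.
    # 'analyze.py', 'raw-data.csv', and 'results.csv' are made-up names;
    # note that the recipe line must begin with a tab.
    results.csv : raw-data.csv analyze.py
    	python analyze.py raw-data.csv > results.csv

A single crontab entry such as "0 * * * * cd $HOME/project && make" then polls hourly and rebuilds only what is out of date.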

That lengthy caveat aside, yes, I think it's essential to develop some generally accepted way to tell whether we're actually doing good, to apply it regularly, and to share the results. Without that kind of feedback, we'll have to (continue to) rely on individuals' gut instincts to steer the project; just the thought makes me weary.

Misconceptions.

While "the space of possible misconceptions grows very, very quickly," does the space of common misconceptions grow that quickly? ... I would suggest that it likely pays to start collecting them in a formal way to see.

I strongly agree; many of the improvements in math and physics education since the 1980s are built on the realization that clearing up students' misconceptions is at least as important as giving them new facts. Collecting and classifying misconceptions in order to sharpen teaching is what I'd be doing in academia if anyone had been willing to fund me, but as I've said elsewhere, NSERC, Google, Microsoft, and almost everyone else turned down every application I sent them. (The one exception was The MathWorks [1], whose support allowed us to survey how almost 2000 scientists use computers in their work.)

Discussion and community.

...there is currently no way for a community to build up around this site that can communicate with each other. Maybe that's not something that you want, maybe a forum would take too much time to manage, but if you want people to get involved, I think you need to give them a space they can post their ideas/questions/comments and have other people respond to them.

Agreed. We've had forums for courses, but have otherwise relied on comments and counter-comments for discussion of our topics. It would be easy to set up something more sophisticated, but getting people to come and use it would be much harder. The computational science Stack Exchange site that's currently in beta might turn into this for us; I'd be very interested in suggestions for ways to mash up our stuff and theirs.

My thanks to Jon Pipitone, Elizabeth Patitsas, and Bill Goffe for their input; please keep it coming.

[1] Thanks, Steve.
