Improving Instruction
This post originally appeared on the Software Carpentry website.
It's been quite a year for Software Carpentry instructor training: we had a great session at UC Davis (see also this excellent post from Rayna Harris) and another at the Research Bazaar kickoff in Melbourne. We've also started the twelfth round of online instructor training with 75 participants, making it our largest yet.
All of this has led to a flurry of activity in our GitHub repositories. The comments and pull requests are very welcome, but we need to keep three things in mind:
1. We already have more material than we can cover in two days.
2. We are not our learners.
3. No lesson survives its first presentation intact.
The first point is the most difficult. Everyone can think of things we ought to teach; the hard part is agreeing on what to take out to make room for those additions. For example, I really want to include dictionaries in our basic Python tutorial, but I'm damned if I can figure out what it's greater than, i.e., what to drop to free up the half hour it takes for people to learn what they are and how they work.
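For reference, here is roughly the ground that half hour has to cover, as a minimal sketch (the names and numbers below are invented for illustration):

```python
# A dictionary maps keys to values, the way a phone book maps names to numbers.
ages = {'newton': 84, 'darwin': 73}

# Values are looked up by key, not by position as in a list.
print(ages['darwin'])            # 73

# Assigning to a key adds a new entry (or overwrites an existing one).
ages['turing'] = 41

# Asking for a key that isn't there raises KeyError; 'in' checks safely.
if 'hopper' not in ages:
    print('no entry for hopper')
```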
The common response is, "Half an hour? It only takes a minute!" but that brings us to the second point: we are not our learners. The people who are most likely to offer contributions to lessons are the ones who are most confident in their computing skills. That does not necessarily imply they are the most competent, but it does (usually) mean that they already understand these ideas, and therefore underestimate how hard they are for newcomers.
In particular, new instructors often underestimate how often "little" things will go wrong in incomprehensible ways, and how hard they are to fix without a working mental model of how all this stuff works. What we're trying to teach is intrinsically complex. The accidental complexity imposed by lousy tools makes it harder, and it's all too easy for people who've climbed a hill to forget how steep it was.
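To make those "little" things concrete, here is one hypothetical but typical stumble (the filename is invented): a learner opens a file and tries to index it like a list, and the resulting message means nothing without a mental model of what `open` actually returns:

```python
# Hypothetical beginner mistake: treating an open file object like a list.
# ('results.csv' is an invented filename.)
data = open('results.csv')
first_line = data[0]
# TypeError: '_io.TextIOWrapper' object is not subscriptable
```

The fix is a one-liner (iterate over the file, or call `data.readlines()` first), but nothing in that message points a newcomer toward it.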
The reverse is true of lessons: it's easy for someone who hasn't climbed a hill to underestimate how hard a slog it will be. In particular, a lesson that's been used and patched a double dozen times can appear no different from one I made up last night, especially since we're still doing a poor job of capturing our in-class experiences in our instructors' guides. One consequence of this is new instructors using their own untested material in workshops, which has gone poorly more often than it has gone well.
My previous musings on teaching didn't end with anything actionable, so this one should. First, I'm going to spend a lot more time discussing our lessons during instructor training; in particular, I'm going to ask some experienced instructors to help me do walkthroughs of the lessons with the trainees. Second, I'd like to propose that:
1. Every time someone suggests adding a new topic to one of our lessons, they have to tell us what they would remove to make room for it. That doesn't mean something will be removed, but I think it will help clarify discussion about teaching priorities. (This rule wouldn't apply to adding new exercises, instructors' notes, or anything else that doesn't increase the length of the lesson.)

2. Instructors should be required to teach our standard lesson on a topic at least a couple of times before replacing its content with material of their own. Minor adjustments to fit particular learners and personal teaching styles are always OK: the whole point of teaching live instead of using recorded video is to allow this. So is adding extra material for particular audiences, provided the core material hasn't been rushed through, but wholesale replacement should wait until the instructor understands why the stuff they're replacing looks the way it does.
I don't want to scare people off or stifle innovation: after all, if we never try new things, our lessons will never get better. However, as more instructors and other contributors join us, we need better guidelines. Voting on changes to lessons at each lab meeting proved unworkable; having topic maintainers is much better; and the post-workshop instructor debriefings that Sheldon McKay and Rayna Harris are running will hopefully help us capture more of our experiences in shareable ways. I'd be grateful for feedback on whether these ideas are (a) good and (b) workable: please add your thoughts as comments on this blog post, and I'll summarize the discussion next week.