Plans for 2015: Mentorship and Assessment

This post originally appeared on the Software Carpentry website.

The previous posts in this set looked at instructor training and workshop organization. In this one, I'd like to look at mentorship and assessment, which are two of the biggest challenges we need to address in the coming year, and are good examples of the kinds of tasks that Steering Committee members will be asked to take on.

Azalee Bostroem did a great job summarizing why we need to mentor our new instructors (and keep those who've been with us a while up to date with changes to our lessons and procedures). The question is, how? Or more precisely, where will the hours come from? I said in September that I would organize a weekly meeting for instructors from recent and upcoming workshops, but only one of those meetings ever took place, and saying "we just need to try harder" won't make the necessary hours appear.

The same is true of assessment. Jory Schossau has done valuable service analyzing survey data and interviewing bootcamp participants, and Daniel Chen and others are working hard to revise the instructors' post-workshop questionnaire, but despite several attempts, we haven't found anyone willing to fund a systematic look at what we're actually doing and what impact it's having. Once again, we can either say "we need to try harder" or come up with an alternative plan.

That plan ties back to our new bylaws. Broadly speaking, the people who run a non-profit can be either active or passive, i.e., they can either do the work needed to actually run the operation, or check in once in a while to make sure it's being run properly. Most open source foundations' boards are activist, and ours will be too. People elected to the Steering Committee will be expected to volunteer 2-3 hours a week to take charge of specific things, and two of those things will be overseeing mentorship and running assessment. In particular, one Steering Committee member will be asked to organize a half-hour call each week in which recent and upcoming instructors can talk about what is (and isn't) working in our lessons, while another will keep the post-workshop assessment questionnaire up to date, analyze the data we collect from it, and poke instructors to make sure they actually fill it in.

Doing this isn't a small commitment, but we're no longer a small organization. A dozen people are already serving as maintainers for specific topics, and I hope that formalizing our governance will make it easier for others to play a larger role as well.
