Organizing Instruction and Study to Improve Student Learning
This post originally appeared on the Software Carpentry website.
I had breakfast a couple of days ago with Jon Pipitone, a former student who has helped out with Software Carpentry off and on in the past. When we discussed my post summarizing what I've learned so far about online education, he had several questions and suggestions (thanks, Jon). I'm still digesting everything he said, but there is one point I'd like to act on now.
Back in 2007, the US Department of Education's Institute of Education Sciences published a 60-page report, *Organizing Instruction and Study to Improve Student Learning*. It's a great resource: seven specific recommendations are summarized in clear language, along with the evidence that backs them up. The report even classifies that evidence into three levels:
- Strong: supported by studies with both high internal validity (i.e., ones whose designs can support causal conclusions) and high external validity (i.e., studies that include a wide enough range of participants and settings to support generalization). This includes the gold standard of experimental research: randomized double-blind trials.
- Moderate: supported by studies with high internal validity but moderate external validity, or vice versa. Many studies necessarily fall into this category because of the practical or ethical difficulty of running randomized trials on human subjects; quasi-experiments and experiments with smaller sample sizes can still qualify.
- Low: based on expert opinion derived from strong findings or theories in related areas, and/or buttressed by direct evidence that does not meet the standards above. For example, applying a well-proven theory from perceptual psychology to classroom settings, but only validating it with experiments involving small numbers of students, would be considered a low level of evidence.
The recommendations themselves are:
- Space learning over time. Arrange to review key elements of course content after a delay of several weeks to several months after initial presentation. (moderate)
- Interleave worked example solutions with problem-solving exercises. Have students alternate between reading already worked solutions and trying to solve problems on their own. (moderate)
- Combine graphics with verbal descriptions. Combine graphical presentations (e.g., graphs, figures) that illustrate key processes and procedures with verbal descriptions. (moderate)
- Connect and integrate abstract and concrete representations of concepts. Connect and integrate abstract representations of a concept with concrete representations of the same concept. (moderate)
- Use quizzing to promote learning.
  - Use pre-questions to introduce a new topic. (low)
  - Use quizzes to re-expose students to key content. (strong)
- Help students allocate study time efficiently.
  - Teach students how to use delayed judgments of learning to identify content that needs further study. (low)
  - Use tests and quizzes to identify content that needs to be learned. (low)
- Ask deep explanatory questions. Use instructional prompts that encourage students to pose and answer "deep-level" questions on course material. These questions enable students to respond with explanations and support deep understanding of the taught material. (strong)
This summary doesn't do the report justice: it devotes several pages to each recommendation, and includes advice both on how to implement them and on how to overcome common roadblocks.
So how well does Software Carpentry implement these ideas? More specifically:
- What things are we doing already?
- Why aren't we doing the things we're not?
- If the answer is, "Because it's hard," can we make it easier?
1. Space learning over time. No. On one hand, students are dipping into the material when and as they want, and are free to return to any part of it at any time. On the other hand, they don't have to return to any of it, and when they do, they're returning to exactly the same material, not a different take on the same content.
2. Interleave worked example solutions with problem-solving exercises. We don't do this now. We could provide the worked examples, but the exercises would be much harder to do:
- We have no practical way to assess students' performance. (In fact, we can't even tell whether they've got their fingers on the keyboard.) Yes, we can check that they produce the right output for simple programming problems, but (a) that doesn't tell us whether they got that output the right way (or a right way, since there's usually more than one), and (b) that doesn't work for second-order skills like designing a good set of tests for a particular function. (See the sketch after this list for how little an output-only check proves.)
- In practice, I believe the majority of students wouldn't do the exercises unless the web site forced them to. I know I wouldn't, even when presented with the research behind this recommendation. The only way to make people do them would be to lock them out of some site content until they had successfully completed exercises on prerequisite material, which (a) conflicts with our open license, and (b) makes the site less useful to people who are really just looking for a solution to a specific problem that's right in front of them.
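To make the first point concrete, here is a minimal sketch of an output-only check. The exercise, test input, and function names are all made up, and this is not our actual grading code: the point is simply that both "solutions" below produce the right answer on the test input, so an automated checker can't tell the genuine one from the one that hard-codes the answer.

```python
# A made-up exercise: count the vowels in a string.

def count_vowels(text):
    """A genuine solution: count the characters that are vowels."""
    return sum(1 for ch in text.lower() if ch in "aeiou")

def count_vowels_hardcoded(text):
    """Produces the right output for the one test input by hard-coding it."""
    return 3

def passes_output_check(solution):
    """Output-only grading: run the function on one fixed input and compare."""
    return solution("programming") == 3

print(passes_output_check(count_vowels))            # True
print(passes_output_check(count_vowels_hardcoded))  # True -- indistinguishable
```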
3. Combine graphics with verbal descriptions. The word "verbal" is important here. Research by Richard Mayer and others has shown that presenting images, text, and audio simultaneously actually reduces learning, because the brain's two processing centers (linguistic and visual) each have to handle two streams of data in order to recognize words in the text and synchronize them with the audio and images. That said, we have to provide textual captions and transcripts for people with visual disabilities, people whose English comprehension isn't strong, and people who prefer to learn from the printed page. What we really need are tools that do a better job of disentangling different content streams (video, audio, narration, and diagrams); they are starting to appear, but the 21st Century replacement for PowerPoint that I really want doesn't exist yet, and we don't have the resources to create it.
4. Connect and integrate abstract and concrete representations of concepts. We certainly try to. I don't know how to gauge whether we succeed.
5a. Use pre-questions to introduce a new topic. The report recommends giving students a few minutes to tackle a problem on their own with what they already know to set the scene for introducing a new concept. We don't do this per se, though some lectures (like the ones on regular expressions and Make) do use a problem-led approach. Again, I doubt students working on their own would actually take the time to do the exercise unless we forced them to, but perhaps we could show them someone going up a couple of blind alleys (e.g., trying to parse text with substring calls instead of regular expressions) before introducing the "right" solution to the problem?
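For instance, here's a hypothetical sketch of what that blind alley might look like: extracting a date from a line of text by slicing out fixed positions, then with a regular expression. The input format and variable names are invented for illustration.

```python
import re

line = "2010-07-14 experiment-3 succeeded"

# The blind alley: slicing works for this exact layout, but breaks as soon
# as the date moves or the spacing changes.
date_by_slicing = line[0:10]

# The "right" solution: describe the pattern and let the regex engine find
# it wherever it occurs.
match = re.search(r"\d{4}-\d{2}-\d{2}", line)
date_by_regex = match.group(0) if match else None

print(date_by_slicing)  # 2010-07-14
print(date_by_regex)    # 2010-07-14
```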
5b. Use quizzes to re-expose students to key content. I used to hate cumulative midterms and final exams because they forced me to dredge up things I hadn't used in weeks. Weeks! Now, as a teacher, I think they're great, and the evidence showing they reinforce learning is strong. But while we could make some sort of cumulative self-test available, most students wouldn't come back to our site to do it, and so wouldn't benefit. Even if they did come back, we have no way to give them a meaningful assessment of their performance: the richer the question, the more ways there are to answer it, and the less useful automated marking would be.
6a. Teach students how to use delayed judgments of learning to identify content that needs further study. We can't do it. All we can do is tell them that they'll learn more if they come back and review things later to see how much they've actually understood, but (a) they've heard that before, and (b) if they're coming to our material for help with a specific problem that's in front of them right now, they're unlikely to make time in a month to review what they learned.
6b. Use tests and quizzes to identify content that needs to be learned. This recommendation is meant to be read from the student's point of view: they should use tests to figure out what they don't understand so that they can focus their review time more effectively. Again, we can create self-test exercises, but we have no practical way to give students useful feedback on any but the simplest.
7. Ask deep explanatory questions. The recommendation continues, "These questions enable students to respond with explanations..." Once again, we're limited by the fact that we can't assess anything except drill-level skills.
Our overall score is pretty poor: of the nine specific practices, we only do three or four. The others stumble over two issues:
- Most students won't voluntarily revisit material (they're too busy solving their next problem).
- There's no practical way to assess how well they're doing on anything except simple drill exercises.
The second worries me more. As I wrote several weeks ago:
To paraphrase Tolstoy, successful learners are all alike; every unsuccessful learner is unsuccessful in their own way. There's only one correct mental model of how regular expressions work, but there are dozens or hundreds of ways to misunderstand them, and each one requires a different corrective explanation. What's worse, as we shift from knowing that to knowing how—from memorizing the multiplication table to solving rope-and-pulley dynamics problems or using the Unix shell—the space of possible misconceptions grows very, very quickly, and with it, the difficulty of diagnosing and correcting misunderstandings.
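To make the regular expression example concrete, here are two of those misconceptions, sketched with made-up patterns and inputs; each one needs a different corrective explanation.

```python
import re

# Misconception 1: reading '*' as a shell-style wildcard. As a regex,
# 'data*.csv' means "dat, zero or more a's, any one character, then csv",
# so it matches strings a glob user would never expect it to.
print(bool(re.search(r"data*.csv", "datXcsv")))   # True

# Misconception 2: expecting '<.*>' to match one tag at a time. '*' is
# greedy, so the match swallows everything up to the last '>'.
print(re.findall(r"<.*>", "<a><b>"))              # ['<a><b>'], not ['<a>', '<b>']
```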
Some research has found that crowdsourcing assessment among peers can be just as effective as having someone more knowledgeable do the grading. We don't know whether that works for programming, though, and even if it does, will enough people voluntarily give feedback on other people's work, promptly enough, for it to be useful? Locking down content until they have given feedback would just drive people away rather than helping them learn. Solving the "meaningful feedback" problem is, in my opinion, one of the biggest challenges open online learning faces; I'd welcome your thoughts on where to start.