Usability Testing and Instructional Design
This post originally appeared on the Software Carpentry website.
This is a story in several parts.
1. From Guido van Rossum's intermittently-updated blog about the history and design of Python:
Python's first and foremost influence was ABC, a language designed in the early 1980s [that] was meant to be a teaching language, a replacement for BASIC, and a language and environment for personal computing. It was designed by first doing a task analysis of...programming...and then doing several iterations that included serious user testing... ABC's authors did invent the use of the colon that separates the lead-in clause from the indented block. After early user testing without the colon, it was discovered that the meaning of the indentation was unclear to beginners being taught the first steps of programming. The addition of the colon clarified it significantly: the colon somehow draws attention to what follows and ties the phrases before and after it together in just the right way.
This story and others like it were something of a revelation to me when I first encountered Python in the late 1990s. Usability testing of programming languages? Huh—why isn't everyone doing that?
2. So when an enriched syntax for loops was proposed in 2000, I conducted a little experiment:
Given the statement:

```
for x in [10, 20, 30]; y in [1, 2]: print x+y
```

would you expect to see:

- (A) 'x' and 'y' move forward at the same rate: `11 22`
- (B) 'y' go through the second list once for each value of 'x': `11 12 21 22 31 32`
- (C) an error message because the two lists are not the same length?
All 11 of the people I tested voted for 'B', which is not what the designer of this syntax had intended it to mean. I did a slightly larger experiment a few days later to compare a few other syntax proposals, and while it was both fun and informative, the practice never caught on.
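The enriched syntax itself was never adopted, but the two readings my subjects had to choose between map directly onto constructs modern Python does have: `zip` for lockstep iteration and nested loops for the cross-product. A minimal sketch of both:

```python
# Interpretation A: lockstep iteration. zip() stops at the end of the
# shorter list, so only two sums are produced.
lockstep = [x + y for x, y in zip([10, 20, 30], [1, 2])]
print(lockstep)  # [11, 22]

# Interpretation B: 'y' cycles through its list once per value of 'x' --
# the reading all eleven test subjects expected.
product = [x + y for x in [10, 20, 30] for y in [1, 2]]
print(product)  # [11, 12, 21, 22, 31, 32]
```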
3. A few years later, I discovered the Media Computation work of Barbara Ericson and Mark Guzdial at Georgia Tech. They weren't interested in syntactic details; they were tackling the much larger issue of retention:
- What and how should we teach to get more people into computing and keep them there (particularly from underrepresented groups like women and non-whites/non-Asians)?
- What and how should we teach so that the people who do stick around remember more of what we've taught?
They found that both kinds of retention could be improved by using a media-first approach to computing, i.e., by using examples like resizing images, red-eye removal, and sound editing right from the start. There are many reasons—it's more immediately useful than finding primes, more fun than sorting strings, and the visual output is often easier to debug—but what really mattered was that they had evidence to back up their teaching strategy. They turned their findings into a series of textbooks, and my colleagues at the University of Toronto and I borrowed many of their ideas for our introductory Python book.
4. We also borrowed some code, or at least its API. Ericson and Guzdial realized early on that novices needed a different kind of toolkit than experienced programmers. For novices, a picture was the clay out of which they would shape their understanding of what programming was. They didn't need high-performance edge detection operations; they needed a simple, single-step way to loop over the picture's pixels. The Python Imaging Library didn't cater to this kind of thing because it's actually not the "right" way to do image processing, so the Media Computation team built a simpler (but lower-performance) library in Jython to keep simple things simple. We used a C-Python version of this called PyGraphics, which included some simple audio manipulation functions as well.
Which brings us to the point of this story. We currently teach Python in a very traditional order: arithmetic, assignment, lists, loops, conditionals, and functions are introduced in more or less that order. We also teach it using very traditional examples: the values we push around are numbers and strings, and the I/O we do is mostly readline-in-a-loop. Given what the folks at Georgia Tech have discovered, and the speed of modern machines, I'd really like to switch that up and try an images-first approach (particularly given how easy the IPython Notebook makes it to display images alongside code). However, I don't want to have to maintain an image manipulation library, no matter how small, or require learners to download and install anything more than they already need to.
But the point is, it's premature to worry about either issue until we know whether this approach actually works any better than what we're doing right now. If we tile images instead of cutting columns out of CSV files, do scientists learn more, faster, about how loops and conditionals and assignment and call stacks work? Do they remember more two weeks or two months later? Do they do more with what they've learned, and if so, does it actually help them do more science? I believe some of these questions can be answered, though the answers may not be easy to build, and as I've said before, if we're going to teach scientists, we damn well ought to act like scientists ourselves.