Who Wants To Build a Faded Example Tool for the IPython Notebook?

This post originally appeared on the Software Carpentry website.

While I'm asking people to write code for me, here's a small addition to the IPython Notebook that someone should be able to knock off in an hour. The starting points are cognitive load theory and the idea of faded examples: long story short, one good way to teach is to show learners one example in detail, then ask them to fill in the ever-larger blanks in a series of similar partly-worked-out examples. Here's one that Ethan White did for our online study group:

def get_word_lengths(words):
    word_lengths = []
    for item in words:
        word_lengths.append(len(item))
    return word_lengths

This function creates and returns a list containing the lengths of a series of character strings. The list of character strings is passed in as the argument words, and the local variable word_lengths is initialized to an empty list. The loop iterates over each character string in the list. Each loop iteration determines the length of a single character string using the function len() and then appends that length to the list word_lengths. The final word_lengths list is returned by the function. An example of using it is:

>>> print get_word_lengths(['hello', 'world'])
[5, 5]

Given that, we would ask learners to fill in the blanks to make this version work:

def get_word_lengths(words):
    word_lengths = ____
    for item in range(len(____)):
        word_lengths.append(len(____))
    return word_lengths
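For reference, one way to fill in those blanks (using `[]`, `words`, and `words[item]`) recovers a working index-based version:

```python
def get_word_lengths(words):
    word_lengths = []
    # iterate by index rather than by element, matching the faded template
    for item in range(len(words)):
        word_lengths.append(len(words[item]))
    return word_lengths

print(get_word_lengths(['hello', 'world']))  # prints [5, 5]
```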

and work up to:

def get_word_lengths(words):
    return [____ for ____ in ____]
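That final fade is pointing learners at the list-comprehension form; filled in, it would read:

```python
def get_word_lengths(words):
    # one-line list comprehension equivalent to the loop version
    return [len(item) for item in words]

print(get_word_lengths(['hello', 'world']))  # prints [5, 5]
```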

I'd like to be able to put exercises like this into the IPython Notebook. In particular, I'd like a two-by-two display:

code with blanks (read-only)  |  user's code (editable)     [reset]
expected output (read-only)   |  actual output (colorized)  [run]

The column on the left is read-only. Whenever the user clicks the "reset" button, the code with blanks is copied over to the right column. Whenever the user clicks "run", the code is re-run, and the actual output is compared to the expected output (and if it's textual, differences are colorized to draw the eye).
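The "compare and colorize" step is ordinary text diffing. The front end would do this in the browser, but to make the idea concrete, here is a minimal sketch in Python using the standard library's difflib (the function name `diff_outputs` is my own, not part of any existing tool):

```python
import difflib

def diff_outputs(expected, actual):
    """Return unified-diff lines between expected and actual output.
    An empty result means the learner's output matches exactly."""
    return list(difflib.unified_diff(
        expected.splitlines(), actual.splitlines(),
        fromfile='expected', tofile='actual', lineterm=''))

# matching output produces no diff lines
print(diff_outputs('[5, 5]', '[5, 5]'))  # prints []

# a mismatch yields +/- lines a front end could colorize
for line in diff_outputs('[5, 5]', '[2, 2]'):
    print(line)
```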

I think this is "just" a bit of JavaScript. Anyone want to take a crack at it and let me know?
