The Real Hard Work
This post originally appeared on the Software Carpentry website.
I spent a couple of thought-provoking hours at Codecademy's office in New York on Thursday, during which my host said, "It's so different from Silicon Valley, where a lack of experience is considered an asset." The next day, while teaching at Columbia, I used one of my favorite sayings, "A week of hard work can sometimes save you an hour of thought," and someone in the audience piped up, "Or an hour of reading." Together, the two comments crystallized something I've been mulling over for several months: what is genuinely hard for creative people to do, and what that implies.
Most people don't like long hours or difficult problems. Creative people, on the other hand, thrive on both: they get an endorphin high from losing themselves in something gnarly for days at a time. What they find hard is boredom: give them something easy and repetitive, and they'll complicate it to make it more enjoyable, find a way to get out of it, or invent a reason why it doesn't need to be done in the first place. And, being creative, they're very good at all three.
In particular, given a choice between working around the clock for a month to build a beginner-friendly learn-to-program website, or sitting down for three days to wade through a dozen papers and reports describing what people have tried in the past and how well it worked, most creative people choose to hack. Reading papers is dull, dull, dull, especially when the first nine don't actually say anything relevant (though you couldn't know that without reading them), and the gem in the tenth is buried in a sawdust pile of dry academic prose. So creative people invent excuses:
- "We're not teaching computer scientists." A lot of research has looked at graphic designers, disadvantaged grade-school kids, and people from many other walks of life.
- "We're not teaching computer science, we're teaching programming." So are a lot of researchers. (Ironically, that's one of the reasons their work tends not to be valued by "real" computer scientists.)
- "The web changes everything!" No it doesn't—it doesn't change the way brains learn. And anyway, how do you know what it's changed if you don't know what was there before?
- "But today's schools/teachers/whatever are broken!" That's a gross exaggeration, but even if it weren't, shouldn't a doctor learn something about an ailment before trying to cure it?
- "We're too busy." This argument is partly valid: you can't evaluate other people's experience until you have some of your own. But looking back at my own projects, I've always kept hacking long past the point when I should have paused to find out what other people had done.
I told the students at Columbia that one of the things that distinguishes serious programmers from amateurs and dilettantes is that serious programmers write tests. The politicians who get policies implemented are the ones who master their briefs, the lawyers who win cases are the ones who read the whole contract, and so on. That's the real hard work for people like us, and as Bernd Heinrich said of marathon runners, "The will to win is nothing without the will to prepare."
So here's my simplified Audrey Test for tech types interested in education:
- Have you read the last two years of Mark Guzdial's blog? He does great work, he writes well, and he understands that doing is as important as knowing.
- Have you read How Learning Works, which condenses decades of research and experience into 300 easy-to-read pages?
If the answer to both is "yes", I'll believe that you're willing to do the real hard work required to help other people learn.