Quick Quiz to Measure What Scientists Know

This post originally appeared on the Software Carpentry website.

Suppose you have a room full of scientists—hundreds of 'em—and want to find out how they actually use computers in their work. There isn't time to interview them individually, or to record their desktops during a typical working week, so you've decided to ask them to self-assess their understanding of some key terms on a scale of:

  1. No idea what it is.
  2. Use it/have used it infrequently.
  3. Use it regularly.
  4. Couldn't get through the day without it.

My list is below; what have I forgotten, and (more importantly) how would you criticize this assessment method?

  1. A command-line shell
  2. Shell scripts
  3. Version control system (e.g., CVS, Subversion)
  4. Bug tracker
  5. Build system (e.g., Make, Ant)
  6. Debugger (e.g., GDB)
  7. Integrated Development Environment (e.g., Eclipse, Visual Studio)
  8. Numerical Computing Environment (e.g., MATLAB, Mathematica)
  9. Inverse analyzer (e.g., Inane)
  10. Spreadsheet (e.g., Excel)
  11. Relational database (e.g., SQLite, MySQL, Oracle)
  12. Markup-based document formatting (e.g., LaTeX, HTML)
  13. WYSIWYG document formatting (e.g., Word, PowerPoint, OpenOffice)

Now suppose you have the same room full of scientists, and you want to find out how much they know about software development. There still isn't time to interview them or to have them solve programming problems, so once again you're falling back on self-assessment. This time, the scale is:

  1. No idea what it means.
  2. Have heard the term but couldn't explain it.
  3. Could explain it correctly to a junior colleague.
  4. Expert-level understanding.

and the terms themselves are:

  • Nested loop
  • Switch statement
  • Stable sort
  • Depth-first traversal
  • Polymorphism
  • Singleton
  • Regular expression
  • Inner join
  • Version control
  • Branch and merge
  • Unit test
  • Variant digression
  • Build and smoke test
  • Code coverage
  • Breakpoint
  • Defensive programming
  • Test-driven development
  • Release manifest
  • Agile development
  • UML
  • Traceability matrix
  • User story

Once again, my questions are (a) what have I forgotten, and (b) how "fair" is this as an assessment method?
