Assessing Assessment
This post originally appeared on the Software Carpentry website.
Jason Williams and I met with two consultants at the University of Texas at the end of September to get feedback on Software Carpentry's post-workshop survey. They gave us detailed suggestions for improving six of the questions, and felt the rest were fine as they were. The feedback is given below; even without the whole questionnaire (which we will post shortly), we hope it's helpful.
Change "I learned valuable skills" to "I learned skills that I can use in my research" because valuable is too vague and subjective.
"The instructors/helpers were effective" is vague. Maybe add an "(e.g. knowledgeable, good communicators)" or make multiple specific questions.
There were a few suggestions/options for improving the question, "Before the workshop, did you feel any of the following topics were intimidating?"
Change the response to a sliding scale to allow finer resolution between "intimidating" and "not intimidating".
Change the response to a Likert scale and rephrase the question to something like, "For each topic, how well do you identify with the statement 'Before the workshop, this topic was very intimidating.'" Or, you could use a "describes my feelings" scale or agree-disagree scale.
Also, students may not be able to accurately self-report their pre-workshop attitudes after taking the workshop. This question could be moved to the pre-assessment survey.
There were a few suggestions for, "After taking this workshop, which statement best reflects how you feel about learning more on the following subjects?"
Change the question to, "After taking the workshop, how interested are you in learning more about the following topics?"
New responses could be something like 'uninterested/not relevant', 'uninterested/already know enough', 'slightly uninterested', 'neither uninterested nor interested', 'slightly interested', 'interested'.
For the second skills question, we might be able to get at both what was covered in the workshop and what skills the student gained by using a side-by-side question format. For example:
The question would be, "For the following tasks, please rate the quality of instruction and how confident you are in your ability to perform the task."
Scale for the quality-of-instruction sub-question: poor to excellent, or below average to above average.
Scale for the task-performance sub-question: no chance to good chance, or easy to difficult.
Change "sex" to "gender" for demographic questions.