By Kim Miklusak
I am currently taking an assessment course to finish up my ELL endorsement, and our course textbook has me thinking about the overall validity of the assessments we give in our subjects--not only with ELL students, but with all students.
Specifically I have been reflecting on three major areas:
1. Do our assessments actually assess what we want them to, or do other aspects get in the way? For example, do students struggle with the vocabulary in our directions or questions? Sometimes students cannot accurately demonstrate what they know because they are held back not only by the words we use, but also by our sentence structure and writing style. To what degree do we balance words we feel students "should" know with making it as clear as possible for them to demonstrate what they do know?
2. How are our assessments visually arranged? Multiple choice answers, for example, are best read vertically, but to save paper we frequently arrange them horizontally or in some other layout. At other times it may be hard for a student to tell where one section ends and another begins. Are our fonts and font sizes clear enough? Are sections spaced well? All of these factors may affect validity.
3. Are our directions clear enough? For example, when appropriate, are we telling students how many sentences they should write or what types of words they should use in their responses? If we are not getting back the information we want from an assessment--if students don't know what they're being assessed on and what to expect--then perhaps we were not clear about what we were asking for in the first place.
I'd be very interested in doing lesson studies with people on assessments. I think this also becomes increasingly important as some of our assessments move to digital formats--what does that change for the validity of responses? Stop down to the CollabLab if you're interested in this or other lesson study ideas.