The Newsweek article on the Spellings Commission's recommendation that schools' graduates be tested to ascertain whether or not they've actually been educated used as its graphic a pencil and a machine-graded multiple choice form (a bubble sheet), implying that multiple choice testing will simply continue forever.
No criticism of this method of examining students is heeded, ever.
It remains an article of faith that machine-graded multiple choice examinations are the only unprejudiced method available for testing. No one remembers Banesh Hoffmann's criticism of MC testing from many years ago, and no one notices that the internet makes testing modalities available today which didn't (or couldn't) exist in the paper-and-pencil era. With the continuing flak about "no child left behind," it seems imperative that some discussion of testing the tests for veracity of judgment be carried on, but everyone is silent on this issue.
It is clear (to me) that multiple choice questions are tests both of the subject matter and of reading. Worse, the reading required by MC questions is specialized; it's not the reading we do for prose or poetry, it's unique to the format.
Furthermore, the reasoning employed in a multiple choice environment is not reasoning that carries over into the normal activities of even the smartest and most creative people. Guessing, including enlightened guessing, is encouraged. Many test tutors teach reasoning schemes which have nothing to do with the logic of the problem actually being addressed.
And continuing in this vein, one notes that MC tests do not exist in the "real world," where answers must be constructed out of nothing, out of air and thoughts. What are called "constructed response" items, in which there are no hints whatsoever concerning the ultimate answer, are much closer to reality than MC responses, which are not meaningful whether they're right or wrong.
The major advantage of paper-and-pencil multiple choice examinations is that the teacher needn't grade them. A machine can do that. Hail to the machine, and to the teacher who has figured out how to lower his or her workload.
We've known for years how to create "constructed response" numerical items which are machine readable, and to the best of my knowledge (TTBOMK) these are used only in mathematical competition examinations. I've used them for years (although hand graded, since no machine on campus could read and interpret the items properly) as a way to force students to answer in a single place in a single format. The scheme mirrors the "cgi-bin" Perl programs I've also used for years to let students answer "constructed response" items on the World Wide Web.
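A minimal sketch of how such a machine-readable numerical item might be graded, assuming a hypothetical format of my own invention (not the author's actual scheme or the competition format): the student fills one bubble per column of a digit grid, the columns are decoded into a number, and the result is compared to the key within a tolerance.

```python
# Hypothetical digit-grid format: each column of the answer grid has one
# filled bubble, whose row index selects a symbol from SYMBOLS. The grader
# decodes the columns into a number and scores it against the key.

SYMBOLS = "0123456789.-"

def decode_response(columns):
    """Turn a list of filled-bubble row indices (one per column) into a
    float. Returns None if the marks don't form a valid number."""
    try:
        text = "".join(SYMBOLS[i] for i in columns)
        return float(text)
    except (IndexError, ValueError):
        return None

def grade(columns, key, rel_tol=0.01):
    """Score 1 if the decoded value is within rel_tol of the key, else 0.
    An undecodable response scores 0."""
    value = decode_response(columns)
    if value is None:
        return 0
    return 1 if abs(value - key) <= rel_tol * abs(key) else 0

# A student bubbles the columns for "3.14" (indices of '3', '.', '1', '4'):
marks = [3, 10, 1, 4]
print(grade(marks, 3.14159))  # accepted: within 1% of the key
```

The point of the single-format grid is exactly what the paragraph above describes: the student must construct the answer, yet the response lands in one fixed place in one fixed encoding, so a machine (or a harried human) can score it mechanically without offering any distractors to latch onto.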
The main criticism of MC testing is that the "distractors" are intentionally chosen to mislead students; this is contrary to all normal human practice in problem solving, and is actually harmful to students. When they've made a mistake and find their (incorrect) result in the list of possible answers, it is just plain human nature to choose that (incorrect) answer and move on. Deceiving the student into this kind of mistake is cruel, unnatural, and not in the student's best interest!