My instinct when I see an article or blog post that slams SED is to come to its defense. Granted, if an argument is sound, I'll agree, but when it's not I feel compelled to say something. I think it stems from the fact that I used to work for a school improvement team that fell under the SED umbrella. As a result, when I hear "SED," I think of Dawn, Elizabeth, and other educators I've worked with or heard speak and present about the assessments and standards. I truly believe that the vast majority of educators who make up SED have the best interests of schools and children in mind when they go to work. The challenges they face are numerous - one of which is how the public responds to their actions.
Yesterday, complaints about scoring on the math assessment bubbled up in my RSS reader. Why now? Why this year? We've been using a holistic rubric for years, but this year The Post published an article with the frustrating headline "NY passes students who get wrong answers on tests". Let's dismantle that, shall we? (Not hyperlinked by choice.)
1. You do not pass or fail the NYS 3-8 Math Assessments*. It sounds counter-intuitive, but the assessments are designed as large-scale programmatic assessments - not culminating exams. In other words, the assessments are a tool for schools and the state to determine whether students are meeting certain performance indicators from our state standards. Consequences for performance belong to the school and district, not the student. Compare this to the Regents. If a student scores a Level 2 on a 3-8 assessment, they're referred for intervention services and may or may not receive them based on a review of their classroom work, etc. If you get a 3, a magic passing fairy doesn't suddenly appear and grant you admission to the next grade level. A student scores a 54 on a Regents? They fail the exam. Two different types of assessments. (Analogy alert: it's like taking a cholesterol test to find out how good your blood pressure is. Different questions and purposes, different assessment measures.)
2. The NYS 3-8 Math Assessments contain multiple-choice, short-response, and long-response questions. If one of the multiple-choice questions is 2+2 (don't worry, they're not that simple) and you pick 5, you'll get it wrong. So no, NYS is not giving credit for wrong answers. If, on a long response, you demonstrate you can correctly do the math - i.e., set up the problem, determine which variables to use, show you understand the concept - but make a computational error, wouldn't you want a child to get partial credit? A student gets credit for what they do know and loses points for their mistakes (a rough sketch of this idea follows the list). If the argument is that the "Real World" doesn't work like that, no one is claiming the state assessments are the "Real World." Nor are they claiming the tests measure creativity, sense of humor, or even whether a child is a "good student." They are asking: on this day, at this time, can the students of this school demonstrate mastery of these performance indicators?
3. "This is rocket science." David Abrams, Director of Assessment for NYS. There is a field of study called psychometrics - the design and study of measures in the social sciences, including education. It's a fascinating field and sets the rules and standards for test design. Perhaps I am skewed by my personal connection to SED but I have to ask: Does it make sense that NYS would allow our students to take an assessment that didn't meet the requirements of quality assessments? Does it make sense that NYS would give "partial credit" if there wasn't a sound reason for it? I recommend reading the technical reports about the assessments to learn more.
Yes, there are issues with high stakes testing in the US. I support and follow FairTest. I think we've lost sight of what "data" means and are focusing on numbers to the detriment of multiple measures. I think we as a field have a lot of unanswered - and unasked - questions about standardized testing. All of that being said, this article frustrates me and does not add to the collective conversation around education. The headline was probably designed to get hits. And did it ever.
Here's the hard part for me: self-reflection. Am I off-base? This argument isn't about how the scores are used or the pressure teachers feel around the tests; rather, it's just this narrow point about the irresponsibility of publishing an article that doesn't even begin to address the complexity of a large-scale assessment system. Grumble, grumble.
* Yes. In NYC, a student can fail the assessments. That is a local decision and not how the tests were designed to be used.
1 comment:
Just finished reading the Post article on the scoring of the math tests when I saw your very timely post!!
Having led scoring in math - we know that it is important that students understand the mathematical concepts underlying the question. Thus, they will get partial credit if they set up the problem but have a calculation error. Since the goal of math isn't always one right answer but understanding how (and how many ways) to get an answer, this makes sense. Unless you only want to bash a test to bash a test.
Thank you for presenting "the rest of the story."