A non-psychometrician named Ted Rueter repeats a lot of the anti-testing myths in his CollegeNews.org article condemning NCLB:
What's wrong with testing, testing, testing? Plenty. First, annual high-stakes testing impedes learning. It produces rote memorization and a "drill and grill" curriculum...
Because, as we all know, an educational curriculum that requires students to master the basics and learn facts via memorization before moving on to higher-order thinking has never been shown to be effective.
Also, high-stakes testing encourages school dropouts. In Massachusetts in 2003, almost twenty percent of high school seniors did not pass the Massachusetts Comprehensive Assessment to receive their high school diplomas--including 44 percent of the state's black seniors and half its Hispanic seniors.
Nice to see that the test alone is getting blamed here for MA's dropouts, and that we're assuming causation from a simple relationship. Tell me, what was MA's graduation rate before NCLB was implemented? Good luck finding a straightforward answer to that question, but this article notes that Massachusetts had a 73% graduation rate in 2000. If 80% of Massachusetts's seniors are now passing the exit exam, doesn't that suggest that the test isn't what's holding everyone back?
The difference in graduation rates between whites and minorities is still a crime, of course, but it's disgusting to see that blamed on tests, as though no such difference existed before NCLB, and as if the test scores alone - and not the lack of knowledge behind them - are what is holding back minority students today.
The No Child Left Behind Act also restricts the curriculum. It produces a narrow focus on math and reading test scores. Schools desperate to improve their test scores are eliminating courses in art, music, speech, debate, home economics, industrial arts, history, social studies, and physical education--as well as recess.
Raise your hand if you think a school should focus on industrial arts when it can't teach students how to read, write, and do simple arithmetic.
In addition, the Act narrows the range of performance-based accountability. Who says that a standardized test is the only way to measure student achievement? What about portfolios, exhibitions, essays, student-initiated projects, and teacher evaluations?
You know, it's touching the way that the uninformed place such great faith in those mystical "performance evaluations." I pontificated at length on this a couple of years back, and my comments still stand (and are still rarely addressed by standardized-testing critics):
One example of the schism between those who dream and those who produce in the world of educational reform is the current fad for performance assessments (or portfolios). Those who tout these exams as an educational cure-all often have a mystical and unrealistic concept of them. They envision these exams as non-standardized, low-test-anxiety, touchy-feely, unbiased, multi-dimensional measures of "higher-level thinking" that don't require a lot of time to grade, yet are also perfectly reliable, perfectly valid, and inexpensive. These dreamers don't want to hear us when we tell them that these assessments require a great deal of funding to develop, lengthy amounts of time to administer and grade, and many controls in place to avoid rater bias.
Rater training is difficult work, and ratings must be done blind to avoid bias based on unrelated student qualities (such as race). Even with superb training, raters often disagree with one another or with the scoring rules, and the reliability of the scores is driven downward. And the more qualified the rater, and the more training the rater receives, the more that rater must be paid.
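To see why rater disagreement matters so much, consider how agreement is actually measured. A standard statistic for this is Cohen's kappa, which corrects raw agreement between two raters for the agreement you'd expect by chance alone. Here's a minimal sketch; the rubric (0-3 scale) and the rater scores are invented purely for illustration:

```python
# Illustration only: Cohen's kappa for two raters scoring the same
# ten essays on a hypothetical 0-3 rubric. The scores are made up.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of essays where both raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement: probability both raters independently
    # assign the same category, given each rater's score distribution
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

rater_a = [3, 2, 2, 1, 0, 3, 2, 1, 1, 2]
rater_b = [3, 2, 1, 1, 0, 2, 2, 1, 0, 2]
print(round(cohens_kappa(rater_a, rater_b), 3))  # prints 0.577
```

Note what happens here: the two raters agree on 7 of 10 essays (70% raw agreement), yet kappa is only about 0.58 once chance agreement is subtracted out. That gap between raw and chance-corrected agreement is exactly why "just have teachers grade the portfolios" is harder than it sounds.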
Even if raters were perfect and cheap, developing a broad performance assessment is an extremely difficult task. If it's meant to measure something different from the multiple-choice exams, then what do we correlate the scores with to see what the test does measure? What type of items should be used? How do we quickly score open-ended items? How valid are short-answer items? What's the impact on certain subgroups if we suddenly switch item types? Do we move from one kind of test anxiety to another? And how are we supposed to combat test anxiety when certain activists keep insisting that our assessments are racially biased? Switching from an objective (multiple-choice) exam to a more subjective one increases the possibility of test bias. What if the test-score gap increases with these new assessments?
Back to Rueter's article, where we learn that firm parents are doing it all wrong:
Constant testing also increases pressure on young children. The Act calls for math and reading tests in third grade--when most students are eight years old. Putting pressure on young children runs counter to everything we know about the psychology of children and the psychology of learning...
The pressure to improve student test scores has also led to cheating.
So much for challenging children and filling up all those hungry, empty little heads with languages, manners, math, life skills, etc. I suppose we should wait until their synapses slow down before trying to put any educational pressure on them.
And as far as cheating goes, I can't improve on how I responded to similar blathering two years ago:
[The author] believes that high-stakes testing fails because the presence of high stakes encourages cheating. By that reasoning, all grades should be eliminated, because certainly students cheat in class, and all taxes should be eliminated, because taxpayers certainly like to cheat on their 1040s. Indeed, any strict set of rules that has ever been broken, or any set of standards ever circumvented, can be tossed out the window by this logic.
Finally, we see that testing is just plain unhelpful:
Also, annual testing does nothing to improve schools and student performance. It focuses on punishment, negative labels, and threats.
The classic dodge - testing does nothing to improve education in and of itself, so we shouldn't use it, especially given that we slap the negative label of "failing" on those who, well, fail. The funny thing is that every psychometrician would agree that testing in and of itself doesn't fix matters - but good luck trying to determine whether a bad situation has gotten better without a pre- and post-assessment that is cheap, reliable, valid, easy to score, and easy to interpret.
We test schoolchildren, and schools, for the same reason that, no matter how much you practice learning to drive, you still have to take the driver's test to get a license. There has to be some way of assessing whether an educational method works.
This article isn't as hysterical as some, but for a nicely-spun and completely unsupported string of anti-testing statements, it's hard to beat. Wonder how many education majors will read this and take Ted Rueter's statements as gospel? One also wonders why, when there are other issues with NCLB that should be criticized, the critics keep focusing on the tests alone.

Posted by kswygert at September 9, 2005 11:13 AM