Standardized Testing: What (Low) Scores Mean

An article on standardized testing (Revealed: The school board member who took standardized test, by Marion Brady) was published a few days ago in The Washington Post and raised my eyebrows: it denounces a real scandal but, at the same time, shows some bad reasoning that is worth analyzing here.

In short, an esteemed professional on the School Board of one of the largest school systems in the US took his state’s high-stakes standardized math and reading tests for 10th graders, and made his scores public.

How did he score?

“The math section had 60 questions. I knew the answers to none of them, but managed to guess ten out of the 60 correctly. On the reading test, I got 62%. In our system, that’s a ‘D’, and would get me a mandatory assignment to a double block of reading instruction.

He continued, “It seems to me something is seriously wrong. I have a bachelor of science degree, two masters degrees, and 15 credit hours toward a doctorate.”

So, he got a very low score. Surprised? However, the conclusions and the logic used to reach them are the real centerpiece here. Please note that there is a logical point made here which is simply wrong: I have a science degree but failed a math test, ergo the math test must be wrong (because my science degree shows I have knowledge of math). This is a fallacy, people, one of those fallacies that math actually helps us avoid! Why? Because it may well be that I do not deserve my science degree! I find it disturbing that a grown-up, serious professional cannot answer a single question of 10th-grade math. Really disturbing. I am sure that his Korean or Finnish counterpart would do very well, thanks.
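As a quick sanity check on those numbers (my own aside, not the article's): if the math section used the same four-choice format the tester later describes for the reading section, an assumption on my part, then answering all 60 questions at random would yield about 15 correct on average. A minimal sketch:

```python
import random

# Rough check of the guessing arithmetic. Assumption (mine, not the
# article's): 4 answer choices per question, as described for the
# reading section; the math section's format is not spelled out.
QUESTIONS = 60
CHOICES = 4
TRIALS = 10_000

def guess_score(questions=QUESTIONS, choices=CHOICES):
    """Correct answers obtained by answering every question at random."""
    return sum(random.randrange(choices) == 0 for _ in range(questions))

average = sum(guess_score() for _ in range(TRIALS)) / TRIALS
print(f"Expected score from pure guessing: {QUESTIONS / CHOICES:.0f} / {QUESTIONS}")
print(f"Simulated average over {TRIALS:,} trials: {average:.1f} / {QUESTIONS}")
```

Ten out of 60, in other words, sits right around (indeed below) what pure chance produces, which is exactly what you would expect from someone who, by his own account, knew none of the answers.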

But let’s go to the main points.

First, obviously, both the article’s writer and the “person testing the tests” conclude that standardized tests (at least these ones) do not actually measure anything with accuracy.

He said he understands why so many students who can actually read well do poorly on the FCAT.

“Many of the kids we label as poor readers are probably pretty good readers. Here’s why.

“On the FCAT, they are reading material they didn’t choose. They are given four possible answers and three out of the four are pretty good. One is the best answer but kids don’t get points for only a pretty good answer. They get zero points, the same for the absolute wrong answer. And then they are given an arbitrary time limit. Those are a number of reasons that I think the test has to be suspect.”

That’s one very good point. These tests are often too strict and offer only black-or-white answer choices. They are punitive in principle: either the pupil is prepared *for the test* or she will fail. The pupil must show she is able to perform well (meaning, perform during the test itself). I think we agree on this: these tests are bad. We should figure out a way to industrialize the testing of knowledge and skills in a more dignified way, one that truly assesses human potential.
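To make the “zero points for a pretty good answer” complaint concrete, here is a toy comparison between the all-or-nothing scoring the quote describes and a partial-credit variant. The question labels, answers, and the 0.5 weight are hypothetical choices of mine, not the FCAT’s actual rubric:

```python
# Toy illustration of all-or-nothing scoring versus partial credit.
# Question labels, answers and the 0.5 weight are hypothetical,
# not the FCAT's actual rubric.

def strict_score(answers, best):
    """One point for the single 'best' answer, zero for anything else."""
    return sum(1 for q, a in answers.items() if a == best[q])

def partial_score(answers, best, pretty_good, credit=0.5):
    """Full point for the best answer, partial credit for a defensible one."""
    total = 0.0
    for q, a in answers.items():
        if a == best[q]:
            total += 1.0
        elif a in pretty_good.get(q, set()):
            total += credit
    return total

# A student who always picks a defensible, but not the 'best', reading:
best        = {1: "B", 2: "A", 3: "D"}
pretty_good = {1: {"C"}, 2: {"D"}, 3: {"A"}}
answers     = {1: "C", 2: "D", 3: "A"}

print(strict_score(answers, best))                # 0   -- labelled a poor reader
print(partial_score(answers, best, pretty_good))  # 1.5 -- a rather different picture
```

Under the strict rule, a student who consistently picks a defensible but not “best” answer scores zero, indistinguishable from one who got everything absolutely wrong; a graded rubric at least separates the two.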

The second point inferred is, obviously, that these tests evaluate useless knowledge, such as (you guessed it) math.

“I have a wide circle of friends in various professions. Since taking the test, I’ve detailed its contents as best I can to many of them, particularly the math section, which does more than its share of shoving students in our system out of school and on to the street. Not a single one of them said that the math I described was necessary in their profession.”

If the person who took and failed the test has obtained graduate science degrees, doesn’t this all mean the math the test was trying to assess is actually useless? (This is actually a different question from the one posed at the beginning of this post.)

Note that this could certainly be the case. My point, though, is that it need not necessarily be so. I mean, we undergo a full-fledged math education for a number of reasons, of which usefulness is but one, and not the main one. Abstraction is a cognitive skill best acquired through math, and often unconsciously. Reasoning within a formal system is another. Logic; rational, rigorous thought. And then, sure, equations, percentages, word problems, and so on, up to Calculus and beyond. Thus, I consider the claim propagated by this writer and his tester friend a terrible myth. First, I seriously doubt that the math in the test was, as they claim, not even remotely necessary in their professions; I am pretty sure of the exact opposite, in fact! Second, the tester’s success in his profession and graduate studies might also have been helped by his long-forgotten math: perhaps he has forgotten some (surely not all!) tricky methods, but he likely retained the meta-cognitive skills.

The point should not be about the usefulness of the math (or of whatever subject area a test evaluates), but about the tests themselves being a bad way (when they are properly done) and a terrible way (when they are poorly prepared) of checking our students’ skills and knowledge.

The test, especially in the US, is the heir of the Industrial Revolution and of the idea that huge numbers of students can thus be evaluated for alignment with one subject area or another. This is the crucial point we should try to fix.

So, instead of talking about the “need” for such subject testing (and study), which must be analyzed properly and holistically within a full curriculum, we should focus on changing the basic way we do assessment and the way it closes the door to better educational opportunities for our students.

