Monday, October 13, 2008

The NACAC Report on Standardized Testing in College Admissions

The National Association for College Admission Counseling (NACAC) recently released a major report on the use and misuse of standardized tests in college admissions. The report has gotten a lot of press, including an editorial in the New York Times.

The major tests used nationwide for college admissions are the SAT and the ACT. I was dismayed by the extent to which the NACAC report treats these two tests as interchangeable. The two tests differ in a number of ways - for instance, in their approach to math. The ACT math test measures a specific set of math skills and techniques sampled from the early high school curriculum. The SAT math test, on the other hand, demands more of the kind of mental gymnastics we commonly call "reasoning."

(I think psychometricians agree with me that the two tests "measure different constructs." But honestly, all you have to do is work some practice tests and you'll see what I mean.)

The difference between the two exams is occasionally recognized in policy: the U.S. Department of Education has allowed Illinois to use the ACT for accountability purposes under NCLB (the test was tweaked to align better to Illinois's state standards), whereas to my knowledge the SAT has not been approved for state use. Even FairTest, no friend of either exam, admits that the ACT is more closely aligned with high school curricula than the SAT.

The NACAC report nevertheless treats the two exams as identical. Here's how the Executive Summary breaks down:
Number of times the acronym ACT appears in the Executive Summary in phrases like, "the ACT and the SAT": 13
Number of times the acronym ACT appears outside of such phrases: 0
Now let's compare this treatment with the research coverage in the bibliography of the report:
Number of titles in the bibliography: 82
Number of titles in the bibliography mentioning the acronym ACT: 3
Number of titles in the bibliography mentioning the SAT, PSAT, ETS, or College Board: 41
Half of the bibliography is specifically about the SAT and related programs; only about 4% of the bibliography is specifically about the ACT. An evidence base weighted this heavily toward the SAT does not seem to justify drawing identical conclusions about the two exams.
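
For what it's worth, the arithmetic behind those percentages is easy to check. Here is a minimal sketch in Python, using only the counts tallied above; the variable names are my own.

```python
# Sanity check on the bibliography proportions cited above.
# The counts come straight from the tallies in this post; nothing else is assumed.
total_titles = 82
act_titles = 3    # titles mentioning the acronym ACT
sat_titles = 41   # titles mentioning the SAT, PSAT, ETS, or College Board

print(f"ACT share: {act_titles / total_titles:.1%}")  # 3.7%, i.e. roughly 4%
print(f"SAT share: {sat_titles / total_titles:.1%}")  # 50.0%, i.e. half
```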

By the way, one of the three specific references to the ACT in the bibliography is a document entitled, "NACAC Commission on the Use of Standardized Tests: ACT's Response and Recommendations Regarding the Critical Issue Areas Identified by the Commission." Oddly, the author of this response is misidentified in the bibliography as NACAC itself. But in case anybody wants to read ACT's reply, the document can be found here. ACT actually affirms many of the report's recommendations. I have not seen this fact noted in press coverage of the commission's report.

There is a revealing article by Scott Jaschik at Inside Higher Ed about the NACAC report and the association's meeting in Seattle. Jaschik refers to the report as "the SAT report"; perhaps he read the bibliography too. At one point he writes of the goings-on at the association meeting:
Others who spoke at the forum and elsewhere at the meeting generally agreed. Some focused broadly on the SAT, while others had specific complaints - fees charged by the College Board, a new College Board policy making it easier for students to take the SAT repeatedly without reporting that to colleges, lack of oversight of the College Board. (While the NACAC commission’s recommendations were put forth to apply equally to the SAT and ACT, most of the anti-testing rhetoric at the meeting was directed at the SAT and the College Board, not the ACT.)
***

The following passage in Jaschik's article caught my eye:
To applause, [NACAC member Susan Tree] cited "fundamental flaws in a test that has become one that continues to correlate more highly with family income and educational background than academic promise."
The association between poverty and test scores has been much-studied and is not in doubt. What's more, this is not simply a case of correlation without causation. Believe me: it's causation. Poverty causes low test scores. But what people forget, or maybe don't notice, or just skip past, is that this is not a case of direct causation. Think like a physicist with me here for a moment. How are test scores generated? By scanning bubble sheets. So what, physically, are the direct causes of very low test scores? I can think of two.

(1) A very bad scanner malfunction.

(2) A lot of wrong answers.

Now, I really don't think the problem is that poor school districts are ending up with all the crappy scanners. Although maybe we should look into that. After all, they get most of the other crappy stuff. But pending the outcome of that investigation, I think we must believe that what's leading to the low test scores is that poor children are offering a lot of wrong answers.

And why is it the case that poor children give so many wrong answers?

The testing literature details many factors that lead to wrong answers. Test anxiety would be one example. On an individual level, such an explanation makes a lot of sense. We all remember times when we flubbed a test because of anxiety or a bad night's sleep. But when we're talking at the aggregate level, these explanations suffer from an order-of-magnitude problem. In our poorest districts, vast numbers of students score very near rock bottom. It would take a perfect storm of high anxiety, low motivation, and misaligned bubble sheets, not to mention a citywide spate of fire alarms and barking dogs on the day of the test, to explain a phenomenon of this magnitude. No. My inclination is to stick to an obvious and simpleminded explanation. Poor children are giving lots of wrong answers because they don't know very many of the right ones.

And why don't poor children know very many of the right answers?

Well. That's the question, isn't it.

The academic underperformance of poor children is one of the most persistent and serious failures of American public education. I don't know all of the mechanisms that contribute to the problem, any more than anyone else does. But I do know that if the tests go away, then we are flying blind. Last year, 12% of African Americans taking the ACT scored high enough to indicate likely success in college math; whites were four times as likely to score at college-ready levels. (See here, Figure 3.) If the ACT goes away, then so does this damning statistic, and with it our ability to chart growth in response to interventions.

When progressives argue that the tests have to go away because of the way they correlate with poverty, they are cruelly wrong. The time to drop the tests is when they no longer correlate with student poverty.
