Monday, June 10, 2013

The SAT-ACT Score Map

[Map: combined SAT-ACT scores by state, controlled for test type and participation (sat-actpartcont.gif)]
(Note: Following this post, I shall focus on finishing the MAOA bibliography until it is up to date and I can better quantify its data.)

In my last post on college-entrance exams, I left incomplete the task of properly controlling a state map of combined SAT and ACT scores for test participation. I had already explored group average SAT gaps by race and gender and SAT score distributions. Above, I am finally posting what I consider the definitive state map, properly controlled for test type and state participation levels.

The map demonstrates my contention that American demographic changes contribute to a North-South educational divide. Detailed mapping of potential academic decline can help inform discussion of policies like “immigration reform,” help extrapolate the future global competitiveness of the American workforce, and delineate regional economic fault lines. In the explanation that follows, I use regression analysis to compare the effects of race and of state participation in the SAT and ACT. Then, I shall review an important study on the relative importance of these scores and what might augment or replace them.

The testing organizations publish a standardized table to convert between ACT and SAT scores. The primary table converts between the composite ACT score and the combined SAT mathematics and critical-reading (formerly verbal) score. (I divided the scores in half for purposes of comparison with SAT subtest scores.) A separate table converts between the newer SAT writing score and the score for the optional ACT writing exam. However, I shall set aside the writing-score data for now; the previous post maps raw SAT writing scores.

My last attempt at ACT-to-SAT score conversion was a crude estimate that anchored only the highest and lowest ACT scores and drew a straight line between them for all others. The tests follow different scales, with several possible SAT scores corresponding to almost every ACT score. Therefore, I created a new formula based on a linear regression of the average SAT score at each ACT increment. Since state ACT averages tend to hover around 21, this graph illuminates how my previous formula unfairly underestimated states that emphasize the ACT over the SAT.
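As a rough sketch of the regression-based conversion, the snippet below fits a line through (ACT, SAT) pairs; note that the anchor points used here are hypothetical stand-ins for the published concordance table, not the actual table values:

```python
import numpy as np

# Hypothetical (ACT composite, SAT math + critical reading) anchor points
# standing in for the published concordance table -- not the real values.
act = np.array([12, 18, 24, 30, 36])
sat = np.array([590, 870, 1110, 1340, 1600])

# Least-squares fit of SAT as a linear function of ACT.
slope, intercept = np.polyfit(act, sat, 1)

def act_to_sat(act_score):
    """Convert an ACT composite to an estimated combined SAT score."""
    return slope * act_score + intercept
```

A state averaging 21 on the ACT then converts through the middle of the fitted line rather than being dragged toward an endpoint-based estimate.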

As states increasingly have required the ACT for all high-school graduates, their average scores have declined. Plotting below each state’s yearly ACT and SAT score since 1998 by participation level confirms the association. All associations achieve statistical significance (P = 9.54 × 10^-27 for the ACT, P = 3.75 × 10^-33 for the weighted SAT-ACT scores), with the SAT alone achieving the greatest significance of any tested relationship in this entire effort (P = 5.4 × 10^-245) and the largest coefficient of determination (0.769). Midwestern states that strongly emphasize the ACT achieved impressive average SAT scores and seem to have an outsized impact on this finding. The combined SAT-ACT participation rates are out of a possible 200%, which would require all high-school graduates to take both the SAT and the ACT.
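The participation-score association can be reproduced in miniature with `scipy.stats.linregress`; the state-level data below are fabricated for illustration, with a downward trend built in:

```python
from scipy.stats import linregress

# Fabricated (participation %, average score) pairs with a built-in
# negative trend, mimicking the state-level pattern described above.
participation = [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
score = [1088, 1071, 1056, 1040, 1022, 1012, 994, 983, 965, 952]

fit = linregress(participation, score)
# fit.slope is negative: higher participation, lower average score.
# fit.rvalue ** 2 is the coefficient of determination (R^2), and
# fit.pvalue tests the null hypothesis of zero slope.
```

With real data, the same three attributes give the slope, the R² values, and the P-values reported above.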

The comparison of score maps to demographic trends, which I presented with a map of the percentage of SAT examinees who are white or Asian, fits the familiar national racial group mean gaps. Simple linear regression better quantifies the effect of race, and multiple linear regression can tease apart the effect of state participation levels. Asians have the highest scores, but I lumped whites with Asians because Asians are a relatively small group, and I would expect the other racial gaps to confound any attempt to separate Asians from the rest. The distinction I drew might seem arbitrary, but many institutions separate data on Asians and whites from that of “underrepresented minorities.” The graphs of scores by racial proportion appear to show linear associations, all of which are significant (P = 3.51 × 10^-53 for the SAT, P = 1.33 × 10^-41 for the ACT, and P = 1.48 × 10^-40 for the tests combined). Multiple regression shows that all associations remain significant: for the combined SAT-ACT score, participation had a P-value of 3.35 × 10^-22, and race had a P-value of 1.29 × 10^-29. The multiple regression model set the score equal to 986 − 0.782 × participation (as a whole number, out of 200) + 97.5 × the white-or-Asian share (as a fraction).
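Written out as code, the fitted model is simply the reported coefficients (participation is the combined whole-number percentage out of 200; the racial share is taken as a fraction between 0 and 1):

```python
def predicted_score(participation, share_white_or_asian):
    """Multiple-regression model from the text: combined SAT-ACT score
    as a function of participation (0-200, whole-number percent) and
    the white-or-Asian share of examinees (0.0-1.0)."""
    return 986 - 0.782 * participation + 97.5 * share_white_or_asian
```

For example, a state with 100% combined participation and an 80% white-or-Asian examinee pool would be predicted to score 986 − 78.2 + 78.0 = 985.8.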

The residuals from subtracting this model from the raw data fit a Gaussian distribution, as expected. So, I recalculated the SAT-ACT composite scores by adding the residuals back to the model under the assumption of 100% (out of a possible 200%) participation.
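The recalculation can be sketched as follows: take each state's residual against the model and add it back to the model's prediction at 100% participation. The coefficients are those reported above; the example inputs are hypothetical.

```python
INTERCEPT = 986
PARTICIPATION_COEF = -0.782  # per whole-number percentage point (0-200)
RACE_COEF = 97.5             # per unit fraction white or Asian

def model(participation, share_white_or_asian):
    return (INTERCEPT + PARTICIPATION_COEF * participation
            + RACE_COEF * share_white_or_asian)

def adjusted_score(observed, participation, share_white_or_asian):
    """Re-anchor an observed state score to 100% (of 200%) participation
    by carrying its residual over to the model's prediction at 100%."""
    residual = observed - model(participation, share_white_or_asian)
    return model(100, share_white_or_asian) + residual
```

Note that the racial term cancels in the subtraction: the adjustment reduces to observed + 0.782 × (participation − 100), so only the participation gap moves a state's score.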

While the score map might not appear identical to the map of demographic trends, one can make out a North-South gradient, and the effect of state test requirements seems well controlled. Further analysis could use ANCOVA for income categories or compare racial gaps within states. Composite writing scores could prove useful, despite the shorter timeframe for the SAT and the optional nature of the ACT written exam. In fact, the states of Texas, Nevada, and Florida might not seem to perform so badly on this map, given their diversity, but their raw SAT writing scores were especially low. Then again, if immigration is weighing down English writing skills, the effect could resolve with acculturation.


Now, I wish to review a 2009 study on the relative relevance of SAT and ACT scores. Schmitt et al. compared the predictive value of those scores to high-school grades and twelve “noncognitive predictors”: knowledge, curiosity, adaptability, perseverance, ethics (“not cheating”), career orientation, healthy behaviors, interpersonal skills, “leadership,” community volunteer activities, “artistic and cultural appreciation,” and “appreciation for diversity” (“e.g. by culture, ethnicity, religion, or gender”). As the US Supreme Court revisits the issue of affirmative action in college admissions, universities might apply such predictors to lessen the influence of standardized tests.

None of the “noncognitive predictors” could predict college grades even half as well as either high-school grades or the SAT/ACT scores, which had correlations of 0.531 and 0.539, respectively. Knowledge came closest, but I think knowledge is cognitive. Career orientation was actually alone in its statistically significant negative association with college grade-point average. The authors offered as their only explanatory hypothesis the poor performance of African Americans, “for whom career mobility and a career orientation was a major reason for college attendance.” Indeed, career orientation was the strongest advantage for African Americans, who barely scored higher than whites on “appreciation for diversity.” Their only other advantage was perseverance. High-school grades underestimated African-American college grades, but not as much as SAT/ACT scores overestimated them.

Adding the “noncognitive” criteria to admissions selection would lower college grades in general but raise African-American and Hispanic-American admissions at the 15% most exclusive universities, at the expense of white and especially Asian-American applicants. However, African-American college graduation rates would fall eight percentage points at such institutions.
SAT/ACT scores were significantly associated with higher college classroom absenteeism and lower “organizational citizenship behavior,” with which “appreciation for diversity” had a significant positive association. In other words, the more intelligent students were less inclined to go to all lectures, promote “the university to outsiders,” defend “it against criticism,” and participate “in student government or other clubs” to make the university “a better place.” The authors did not conclude that those are relatively unintelligent behaviors.

Schmitt N, Keeney J, Oswald FL, Pleskac TJ, Billington AQ, Sinha R, & Zorzie M (2009). Prediction of 4-year college student performance using cognitive and noncognitive predictors and the impact on demographic status of admitted students. Journal of Applied Psychology, 94(6), 1479–1497. PMID: 19916657


Amos said...

This is cool!

Anonymous said...

The reason for the limited validity of college entrance exams, and also the reason why American MDs aren't very smart (but extremely pushy), is that unlike EVERY other country on earth the US (and Canada) don't use cumulative exams to determine the quality of the UG degree.

BTW, I'm one of Herr Doktor Hsu's subjects.

Anonymous said...

I remember the TAs were warned about the premeds. "They won't let you give them less than an A."

God what a shitty country.

Anonymous said...

Indeed, career orientation was the strongest advantage for African Americans, who barely scored higher than whites on “appreciation for diversity.”

Their only other advantage was perseverance.

High-school grades underestimated African-American college grades, but not as much as SAT/ACT scores overestimated.

My first time here...can you please
explain the above to me?