The utility of a ranking system is determined largely by two factors. The first is the requirement that one compare apples to apples. The second is the choice of an appropriate metric to represent the quality being ranked.
What single metric measures a good school? Some metrics may be dismissed out of hand. For example, SAT performance, one can argue, is significantly influenced by parental income. The same could be said of ACT performance.
What about Advanced Placement (AP), International Baccalaureate (IB), or a combination of the two? Some argue that by counting the number of AP tests administered in a school rather than the number of tests passed, one factors out the influence of parental income. Furthermore, it is argued, AP and IB tests are developed and graded by external agencies, eliminating any internal grading bias by the schools being ranked. If true, one could argue with equal force that the number of SAT and ACT tests administered at a school is of similar value as a metric for measuring good schools.
However, it is AP and IB in particular that have taken hold of the national psyche, at least in the national capital area, as a tangible measure of a good school. That is due in large part to the promotion of their use in a popular ranking system created by Jay Mathews, an education reporter at The Washington Post.
The methodology, one could argue, is a triumph of reductionism and syllogistic reasoning at its zenith. Mathews accomplishes this feat by eschewing statistical analysis in favor of simple arithmetic. The index, says Mathews, is a simple ratio: the sum of the Advanced Placement, International Baccalaureate, and Advanced International Certificate of Education tests given at a school in a given year, divided by the number of graduates that year, yields the index by which he ranks schools.
The idea that a higher ratio signals a better school may be dismissed at first glance by noting that any school with a dismally low graduation rate is apt to perform better on the index. For example, consider two schools with 1000 seniors each. The first, at which 1000 AP tests were administered and all 1000 students graduated, would have an index of 1 (1000/1000). The second, which also administered 1000 AP tests but graduated only half its seniors, 500, would have an index of 2 (1000/500) and be rated the better-performing school. No complex argument is required to show that the index, taken at face value, is flawed.
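To make the arithmetic concrete, here is a minimal sketch in Python of the index calculation for the two hypothetical schools above; the figures are those of the example, not of any real school.

```python
def challenge_index(tests_administered: int, graduates: int) -> float:
    """Ratio of tests administered to graduates, as in the Post-style index."""
    return tests_administered / graduates

# Two hypothetical schools, each with 1000 seniors and 1000 AP tests administered.
school_a = challenge_index(tests_administered=1000, graduates=1000)  # all seniors graduate
school_b = challenge_index(tests_administered=1000, graduates=500)   # only half graduate

print(school_a)  # 1.0
print(school_b)  # 2.0 -- the school with the lower graduation rate ranks higher
```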
What better way to illustrate the flaws of the ranking system than by looking at the data released by Maryland’s largest school district? We too will take the reductionist, syllogistic approach and assume that a good indicator of school quality is the total number of AP tests administered at a high school in a given year, divided by the number of graduates in that year. Using the data provided by the district, the top ten schools in our ranking system are, in order from best to worst: Poolesville, Churchill, Wootton, Walter Johnson, Quince Orchard, Whitman, Richard Montgomery, Bethesda-Chevy Chase, Northwest, and Montgomery Blair.
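For readers who wish to reproduce the exercise, a minimal sketch of the calculation follows. The school names and figures below are placeholders, not the district’s released data; the published counts of tests administered and graduates would be substituted for them.

```python
# Placeholder data: {school: (AP tests administered, graduates)}.
# These numbers are illustrative; substitute the district's published figures.
schools = {
    "School A": (1200, 450),
    "School B": (900, 400),
    "School C": (700, 500),
}

# Index = tests administered / graduates; rank from highest index to lowest.
ranking = sorted(schools, key=lambda s: schools[s][0] / schools[s][1], reverse=True)

for rank, name in enumerate(ranking, start=1):
    tests, grads = schools[name]
    print(f"{rank}. {name}: {tests / grads:.2f}")
```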
The graphic that accompanies this column includes all twenty-five high schools in the district.
How does our ranking compare to that of The Washington Post? The Post ranking for 2015, using 2014 data, was, from best to worst: Poolesville, Bethesda-Chevy Chase, Richard Montgomery, Churchill, Wootton, Walter Johnson, Quince Orchard, Whitman, Northwest, and Montgomery Blair. How did Bethesda-Chevy Chase and Richard Montgomery earn a better ranking on the Post index? While all schools can and do administer AP exams, not all schools have IB programs, so students at some schools have a wider choice of tests. Both Bethesda-Chevy Chase and Richard Montgomery are International Baccalaureate World Schools; students there take IB tests in addition to AP tests, giving rise to a larger numerator in the Post index and a correspondingly higher ranking.
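The effect on the numerator can be sketched in a few lines; the figures here are again illustrative and are not taken from either school’s actual counts.

```python
def post_index(ap_tests: int, ib_tests: int, aice_tests: int, graduates: int) -> float:
    """Post-style index: all AP, IB, and AICE tests count toward the numerator."""
    return (ap_tests + ib_tests + aice_tests) / graduates

# Illustrative figures only: identical AP participation and graduating classes,
# but the IB World School adds its IB tests to the numerator.
non_ib_school = post_index(ap_tests=800, ib_tests=0,   aice_tests=0, graduates=400)  # 2.0
ib_school     = post_index(ap_tests=800, ib_tests=300, aice_tests=0, graduates=400)  # 2.75
```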
It could be argued, then, that the Post index has an inherent inequity favoring schools that offer a myriad of test options. Coupled with the fact that the index favors schools that graduate the fewest students for a given number of administered tests, it cannot be considered a useful metric for determining America’s best high schools.
We will look at the ranking system in greater detail in a future column.