From a recent article on Slate about how U.S. News ranks universities:

U.S. News changed the scores last year because a new team of editors and statisticians decided that the books had been cooked to ensure that Harvard, Yale, or Princeton (HYP) ended up on top.

Unacceptable! We count on the objectivity of U.S. News's rankings to decide where to go to college!

So, last year, as U.S. News itself wrote, the magazine “brought [its] methodology into line with standard statistical procedure”. With these new rankings, Caltech shot up and HYP was displaced for the first time ever.

Yay for science!

But the credibility of rankings like these depends on two semiconflicting rules. First, the system must be complicated enough to seem scientific. And second, the results must match, more or less, people’s nonscientific prejudices. Last year’s rankings failed the second test.

No, no! Wait! That’s not how science works!

So, Robert Morse was given back his job as director of data research, and the formula was juiced to put HYP back on top.

Wait! What? You pick the result you want and tweak the numbers until you get it?

The fact that the formulas had to be rearranged to get HYP back on top doesn’t mean that those three aren’t the best schools in the country, whatever that means.

No, but it means the whole thing is bullshit, and you'd be better off going with your prejudices, since the ranking is tweaked to validate your own preconceptions of quality.
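
To make the sleight of hand concrete, here's a toy sketch (hypothetical schools, made-up numbers, and nothing like U.S. News's actual formula, just an illustration of the trick): hold the data fixed, decide who should win, and turn the weight knobs until they do.

```python
# A toy sketch with made-up schools and scores (nothing here is U.S. News's
# actual data or methodology): a weighted-sum ranking where the inputs stay
# fixed and only the weights move.

schools = {
    "School A": {"selectivity": 98, "resources": 80},
    "School B": {"selectivity": 85, "resources": 99},
}

def rank(weights):
    """Score each school as a weighted sum of its metrics, best first."""
    scores = {
        name: sum(weights[metric] * value for metric, value in metrics.items())
        for name, metrics in schools.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

# One set of weights puts School B on top...
print(rank({"selectivity": 0.4, "resources": 0.6}))  # ['School B', 'School A']

# ...and "juicing" the weights flips the order with the exact same data.
print(rank({"selectivity": 0.8, "resources": 0.2}))  # ['School A', 'School B']
```

Same data, opposite ranking; the only thing that changed was the knob-turning.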

I like how the article ends, though:

If the test of a mathematical formula’s validity is how closely the results it produces accord with pre-existing prejudices, then the formula adds nothing to the validity of the prejudice. It’s just for show. And if you fiddle constantly with the formula to produce the result you want, it’s not even good for that.

And that is why I tend to avoid looking at rankings or any kind of measure I don’t fully understand.