Our conversation, to wit:
Roomie: How/why is it that our alma maters aren’t even in the top 50?
Me: Because they suck?
Roomie: Oh, ok…I’d agree.
The whole notion of ranking colleges is a curious subject, primarily because colleges and universities look at it with equal parts disdain (How can you quantify the unquantifiable?) and shameless exploitation (We’re better than Stanford! Take that, Cardinal!). Those institutions that do well on top-whatever lists whorishly brandish the distinction on marketing materials, while those that don’t grumble in the corner, derisively shaking their fists at the instrument’s validity.
This is all very interesting to me, given my recent ramblings on my potential dissertation topic, i.e., how we measure institutional effectiveness. In general, I think there’s been a substantial push to move the industry standard from looking at input measures (your SAT scores, the number of faculty with PhDs, the total square feet of library space—you know, stuff that says something about the quality of instruction you receive </sarcasm>) to outcome measures (grad rates, results of normed assessments, and even alumni and employer satisfaction). But clearly, there’s still a long way to go.
It’s not that higher education has its heart in the wrong place, although there is many an obsolete academic out there—think rotting wood—who devotedly holds on to “the way things work around here.” But I think there’s significant inadequacy in the current infrastructure for measuring institutional performance in ways that really matter and that can make a difference.
Since colleges and universities grew organically rather than as a planned enterprise, much of what we’re experiencing today is a direct result of the way higher education in this country developed. Case in point: the balance between public service and academic freedom. There’s a significant push and pull between 1) the need for higher education institutions to meet local, state, and national needs through academic programs more directly related to workforce development; and 2) the freedom for institutions to teach whatever the heck they want. This was the debate sparked by the Yale Report of 1828, and it still goes on today.
My opinion: publications such as the U.S. News & World Report rankings have found public success because they’ve filled a void in the industry, namely the public’s need for more information about the performance of colleges and universities. And so while the rankings themselves may be painfully flawed, they’re the only ones we have.
There are a few, though, that have started to look more deeply at institutional performance, such as The Washington Monthly’s annual college guide, which bases rankings on outcomes such as engagement and service. And, of course, the Princeton Review’s college rankings include the most critical categories, including best party school (West Virginia University) and best school that educates dodgeball targets (Eugene Lang College of New School University).
Clearly, the market for such publications should be proof enough that the higher education industry isn’t doing enough to head off other entities at the pass when it comes to measuring and reporting institutional performance. As higher education administrators, we should ask ourselves—as we’ve asked many times before—what we are doing (and not just announcing, or planning to announce, or thinking about announcing) to make sure that we lead these initiatives and are not caught with our pants down when the tide comes crashing in.