In today's Sunday New York Times, Frank Bruni has excellent advice for how one should use (and not use) college rankings.
There are two schools of thought on rankings. Those of us who have looked into how they are created usually side with Mr. Bruni: 1) the premise is flawed to begin with, and 2) no one ranking is going to be definitive for all prospective students.
I've written previously on how a multirank system that could be customized by the user would be a better system than the static rankings we have now. There is a wide variety of rankings, and each one decides for the user what is important to look for in a college. Having had two children go through the college search process, I can attest that there is no one-size-fits-all. As Mr. Bruni suggests, if one has to use rankings, then use several to illustrate different sides of a school. For instance, if community service is of interest, use the Washington Monthly rankings to get an idea of some of the schools that foster such interests.
But, as I am also on record pointing out, the rankings aren't really used that much by prospective students; they are used more often by college presidents, trustees, and alumni. The people creating the rankings know this, and some rankings are not geared toward prospective students at all.
So refer to rankings if you need to. But also do your own homework. Rankings are like the old Cliffs Notes that summarized the plots of books for students who didn't have the time or inclination to read the whole book. You can get the gist of what happened, but you miss the nuances in the prose that distinguish a good book from a great book. And don't we all want that great book?