False precision in MBA rankings

I have finally submitted our school data to the various MBA ranking surveys in which we will participate this year. Over the last three years, I have noticed that with each iteration the publications that run these rankings ask for more and more data points.

Take, for example, the Businessweek MBA survey. It is run every two years and asks more than 100 questions, and that is just the MBA survey; I had to fill in another 30-odd data points for their general school survey.

Much of the data requested has little relevance to a school such as Cambridge. For example, there is an entire matrix to be filled out on which US states our American students come from, and in which states our alumni work after they graduate. This makes little sense for a non-US school like us, especially when we might have only 20 American students in each class. There are also several matrix tables of salary data that I had to fill in. Businessweek asked for mean, median, low and high salaries for our graduates by job function, industry sector and geography: a total of 200 cells for the salary data alone, and that accounts for just five of the 100-plus questions.

The statistically minded among you will be asking what algorithm could combine such disparate data into a score that can rank different schools. The simple answer is that none of this information matters in the final analysis, because Businessweek does not use any of the data the school provides: 45% of the Businessweek ranking is determined by student survey responses, 45% by a survey of recruiters, and 10% by the number of publications, books and articles published by the school’s faculty.
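
The arithmetic behind that weighting is worth making explicit. Here is a minimal sketch in Python of how the published weights would combine into a final score, assuming each component has already been normalised to a 0–100 scale (Businessweek does not publish its normalisation, so the component scores below are hypothetical):

```python
# Weights from Businessweek's published methodology: 45% student survey,
# 45% recruiter survey, 10% faculty "intellectual capital" (publications).
WEIGHTS = {"students": 0.45, "recruiters": 0.45, "intellectual_capital": 0.10}

def bw_score(components):
    """Combine normalised component scores (0-100) into a final score.

    Note that none of the ~100 data points a school submits appears
    anywhere in this formula.
    """
    return sum(weight * components[name] for name, weight in WEIGHTS.items())

# Hypothetical school: strong with students and faculty, weaker with recruiters.
print(bw_score({"students": 90.0, "recruiters": 70.0, "intellectual_capital": 85.0}))
# -> 80.5
```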

If that is the case, why collect all this data? Businessweek needs a return on its investment in the rankings, and slicing the school data in different ways provides material for a stream of articles that attract visitors to its website. In the run-up to the publication of its rankings, Businessweek has already posted several articles using the school data to rank top schools by salary, and even to rank schools by the percentage of students employed by specific companies. The data also appears on each school’s profile page in Businessweek’s MBA section.

I have no problem with this approach, and in the current age of online transparency one could ask whether so much information harms anyone beyond a few overworked business school staff. My worry is that this flood of data produces an illusion of certainty in an inherently uncertain and fuzzy world.

When all this school data is presented alongside rankings that do not use it, people may get the impression that the rankings are based on hard data, and never examine the underlying data and methodology of the ranking survey. Everyone would benefit from knowing the response rates from recruiters and students, to see how representative the results are. And as anyone who has run a survey knows, it would also be useful to see how the survey is constructed, since survey design can play a very large part in determining responses.

But there is a deeper issue at play: whether it makes sense at all to rank one school above another along a single dimension. John Kay, in his book Obliquity, draws a parallel with Peter Weir’s film Dead Poets Society, in which Robin Williams plays a teacher expected to teach poetry from a textbook by Dr J. Evans Pritchard. Pritchard’s theory is that the greatness of a poem is the product of its importance and its perfection. In one of the film’s most memorable scenes, Williams incites his class to tear the pages out of Pritchard’s book, and gets them to truly appreciate the beauty of literature.

Kay makes the good point that it is not unreasonable to ask what the characteristics of a great poem are, but that one asks the wrong question in asking whether Keats’s ‘On First Looking into Chapman’s Homer’ is a greater poem than Whitman’s ‘O Captain! My Captain!’. According to Kay, “the goals of education are known but the quest for clear prioritisation of the incommensurable components of education misconceived.”

While I am not advocating that everyone take inspiration from Robin Williams and tear up the pages of the next MBA ranking, I would encourage people to ask what the objectives of an MBA education are (and it is still an education). That is a more enriching discussion than why one school’s alumni on the East Coast have a higher median salary than another school’s alumni in the Pacific Northwest.

15 Responses to False precision in MBA rankings

  1. Conrad, good points and I don’t think that anyone could argue with them – even the ranking authors admit there are problems! Candidates and employers realize that it doesn’t matter much whether you come in 4th or 7th. However, the rankings are what they are, and the FT and BW rankings are highly visible, so there can be no excuse for not scoring in the top group. I might be missing something, but JBS is not even ranked in the BW survey.

    • Yes, it was a surprise to us that we were not ranked by BW. BW told me that our response rate from employers was too low to be considered, which is a shame given that our employment stats – 97% accepting an offer within 3 months of graduation, and about 99% of job seekers receiving an offer within that time frame – are among the strongest in the survey.

      I noticed that many participating schools were not ranked because of low response rates. This is one point that I wish BW would change: unlike other surveys, BW does not publish the required response rate before the surveys open. In fact, I understand that for the student survey the required response rate is the median of all schools’ response rates, which means the cutoff is a moving target while the survey is open.
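
      To make concrete why a median-based cutoff is a moving target, here is a small hypothetical sketch (the schools and rates are invented, and the rule is only my understanding of BW’s approach):

      ```python
      import statistics

      # Hypothetical response rates (fraction of each school's students who
      # replied), snapshotted at two points while the survey is open.
      week_1 = {"A": 0.80, "B": 0.55, "C": 0.50, "D": 0.35, "E": 0.30}
      week_2 = {"A": 0.85, "B": 0.70, "C": 0.50, "D": 0.60, "E": 0.35}

      def qualifying(rates):
          """Return the cutoff (the median of all schools' current rates)
          and the schools at or above it."""
          cutoff = statistics.median(rates.values())
          return cutoff, sorted(s for s, r in rates.items() if r >= cutoff)

      print(qualifying(week_1))  # (0.5, ['A', 'B', 'C'])
      print(qualifying(week_2))  # (0.6, ['A', 'B', 'D']) - C held steady yet fell below
      ```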

      It is instructive that the response rate from recruiters was far lower than the overall response rate from students (I am taking this from BW’s methodology page). In fact, I would be interested to know what constitutes a response from a recruiter. Recruiters are supposed to list 20 schools that they like or at which they run on- or off-campus recruitment activities, but does a recruiter who lists only 2 or 3 count? And if so, how many data points are we talking about, spread over how many schools?

      Ultimately, BW has to secure higher response rates, because 90% of its ranking relies on survey data. However, I wish it would use some of the copious data it asks schools for. For example, it surely makes sense to use placement data in the rankings rather than just recruiter surveys.

      I feel most disappointed for our students who participated in the survey and who now cannot compare their results. Given the amount of work that schools and students put into these rankings, BW might at least consider fixing the required response rate before the surveys open.

      • I’m an applicant to JBS and was really disappointed not to see Judge anywhere in the rankings. Given that the BW rankings are highly visible, as Jon points out, and that a school must meet a threshold response rate from employers, it seems the admissions team should be on employers’ backs day and night to make sure the surveys are completed.

  2. Fair enough — in that case, perhaps consider linking MBA Admissions’ and the Director’s compensation to MBA rankings?

    • I am guessing that you will disagree with me, but it seems the world has been down this road before: linking compensation to ever more complex formulae that people don’t understand but believe model real-world outcomes. For the many changes in methodology, read VaR.

  3. I don’t think Cambridge would be making so much noise if it were ranked 1st, or higher than Oxford. Everyone knows that rankings are not perfect (what is perfect?), but they do more to inform potential students and employers than word of mouth. This is Businessweek we are talking about, not Tommy’s Blog MBA Rankings.

    Just admit that the Judge MBA is a young and currently average MBA, be humble and improve it. Making excuses over this and that just makes Cambridge look small and petty. You should congratulate the schools that did well instead.

    • I’d have to agree. A part of moving forward is admitting that there is an issue, and not trying to reframe the problem. If Cambridge wants to compete, it has to play the game, just like any other business school out there – and btw you don’t get bonus points if you say that you are somehow different and that rankings should not apply to you.

        • In reply to Kate and Jon, I thought I’d draw on what the Director of the Business School said in the end-of-term plenary session with MBAs last Friday. Christoph explained our approach to rankings, which is that we will not let the rankings drive our behaviour, because if we did we would lose our soul, pulled in inconsistent directions. For example, we made a strategic decision to reduce the size of our PhD cohort so that we could increase the ratio of faculty to PhD students. That hurt our FT ranking, because the doctoral rank accounts for 10% of the entire ranking. We will also keep our class size constant to maintain the closely knit character of the class, even though it means the MBA alumni network may not be as large as other schools’ – a factor in the Economist rankings.

        We want to do better in the rankings, and we will do that by focusing on the parts of the rankings that align with our values. One big component is employment outcomes: we revamped our career services two years ago and are beginning to see results – 97% of MBA2010 employed within three months is a good showing. We are part of a research university, and Christoph is focused on improving our research output, which counts in the FT ranking as well. With time we will, in the words of one of our students, “be consistently on the first page”, and people can judge us on our distinctive set of values.

        Conrad
        @CambridgeMBA on twitter

        • Good luck with the “strategic decision”.

          Just remember not to deviate too far from reality, which is that students are investing a huge amount of money mainly to secure a better career.

          Rankings inform employers and guide their decisions on where to direct recruitment resources. It is no mystery why top employers recruit more from higher-ranked schools. Ignoring rankings (or offering “justifications”) will only lead employers to keep placing Cambridge MBA students behind other schools’ students in their considerations.

          • The reality is that employers hire students based on their individual strengths. 97% of our students who graduated last year and were seeking employment accepted jobs within 3 months, and the schedule for this term’s campus recruitment included top employers such as McKinsey, BP and Lloyds.

            Conrad
            @CambridgeMBA

        • “be consistently on the first page” is a good way to put it, but I am not getting the sense from your posts that this goal is taken seriously or well defined.

          It also sounds slightly inconsistent to say that you are reducing the size of the Ph.D. program while at the same time wanting to increase research output. Research output is directly correlated with the number of graduate students, and I am sure that Cambridge has no shortage of quality applicants for research positions.

          • We decided to reduce the PhD intake so that we could increase the ratio of faculty to PhDs. This means a higher level of supervision from faculty and ultimately a higher quality of research output.

  4. Interesting post, and this is a late comment, but I will still share my opinion. As a market researcher, I know that any consumer survey will contain overstatements or understatements depending on the geographic composition of the sample. So if rankings are based on alumni survey responses alone, the reliability of the findings surely goes for a toss. But even if we overlook this factor, there is a more fundamental problem: the rankings put 90% of the weight on lagging indicators. Recruiter perceptions will be based on the previous two graduating classes, not the current class, and the same presumably goes for alumni (since the graduating class probably doesn’t answer the surveys). Adding a current indicator, and if possible even a leading indicator, would strengthen the reliability of the rankings.
