Michael Mauboussin on Luck

A commenter recommended him, and I watched this video.

Some interesting points:

1. As skill in an activity becomes more refined, differences in skill at the high levels tend to narrow. As a result, luck starts to play a relatively larger role. In some sense, it took more luck for George Brett to hit .380 than it did for Ted Williams to hit .400. The standard deviation among major league hitters was higher in Williams’ day, so that when batting averages are scaled by standard deviation, Brett’s best year has a higher Z score than Williams’. The AP statistics text that I use has a problem in which the student calculates this. Mauboussin also refers to this example.
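
For concreteness, here is a minimal sketch of that Z-score comparison in Python. It uses the rounded batting averages quoted above and assumed, illustrative league means and standard deviations rather than the official historical figures (the textbook exercise supplies the real ones):

```python
# A minimal sketch of the Z-score comparison described above.
# The league means and standard deviations are assumed, illustrative values,
# not official historical figures; the batting averages are the rounded ones
# quoted in the post.

def z_score(avg, league_mean, league_sd):
    """How many standard deviations a batting average sits above the league mean."""
    return (avg - league_mean) / league_sd

seasons = {
    # name: (batting average, assumed league mean, assumed league SD)
    "Williams": (0.400, 0.267, 0.037),
    "Brett":    (0.380, 0.261, 0.029),
}

for name, (avg, mean, sd) in seasons.items():
    print(f"{name}: z = {z_score(avg, mean, sd):.2f}")
```

With these assumed inputs, Brett's season sits farther above his league's mean even though his raw average is lower, which is the point of the exercise.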

2. He says that luck is difficult to define, but a key element is that it is reasonable to say that the outcome could have been different. He opens the talk by telling a story of how he interviewed for his first job out of college and noticed that the highest-level executive with whom he interviewed had a trash can with a Washington Redskins logo. Because Mauboussin praised the trash can, that executive eventually overrode all the other interviewers and gave him the job. I have a similar story about getting into Swarthmore College, which I have told here before. A local alumnus interviewed me, and I guessed that he was the father of a guy I had seen wrestle for the state championship. I talked about that match, and when I came to Swarthmore the Dean of Admissions said that the wrestling coach, Gomer Davies, was looking forward to having me on the team. Having no aptitude for wrestling myself, I never went near Coach Davies.


3. He says that extraordinary success comes from neither pure skill nor pure luck. You need a high skill level to compete. But beyond that, it takes luck to have extraordinary success. Bill Gates had the skill to do well in technology, but his extraordinary success required luck. If you do not know the story of how Microsoft was awarded the role of providing the operating system for the IBM PC in the early 1980s, then you can get an abbreviated version by listening to the Q&A at the end of the talk.

4. He says that when you are the less skilled player, you need to try to complicate the game. Try a trick play, for example. I would add that you should look for short games. The longer the game goes on, the more likely it is that the other player’s skill advantage will win out. Think of a board game (or a business competition) that consists of many moves. Suppose that on each move, the player with more skill has a 51 percent chance of doing better than the less-skilled player, and conversely the less-skilled player has a 49 percent chance of making a better move. In a short game, the lesser-skilled player might win. But if the game goes on for hundreds of moves, the chances of the lesser-skilled player dwindle. I first wrote about this twenty years ago. I still love that essay.
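
A quick simulation makes that arithmetic vivid. The 51/49 per-move edge comes from the paragraph above; the game lengths, the majority-of-moves winning rule, and the even tie-split are illustrative assumptions:

```python
# Estimate how often the less-skilled player wins games of various lengths,
# where each move independently goes to the favorite with probability `edge`
# and the game is won by whoever takes more moves (ties split evenly).
import numpy as np

rng = np.random.default_rng(0)

def underdog_win_rate(moves, edge=0.51, trials=100_000):
    favorite = rng.binomial(moves, edge, size=trials)  # favorite's moves won per game
    underdog = moves - favorite
    wins = (underdog > favorite).mean()
    ties = (underdog == favorite).mean()
    return wins + 0.5 * ties

for n in (1, 9, 101, 1001, 10_001):
    print(f"{n:6d} moves: underdog wins ~{underdog_win_rate(n):.1%}")
```

In a one-move game the underdog is nearly a coin flip away from winning, but as the number of moves grows the small per-move edge compounds and the underdog's chances dwindle.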

5. He says that our brain has an “interpreter” that is determined to tell causal stories, even if it has to make them up. There are some famous split-brain experiments that demonstrate this. The interpreter prefers explanations that make something appear inevitable, rather than lucky.

I would illustrate this with the financial crisis of 2008. Although economists did not anticipate the crisis beforehand, they explain it now as if it were inevitable. I just finished Days of Slaughter, Susan Gates’ insider’s account of the fall of Freddie Mac, and it reminds me of the role played by bad luck, in particular the unfortunate choice made by Freddie Mac’s Board to install as CEO in 2003 one Richard Syron, who combined an absence of experience in mortgage lending with an arrogant unconcern for that lack of experience. Another CEO might have charted a different course for Freddie Mac, and for the entire housing market.

Another illustration is the election of Donald Trump. It was very much a random event, but everyone’s interpreter strains to see it otherwise.

6. During the Q&A, he considers the issue of whether popular opinion is right or wrong. When does the wisdom of crowds become the madness of crowds? He says that a necessary condition for wisdom of crowds is diversity of opinion. The crowd is most at risk of going wrong when there is what he calls a “diversity breakdown,” and everyone is thinking alike.

12 thoughts on “Michael Mauboussin on Luck”

  1. Does that Swarthmore story mean what you think it means? (Or what I think you think it means?) You seem to think that your discussion of wrestling with a wrestling fan increased your chances of getting into Swarthmore. I highly doubt that.

    First, alumni interviews play no meaningful role in elite college admissions (and didn’t 30 years ago either). Admissions officers are smart and they know that alumni have no ability to discern talent among the 17-year-olds they talk to.

    Second, athletics does (and did) matter to admissions (and coaches) but, again, these people are smart. They don’t give someone a preference just because a random alumni interviewer mentioned that someone is a wrestler. They give a preference based on good information that someone can/will wrestle in college.

    The most likely scenario is that the interviewer wrote a note for your file that mentioned wrestling. The admissions dean read that note and then said what he said to you, either because the note was wrong or he misread/forgot it.

    Swarthmore did not admit you because it thought you were a wrestler. Of course, your larger point about luck is correct.

    • While I was at Swarthmore, a professor did an empirical study of the admissions process, and the interviews were shown to be very important. Moreover, applicants with very high SAT scores tended to be selected against in the admissions process. I had a low GPA relative to top schools, and I was rejected by every other prestigious school to which I applied, so I am pretty confident that it really was the interview that got me in.

      • I am no expert on the Swarthmore admissions process 40 years ago, but this still seems very suspect to me. Please bring some of your usual (excellent!) skepticism of empirical work to the claims from this study.

        First, how was this even done in the 1970s?! I am unaware of any college admissions office that had computerized its work flow so early. I would bet a lot of money that Swarthmore did not have a handy database of all applicants, their SAT scores, GPAs and interview ratings. So, doing this work would have required entering all this information by hand into a computer mainframe . . .

        Second, shouldn’t we always be most suspicious of studies that yield politically appealing results? I am sure that the Swarthmore Administration loved hearing that alumni interviews are so important, and I am sure that they loved telling the alumni that!

        Third, even assuming that we got all this data in a computer (correctly!) in 1970 and ran some regressions, just how was this study conducted? For example, if interviewers knew the scores/grades of the applicants — either because the College told them or because they asked the applicants themselves — then it is really hard/impossible to tease out the importance of interview ratings.

        Again, I am not disputing your broader point that, when interviewing, you want to talk about things the interviewer likes as much as possible. You were smart to discuss wrestling! But your claim seems to be that you got in because Swarthmore thought you would be a wrestler — not simply because the interviewer rated you highly because you/he bonded over a wrestling discussion. I just think that Swarthmore was smarter than that . . .

        • I was the research assistant on the study. So of course there is no reason for skepticism (grin). The professor who did the study did get the result he was after, which might be a reason to be skeptical. His suspicion was that the admissions office was not going for the best students. The regression showed that, controlling for other things, students with the very highest SAT scores (over 1400 in those days) had *less* chance of being admitted than students who got in the 1250-1350 range, and that this was due to the way interviewers seemed to downgrade the high-scoring students. He used the results to argue that the admissions office was putting too much weight on interviews.

          • Wow! That is really interesting! You ought to write a post about what you remember of the experience. This must have been one of the first (statistical) studies of college admissions.

            But I still have doubts. Assume that you have all the data and it was entered correctly. First, were alumni ratings actually helpful in forecasting admissions once grades/scores were taken into account? For example, did the regression fit improve — and my mind is still reeling to think that you were fitting a logistic regression on a mainframe in 1972 — when alumni ratings were added to a regression which already included scores/grades? My wager is that the fit did not improve. And, unless you asked that question, it is hard to know how much alumni ratings mattered.

            If, instead, you just had one model that included scores/grades/ratings, it is awfully hard to know what matters. For example, if the interviewers knew the grades/scores — and I bet some/many did — then any correlation between ratings/admissions might be due to interviewers rating more highly students with better grades/scores, and it would be perfectly reasonable if they did!

            Anyway, I bet you had just one model with all variables included, and maybe with some categorical variables like, say, an indicator for high scores or maybe some interactions between, say, scores and ratings.

            Surely you can see how hard it is to make, from that, a claim that interviewers downgraded high scorers and that this effect caused high scorers to be admitted at a lower rate than they otherwise would have been?

            Moreover, it is not clear (or was it at that time?) what Swarthmore was asking interviewers to do in their rating. Was their job just to rate students on likely success at Swarthmore, in which case such ratings, if competently done, would be highly correlated with scores/grades? Or was their job to provide ratings that “adjusted” for those variables? That is, were they supposed to indicate students who were, in truth, better than their scores (causing them to give high ratings to some poorer applicants with real potential) and students who were worse than their scores (causing them to give low ratings to some better applicants who were lower quality than their scores/grades might make you think)?

          • I have no idea what interviewers were told to do. But many of these interviewers, including the one who interviewed me, were alumni for whom this was not a full-time job. I doubt if they were given anything close to the clarity of instruction that you have in mind. In those days, college visits were not such a thing. I did my interview in the St. Louis area, where I lived.

            Also, in those days, the college admissions process was less systematic and more informal than it is today. As an aside, I am convinced that when it came to selecting my class, the admissions people were told to try to get a less radical student body. The college had just undergone the trauma of having its President die of a heart attack during a demonstration by students carrying guns. In any case, my entering class included a tremendous number of classical musicians, who were very straight-and-narrow relative to the demographics of the time.

            We were running a logit regression (and I think we just used a transformation of the variables to get a linear regression program to do it). The alumni interview had a separate effect apart from scores/grades, and it was shockingly strong. Again, my only caveat is that I think that the professor suspected this and wanted to show it, so I don’t know to what extent we did specification searching to get the result. But I believe it was real.
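
            For readers who want to see the shape of the comparison being debated here, the sketch below uses entirely synthetic data and hypothetical variable names (it is not the Swarthmore data or the original specification): fit a logit of admission on scores and grades, add an interview rating, and check whether the fit improves.

```python
# A hedged sketch of the specification question discussed in this thread.
# All data are synthetic and the variable names are hypothetical; this is an
# illustration, not a reconstruction of the Swarthmore study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2_000

sat = rng.normal(1200, 120, n)     # synthetic SAT scores
gpa = rng.normal(3.3, 0.4, n)      # synthetic GPAs
rating = rng.normal(0.0, 1.0, n)   # synthetic interview ratings (standardized)

# Assumed data-generating process: admission depends on all three variables.
index = 0.004 * (sat - 1200) + 1.0 * (gpa - 3.3) + 0.8 * rating - 0.5
admit = (rng.random(n) < 1 / (1 + np.exp(-index))).astype(float)

X_base = sm.add_constant(np.column_stack([sat, gpa]))
X_full = sm.add_constant(np.column_stack([sat, gpa, rating]))

base = sm.Logit(admit, X_base).fit(disp=False)
full = sm.Logit(admit, X_full).fit(disp=False)

# Likelihood-ratio test: does adding the interview rating improve the fit?
lr_stat = 2 * (full.llf - base.llf)
print(f"Log-likelihood without rating: {base.llf:.1f}")
print(f"Log-likelihood with rating:    {full.llf:.1f}")
print(f"LR statistic (1 df):           {lr_stat:.1f}")
```

            A comparison like this speaks only to whether the rating adds explanatory power beyond scores and grades; it cannot by itself resolve the identification problem raised above, namely whether interviewers saw the scores before rating.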

  2. If you believe that events are caused, then attributing something to luck is simply a shorthand way of saying that it is beyond our ability to predict because of the complexity of the causal chain. Such events are no less inevitable than any other events. We say that the outcome of a coin flip is “random,” but of course if we knew with precision all of the physical forces at work on the coin and had the time to analyze them, the outcome would be completely predictable.

  3. Randomness certainly plays a part in human affairs. But purposefulness, intelligence, and physical ability do, too. The interactions among those three attributes are probably beyond understanding. I therefore reject any analysis which purports to show that a particular result of human activity is random.

  4. Over the last, hmm, maybe five years, it has become a stylized fact among sports fans in the know that the underdog should always adopt a high-variance strategy. In football that includes playing at a slow tempo that limits the number of drives (effectively shortening the game even if it doesn’t shorten it nominally).

    This is one of the reasons Peyton Manning was the Terminator. His offense was dead simple. This allowed him to control his tempo and keep his variance very low, a winning meta-strategy for someone who will always be the best player on the field.

    https://web.archive.org/web/20120526014541/http://smartfootball.com/offense/peyton-manning-and-tom-moores-indianapolis-colts-offense

  5. “He says that when you are the less skilled player, you need to try to complicate the game.”

    True no doubt, but one might add that “complicating the game” is a skill all its own. The comparative advantage of that skill is much on display in public choice problems of one stripe or another.

  6. Hi Arnold,

    Nothing much to add here, but just that point #4 is stressed by bridge players all the time, which I know you enjoyed reading about 🙂

  7. What makes Trump’s election victory a random event?

    I presume this claim is simply based on the fact that the win over Hillary was so narrow. But is Trump’s election victory any more random than other close call victories?

    Also, Trump’s victory in the Republican primaries was a landslide. It wasn’t a narrow win at all. Was that a very non-random event?
