Are you perfectly rational? Neither am I.
Underlying most theoretical models in management science and economics is the assumption that people have a flawless understanding of their environment and unlimited capacity to reason in any given moment: perfect rationality. As extreme as that sounds, it has been an incredibly important foundation. Why? Because to develop an understanding of how humans behave in aggregate in a market, system, or organization, we need a quantifiable definition of how any single individual behaves. In other words, the assumption of perfect rationality has been a necessary precondition for research that has gone on to provide critical answers and win Nobel Prizes.
At the same time, we all instinctively know that perfect rationality is, well, fiction.
In recent years, psychologists and behavioral economists have generated a growing list of examples of deviations from perfect rationality. Evidence is now abundant that individuals have limited cognitive resources, can misperceive their environments, and make predictably poor decisions in certain cases.
But, still, there is a disconnect between theory-building and empirical evidence. While it is popular to scold economists and management scientists for turning a blind eye to behavioral research, I have found that the blame lies as much, if not more, with the behavioral researchers providing the evidence. Their psychological findings are important and interesting, yet they have not provided constructive ways to analyze human behavior in aggregate or in complex settings. Behavioral scientists have been good at documenting behaviors but poor at providing unifying explanations, particularly in a way that facilitates communication across disciplines. I feel comfortable delivering this rebuke because I have been one such behavioral scientist.
In a new paper I co-wrote with Jordan Tong, an operations and information management professor at the University of Wisconsin, forthcoming in Management Science, we take a step toward remedying this problem. Our collaboration combines my expertise in the psychology of decision-making with his knowledge of operations to develop a “behavioral model” of how individuals assess risk as they make decisions. Our model is more psychologically plausible than perfect rationality, but it still translates well into math, making it usable for theorists.
Here is how our model works. In most past work, scholars use a mathematical tool called a random variable to represent the uncertainty faced by the decision maker and assume that the individual perfectly knows the nature of that uncertainty.
In our model, rather than knowing the exact random variable they face, individuals bring to mind a set of possible outcomes that might occur. Based on that set of possible outcomes, they assess what they think will happen on average and how confident they are in that belief. Notice two important features: (1) people think of a small (or at least finite) sample of possible outcomes, and (2) people naively believe that the characteristics of their small sample are representative of the underlying uncertainty. Neither mistake may seem terribly egregious but, interestingly, together they generate a lot of deviations from perfect rationality.
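To make the mechanism concrete, here is a minimal simulation sketch of the idea in Python. It is not the formal model from our paper; the normal distribution, the true mean and spread, and the five-outcome mental sample are all illustrative assumptions. A decision maker draws a handful of possible outcomes and naively treats the sample's statistics as the truth.

```python
import random
import statistics

# Illustrative assumptions: the uncertainty is a normal distribution
# with true mean 100 and true standard deviation 20, and the decision
# maker brings to mind only five possible outcomes.
TRUE_MEAN, TRUE_SD, SAMPLE_SIZE = 100, 20, 5

def naive_belief():
    """Draw a small mental sample and naively treat its statistics
    as the true characteristics of the uncertainty."""
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(SAMPLE_SIZE)]
    believed_mean = statistics.mean(sample)
    believed_sd = statistics.pstdev(sample)  # naive: no small-sample correction
    return believed_mean, believed_sd

# Average the believed spread across many simulated decision makers.
beliefs = [naive_belief() for _ in range(100_000)]
avg_believed_sd = statistics.mean(sd for _, sd in beliefs)
print(f"true standard deviation:    {TRUE_SD}")
print(f"avg believed standard dev:  {avg_believed_sd:.1f}")  # reliably below 20
```

On average, the believed spread comes out well below the true spread: small naive samples systematically understate variability, which is one route to the overconfidence discussed next.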
Why do projects consistently take longer than expected? Why are we so overconfident in the accuracy of our own predictions but well-calibrated when assessing other people’s predictions? Why are most people naive about the likelihood of rare “black swan” events while a few are overly paranoid about them? Why are we so often disappointed by the strategies we select? Somewhat surprisingly, all of these questions can be tied back to the simple psychological premise of our model: naiveté with a small sample. There is elegance and power in being able to account for a lot of behavior with one explanation.
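The disappointment question lends itself to a quick hedged illustration. In the invented setup below, ten strategies all have the same true value, each is judged by the mean of a small mental sample, and the decision maker picks the one that looks best; the chosen strategy's estimate then sits systematically above what it delivers.

```python
import random
import statistics

# Invented setup for illustration: ten strategies with identical true
# value 50, noisy outcomes (std dev 20), each judged from a five-outcome
# mental sample.
TRUE_VALUE, NOISE_SD, N_STRATEGIES, SAMPLE_SIZE = 50, 20, 10, 5

def estimate(true_value):
    """Judge a strategy by the mean of a small mental sample of outcomes."""
    return statistics.mean(
        random.gauss(true_value, NOISE_SD) for _ in range(SAMPLE_SIZE)
    )

gaps = []
for _ in range(20_000):
    estimates = [estimate(TRUE_VALUE) for _ in range(N_STRATEGIES)]
    best_estimate = max(estimates)  # choose the strategy that *looks* best
    gaps.append(best_estimate - TRUE_VALUE)

# Positive on average: selecting on noisy estimates builds in disappointment.
print(f"avg expected-minus-realized gap: {statistics.mean(gaps):.1f}")
```

Because the selection step favors options whose samples happened to look good, the winner's estimate is inflated even though no single estimate is biased on its own.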
Our hope is that this work can serve as a bridge between psychology and management science. We intentionally designed it as a building-block model: anywhere a scholar would typically assume perfect knowledge of risk, they can plug in our behavioral model and see whether it leads to a different outcome. Sometimes accounting for more realistic human judgment will matter little; sometimes it will be absolutely vital for understanding how best to run organizations, pursue business opportunities, and manage risk.
Even the little mistake of over-generalizing from small samples is enough to cause a wide range of misjudgments. A wise manager recognizes when she has only a small sample from which to judge and adjusts her confidence downward accordingly. It is always best to make decisions with well-calibrated confidence; only then should one execute with conviction.
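What might that downward adjustment look like? One rough sketch, under the same illustrative assumptions as before: a naive 95% interval built from five outcomes (the mean plus or minus 1.96 naive standard deviations) covers the next outcome far less often than 95% of the time, while widening the interval for the small sample restores calibration. The specific correction below is a standard prediction interval, not a prescription from our paper.

```python
import math
import random
import statistics

TRUE_MEAN, TRUE_SD, N, TRIALS = 100, 20, 5, 50_000
T_CRIT = 2.776  # t critical value for 95% confidence with n - 1 = 4 df

naive_hits = calibrated_hits = 0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, TRUE_SD) for _ in range(N)]
    m = statistics.mean(sample)
    naive_sd = statistics.pstdev(sample)     # divides by n: too narrow
    corrected_sd = statistics.stdev(sample)  # divides by n - 1
    next_outcome = random.gauss(TRUE_MEAN, TRUE_SD)
    naive_hits += abs(next_outcome - m) <= 1.96 * naive_sd
    # Widen for both the small-sample spread estimate and the
    # estimation error in the mean itself.
    half_width = T_CRIT * corrected_sd * math.sqrt(1 + 1 / N)
    calibrated_hits += abs(next_outcome - m) <= half_width

print(f"naive '95%' interval coverage:  {naive_hits / TRIALS:.1%}")  # well below 95%
print(f"calibrated interval coverage:   {calibrated_hits / TRIALS:.1%}")  # close to 95%
```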