Tuck professor Andrew King warns of the natural human tendency to see patterns in chaos.
In 2004, a 10-year-old grilled cheese sandwich, reverently preserved by a Florida woman, sold for $28,000 on eBay. It fetched that price because some viewers claimed to see an image of the Virgin Mary in the toasted bread. Although a number of similar sightings, in everything from cinnamon buns to dental x-rays to nebulae, have made their way through the mass media in recent years, the phenomenon is as old as humanity itself. Perceiving familiar images in randomness, or pareidolia, is a form of what’s known as apophenia: the inherent human urge to find patterns and connections in chaos.
While the average person may find it harmless or even charming to give play to these fancies—identifying familiar shapes in fluffy clouds, pointing out the man in the moon—for scientists searching large masses of data, this innately human trait can ruin studies and give inconclusive or even false results. According to a new paper, “Scientific Apophenia in Strategic Management Research,” by Tuck professor Andrew King and Brent Goldfarb of the University of Maryland’s Robert H. Smith School of Business, however, too few scientists are actively safeguarding against this inadvertent bias.
“The funny thing is that when we search Big Data—and Big Data is what everybody is talking about these days—we look for results and we don’t often know what those results are,” King says. “I’ve been growing more concerned that a lot of what we’re finding is not real.”
One root cause of his concern is computing power. Over the last 20 years, dramatically increased processing power, particularly within statistical packages, has allowed researchers to run more analyses and to try many different models and specifications. Yet researchers tend to forget that, at the conventional 0.05 significance threshold, each analysis carries roughly a one-in-20 chance of turning up a result that appears significant but is actually just noise.
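To see how quickly those one-in-20 chances compound, consider a rough back-of-the-envelope sketch in Python (an illustration of the statistics, not a calculation from the paper): if each analysis amounts to an independent test of pure noise at the 0.05 threshold, the odds of at least one spurious “significant” result climb fast.

```python
# Rough illustration (not from the paper): the chance of at least one false
# positive when running many independent tests at alpha = 0.05 on pure noise.
alpha = 0.05

for n_analyses in (1, 5, 10, 20, 50):
    # Probability that at least one of n_analyses independent tests is
    # "significant" purely by chance.
    p_spurious = 1 - (1 - alpha) ** n_analyses
    print(f"{n_analyses:3d} analyses -> {p_spurious:.0%} chance of a spurious finding")
```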
In the paper, the authors describe several simulations showing just how easy it is for even a seasoned scientist, especially one intent on fishing until something “significant” turns up, to find what looks like a meaningful pattern in pure randomness. By analyzing a random sample of articles from five top strategy research journals, the authors estimate that anywhere between 12 and 40 percent of the reported findings are likely to be false or vastly overstated.
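The flavor of such fishing can be sketched in a few lines of Python (a hypothetical toy, not the authors’ code): generate an outcome that is pure noise, test it against dozens of equally random candidate predictors, and a few will clear p < 0.05 anyway.

```python
# A toy sketch (not the authors' simulation) of "fishing" in pure noise:
# correlate a random outcome with many unrelated random predictors, one at a
# time, and report whichever ones happen to clear p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_firms, n_candidates = 200, 40

outcome = rng.normal(size=n_firms)                    # pure-noise "performance"
predictors = rng.normal(size=(n_candidates, n_firms)) # unrelated noise predictors

significant = []
for i, x in enumerate(predictors):
    r, p = stats.pearsonr(x, outcome)  # correlation test between two noise series
    if p < 0.05:
        significant.append((i, r, p))

print(f"'Significant' predictors found by fishing: {len(significant)} of {n_candidates}")
for i, r, p in significant:
    print(f"  predictor {i}: r = {r:+.2f}, p = {p:.3f}  (spurious)")
```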
The problem isn’t maliciousness or duplicity, King says. Instead, it’s that researchers are under such enormous pressure to find scientific patterns and publish their results that best practices are frequently, if unintentionally, neglected. One major reason, according to the authors, is the academic community’s strong resistance to replicating findings, preferring instead to accept what is already “known.”
This tendency, coupled with the massive global increase in journal submissions, has created a situation in which a broad range of disparate reviewers, many of whom have never done research without massive computing power, end up more tolerant of less-than-stringent science.
“We propose some safeguards so researchers can be sure that they’re not fooling themselves,” King says. “One way is to split the data: you go look at half and then test the other half. Another way is results-blind reviewing, where you propose the study, it’s accepted, then you come up with the results. We’re not casting aspersions on the reputation of any field—we just hope that our work causes people to think a little bit about this problem and behave a little differently.”
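A minimal sketch of the split-data safeguard King describes (again a toy example, not code from the paper): fish as much as you like on one half of the sample, then test the single surviving hypothesis on the untouched half.

```python
# Toy sketch of split-sample validation: explore freely on one half of the
# data, then confirm the chosen hypothesis on the held-out half.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 400
outcome = rng.normal(size=n)
predictors = rng.normal(size=(30, n))   # 30 candidate predictors (pure noise here)

# Split once, up front, before any analysis.
explore, confirm = slice(0, n // 2), slice(n // 2, n)

# Exploration half: pick whichever predictor looks best (the "fishing" step).
p_values = [stats.pearsonr(x[explore], outcome[explore])[1] for x in predictors]
best = int(np.argmin(p_values))
print(f"Exploration half: predictor {best} looks best, p = {p_values[best]:.3f}")

# Confirmation half: test that single, pre-chosen predictor on untouched data.
r, p = stats.pearsonr(predictors[best][confirm], outcome[confirm])
print(f"Confirmation half: p = {p:.3f}  (a noise finding usually fails here)")
```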
Most surprising of all, the authors find, is that apophenia affects multiple scientific fields. In psychology, for example, conventional wisdom among Ph.D. students has long held that if an experiment with 30 people fails to yield a result, it should be run again with 60 people, and so on, until a “significant” result is found.
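That practice, known as optional stopping, can itself be simulated. In the rough sketch below (an illustration, not the authors’ analysis), repeatedly adding participants and re-testing a true null pushes the false-positive rate well above the nominal 5 percent.

```python
# Rough sketch (not from the paper) of why "keep adding participants until the
# result is significant" inflates false positives, even when no effect exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_experiments = 2000
false_positives = 0

for _ in range(n_experiments):
    data = list(rng.normal(size=30))          # start with 30 participants, true effect = 0
    while len(data) <= 120:
        t, p = stats.ttest_1samp(data, 0.0)   # test against a true null hypothesis
        if p < 0.05:
            false_positives += 1              # stop early and declare "significance"
            break
        data.extend(rng.normal(size=30))      # otherwise recruit 30 more and re-test

print(f"False-positive rate with optional stopping: {false_positives / n_experiments:.1%}")
print("A fixed-sample test at the same threshold would give about 5%.")
```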
Regardless of the field or the role of the researcher, the authors conclude, the biggest obstacle to eliminating scientific apophenia will always be the same thing that makes scientific research possible in the first place: our intrinsic human nature.
“We believe that, whatever the risk, further evaluation and increased caution will benefit both strategy scholars and the field,” the authors write. “It is a human tendency to find order in the world. As scholars, we must be careful to ensure that the order we perceive is indeed real.”