The human mind is the end product of hundreds of thousands of years of relentless natural selection. You would expect such an exquisite piece of software to be capable of representing reality in an accurate and objective manner. Yet decades of research in cognitive science show that we fall prey to all sorts of cognitive biases and that we systematically distort the information we receive. Is this the best evolution can achieve? A moment’s thought reveals that the final goal of evolution is not to develop organisms with exceptionally accurate representations of the environment, but to design organisms that are good at surviving and reproducing. And survival is not necessarily about being rational, accurate, or precise. The real goal is to avoid mistakes with fatal consequences, even if the means to achieve this is to bias and distort our perception of reality.
This simple principle is the core idea of error management theory, one of the most interesting frameworks to address systematic biases in perception, judgement, and decision making. From this point of view, our cognitive system is calibrated to avoid making costly errors, even if that comes at the expense of making some trivial errors instead. For instance, a long tradition of research on the illusion of control shows that people tend to overestimate the impact of their own behaviour on significant events. An advocate of error management theory would suggest that falling into this error is perhaps not as costly as the opposite mistake: Failing to detect that one has control over some relevant event. Consequently, evolution has endowed us with a predisposition to overestimate control.
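To make the logic concrete, here is a minimal sketch of the decision arithmetic behind error management theory. The cost values are purely illustrative assumptions, not estimates from the literature:

```python
# Error-management logic in miniature: with asymmetric error costs, the
# decision threshold that minimises expected cost is systematically 'biased'.
# The costs below are illustrative assumptions, not empirical figures.

def optimal_threshold(cost_false_alarm: float, cost_miss: float) -> float:
    """Probability of a threat above which acting minimises expected cost.

    Acting is cheaper when  p * cost_miss > (1 - p) * cost_false_alarm,
    i.e. when  p > cost_false_alarm / (cost_false_alarm + cost_miss).
    """
    return cost_false_alarm / (cost_false_alarm + cost_miss)

# Suppose missing a real threat is 100 times costlier than a false alarm.
print(optimal_threshold(cost_false_alarm=1, cost_miss=100))  # ~0.0099
```

Under these assumptions, a cost-minimising agent should respond even when the threat is only about 1% likely. The resulting stream of false alarms is not a malfunction; it is the cheapest calibration available.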
In a way, science is a set of tools specifically designed to overcome this bias: Research methods and statistics were conceived to counteract our tendency to see patterns where there is only chance and to ignore alternative explanations of the events we observe. Perhaps these biases were useful for surviving in the savannah, but they are definitely not your friends when you want to discover how nature works. Unfortunately, these refined methods are unlikely to work perfectly if the key asymmetry that gave rise to the biases remains intact. Whenever the evidence is ambiguous, we will always be tempted to interpret it in the most favourable way, avoiding the errors that are costly to us.
Imagine that you are a young scientist trying to find a pattern in your freshly collected data set. You can think of two ways to analyse your data, both of them equally defensible. Following one route, you get a p-value lower than .05. In the alternative analysis your result is not significant. Maybe you have discovered something or maybe you have not. One of these interpretations will allow you to publish a paper in a prestigious journal and keep your position in academia. If you decide to believe the opposite, you have just wasted several months of data collection in exchange for nothing, and you may eventually have trouble making ends meet. Neither of these beliefs is certain. But, understandably, if you have to decide which error to make, you will prefer it to be a Type I error.
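A quick simulation shows how this asymmetry plays out in the aggregate. The two ‘defensible analyses’ below are hypothetical stand-ins (a plain t-test versus the same test after discarding the most extreme observation at each end of each group); the point is only that reporting whichever analysis reaches significance pushes the false positive rate above the nominal 5%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n = 10_000, 30
false_positives = 0

for _ in range(n_sims):
    # Null world: both groups are drawn from the same distribution.
    a = rng.normal(size=n)
    b = rng.normal(size=n)

    # Analysis 1: plain two-sample t-test on all observations.
    p1 = stats.ttest_ind(a, b).pvalue

    # Analysis 2: an equally defensible variant that discards the smallest
    # and largest observation in each group (a hypothetical analytic choice).
    p2 = stats.ttest_ind(np.sort(a)[1:-1], np.sort(b)[1:-1]).pvalue

    # The researcher reports whichever analysis 'worked'.
    false_positives += min(p1, p2) < 0.05

print(false_positives / n_sims)  # above the nominal .05
```

Neither analysis is fraudulent on its own; the bias enters only through the choice of which result to believe.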
More than ten years ago, John Ioannidis (2005) concluded that most published research findings must be false. Given the asymmetric costs of Type I and Type II errors for researchers, such a pervasive bias in the scientific literature is exactly what you would expect to find according to error management theory. The current debate about the reproducibility of psychological science and other disciplines has focused extensively on developing new methods and fostering a new culture of open science. However, even if these new practices are badly needed, they are unlikely to put an end to biases in the scientific literature. There will always be contradictory findings, experiments with inconsistent results, and analyses leading to opposite conclusions. Biases will persist as long as researchers find some interpretations of the data more useful than others, even if they are inaccurate or outright wrong. Error management theory predicts that scientific ‘illusions’ are the natural consequence of the reward structure imposed by scientific institutions. Without a radical change in the distribution of incentives, all the other measures can only have a limited impact on the quality of scientific research.
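Ioannidis’s arithmetic is easy to reproduce. Under illustrative assumptions (a sketch, not his exact scenarios), the credibility of a significant finding follows directly from the prior odds of tested hypotheses, statistical power, and the significance threshold:

```python
def positive_predictive_value(prior: float, power: float,
                              alpha: float = 0.05) -> float:
    """Share of significant findings that reflect true effects:
    PPV = power * prior / (power * prior + alpha * (1 - prior)).
    """
    true_positives = power * prior
    false_positives = alpha * (1 - prior)
    return true_positives / (true_positives + false_positives)

# Illustrative assumptions: 1 in 10 tested hypotheses is true, and studies
# run at 50% power (an optimistic but plausible figure).
print(positive_predictive_value(prior=0.10, power=0.50))  # ~0.53
```

Even before any analytic flexibility, barely half of the significant findings in this scenario are true, and the selective reporting sketched above only lowers that share. It is the incentive structure, not any single method, that sets the floor.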