Why won't scientific evidence change the minds of Loch Ness monster true believers?
You may have noticed a curious recent announcement: An international research team plans to use state-of-the-art DNA testing to establish once and for all whether the Loch Ness monster exists.
Regardless of the results, it's unlikely the test will change the mind of anyone who firmly believes in Nessie's existence. As a philosopher working on the notion of evidence and knowledge, I still consider the scientists' efforts to be valuable. Moreover, this episode can illustrate something important about how people think more generally about evidence and science.
Discounting discomfiting evidence
Genomicist Neil Gemmell, who will lead the international research team in Scotland, says he looks forward to "(demonstrating) the scientific process." The team plans to collect and identify free-floating DNA from creatures living in the waters of Loch Ness. But whatever the eDNA sampling finds, Gemmell is well aware the testing results will most likely not convince everyone.
A long-standing theory in social psychology helps explain why. According to cognitive dissonance theory, first developed by Leon Festinger in the 1950s, people seek to avoid the internal discomfort that arises when their beliefs, attitudes or behavior come into conflict with each other or with new information. In other words, it doesn't feel good to do something you don't value or that contradicts your deeply held convictions. To deal with this kind of discomfort, people sometimes attempt to rationalize their beliefs and behavior.
In a classic study, Festinger and colleagues observed a small doomsday cult in Chicago whose members were waiting for a UFO to save them from the impending destruction of Earth. When the prophecy didn't come true, instead of rejecting their original belief, members of the sect came to believe that the God of Earth had changed plans and no longer wanted to destroy the planet.
Cult members so closely identified with the idea that a UFO was coming to rescue them that they couldn't just let the idea go when it was proven wrong. Rather than give up on the original belief, they preferred to lessen the cognitive dissonance they were experiencing internally.
Loch Ness monster true believers may be just like the doomsday believers. Giving up their favorite theory could be too challenging. And yet, they'll be sensitive to any evidence they hear about that contradicts their conviction, which creates a feeling of cognitive discomfort. To overcome the dissonance, it's human nature to try to explain away the scientific evidence. So rather than accepting that researchers' inability to find Nessie DNA in Loch Ness means the monster doesn't exist, believers may rationalize that the scientists didn't sample from the right area, or didn't know how to identify this unknown DNA, for instance.
Cognitive dissonance may also provide an explanation for other science-related conspiracy theories, such as flat Earth beliefs, climate change denial and so on. It may help account for reckless descriptions of reliable media sources as "fake news." If one's deeply held convictions don't fit well with what media say, it's easier to deal with any inner discomfort by discrediting the source of the new information rather than revising one's own convictions.
Philosophy of knowledge
If psychology can explain why Loch Ness monster fans believe what they do, philosophy can explain what's wrong with such beliefs.
The error here comes from an implicit assumption that to prove a claim, one has to rule out all of the conceivable alternatives – not just all of the plausible ones. Of course, scientists haven't ruled out – and cannot deductively rule out – every conceivable possibility here. If proving something requires showing that there is no conceivable alternative to your theory, then you can't really prove much. Maybe the Loch Ness monster is an alien whose biology doesn't include DNA.
So the problem is not that believers in the existence of the Loch Ness monster or climate change deniers are sloppy thinkers. Rather, they are overly demanding thinkers, at least with respect to certain selected claims. They adopt standards that are too high for what counts as evidence and for what is needed to prove a claim.
Philosophers have long known that standards for knowledge and rational belief that are too high lead to skepticism. Famously, the 17th-century French philosopher René Descartes suggested that only "clear and distinct perceptions" could serve as the required markers of knowledge. But if only some special inner feeling can guarantee knowledge, and we can be wrong about that feeling – say, due to brain damage – then what can be known?
This line of thought has been taken to its extreme in contemporary philosophy by Peter Unger. He asserted that knowledge requires certainty; since we are not really certain of much, if anything at all, we don't know much, if anything at all.
One promising way to resist the skeptic is simply not to take up the challenge of proving that the doubted thing exists. A better approach starts with basic knowledge: assume we know some things and can draw further consequences from them.
A knowledge-first approach that attempts to do exactly this has recently gained popularity in epistemology, the philosophical theory of knowledge. British philosopher Timothy Williamson and others, including me, have proposed that evidence, rationality, belief, assertion, cognitive aspects of action and so on can be explained in terms of knowledge.
This idea contrasts with an approach popular in the 20th century: that knowledge is justified true belief. But counterexamples abound showing that one can have a justified true belief without knowledge.
Say you check your Swiss watch and it reads 11:40. On this basis, you believe that it is 11:40. What you haven't noticed, however, is that your typically super-reliable watch stopped exactly 12 hours ago. And by incredible chance, at the very moment you check your watch, it is in fact 11:40. In this case you have a true and justified, or rational, belief – but still, it doesn't seem that you know it is 11:40. It is only by pure luck that your belief happens to be true.
Our newer knowledge-first approach avoids defining knowledge altogether and instead posits knowledge as fundamental – its own basic entity – which allows it to undercut the skeptical argument. One need not feel certain, or have a sensation of clarity and distinctness, in order to know things. The skeptical argument doesn't get off the ground in the first place.
Knowledge and the skeptic
The eDNA analysis of Loch Ness may not be enough to change the minds of those who are strongly committed to the existence of the lake's monster. Psychology may help explain why. And lessons from philosophy suggest this kind of investigation may not even provide good arguments against conspiracy theorists and skeptics.
A different and, arguably, better argument against skepticism questions the skeptic's own state of knowledge and rationality. Do you really know that we know nothing? If not, then your skeptical thesis is itself unsupported, and there may be something we know after all. If yes, then that very piece of knowledge shows that something can be known – and, again, you are wrong in claiming that knowledge is not attainable.
A strategy of this kind would challenge the evidential and psychological bases for true believers' positive conviction in the existence of Nessie. That's quite different from attempting to respond with scientific evidence to each possible skeptical challenge.
But the resistance of a few true believers doesn't detract from the value of this kind of scientific research. First and foremost, the project is expected to produce much more precise and fine-grained knowledge of the biodiversity of Loch Ness than we currently have. Science is at its best when it avoids engaging with the skeptic directly and simply provides new knowledge and evidence. It can succeed without ruling out every conceivable possibility and without convincing everyone.
This article was originally published on The Conversation.