One of the principles of critical thinking is to avoid excessive generalization. Applied uncritically, however, this principle often leads to ambiguous judgments about reality, which obscure what we can know and sometimes even falsify it. This applies in particular to scientific assertions. The tendency to avoid unambiguous judgments has several causes: fear of making mistakes, fear of responsibility, fear of confrontation, and the confusion of tolerance with indifference. Forming unambiguous judgments in situations that require them upholds the standards of scientific integrity and affords the epistemological agency we need to grow, change, create, and proceed.
We are all going to die, and in the meantime we have to pay taxes. Beyond this hackneyed assertion, it is difficult to formulate any other universal declaration containing quantifiers such as “everybody,” “always,” “every,” or “nobody.” Despite popular opinion, exceptions do not prove rules; they categorically compromise them. It therefore seems better to avoid unambiguous, categorical declarations, especially when speaking of our complex social reality. This attitude has become an element of critical thinking that most educated people follow, to the point that intelligent people tend to regard the formulation of unambiguous “universal truths” as something of a social faux pas.
However, when we begin to apply any principle unthinkingly, even one grounded in critical thinking, we fall into a trap from which it is difficult to extract ourselves. Consider, then, the roles that unambiguity and rigour play in science. In the cases of cold fusion and the supposedly faster-than-light neutrinos, only a ruthless approach to the evidence – not a cautious or ambiguous one – made it possible to stop equivocating. The same approach allows us to formulate theorems that put a space shuttle into orbit, guide us to our destination with GPS data, or underpin the calculations for an imposing suspension bridge across a bay. Unfortunately, many academics prefer to remain mired in the murky waters of ambiguity, professing an almost religious belief in its superiority and reacting anxiously in the very situations that demand categorical unambiguity if science is to move forward and be useful.
Most of the social sciences find themselves in this trap as well. Whenever a scandal surfaces in the social sciences, we hear calls for “measured judgment,” “more discussion,” “refraining from unambiguous assessment,” and the like. Such was the case in 2018, when the French documentarian Thibault Le Texier published his book Histoire d’un mensonge [The History of a Lie] (1), in which he showed that the famous Stanford prison experiment conducted by Philip Zimbardo had been faked. I have yet to discover even one unambiguous assessment of this fact by a scientist. Most consider that during his experiment Zimbardo was “very often” faithful to the principles of honesty, that statements about unacceptable manipulation invalidating the research results are “too radical,” and that the problem is “more complex.” I have, however, read two declarations by renowned academics stating unambiguously that “they believe Zimbardo to be an honest scientist.”
The main reason we fall into the trap of believing ambiguity superior to unambiguity is a set of fears of which we are barely aware and which are fuelled by social pressure. One of them is the fairly rational fear of making mistakes. It is particularly justified when we rely on the principles of probability in formulating our conclusions, as we most often do in the social sciences. Yet it sometimes leads us to absurd positions, as the renowned statistician Jacob Cohen showed when he entitled his article “The Earth Is Round (p < .05)” (2). Although a quarter of a century has passed since the publication of that famous paper, the majority of academics paying homage to ambiguity still fail to perceive the ridiculousness it exposed.
The fear of making mistakes is closely related to the fear of responsibility. In the Middle Ages, the builder of a bridge traditionally stood beneath it while it was tested; employed by a king or a prince, he literally put his life on the line. This custom has long since disappeared – the designer is frequently absent even at a bridge’s opening – because bridge construction now rests on unambiguous parameters. Representatives of science who pay homage to ambiguity instinctively avoid responsibility when forming their judgments. Many years ago, a businessman asked me for help in selecting workers for arduous seasonal farm work. It was important for him to verify the workers’ motivation and to obtain some kind of guarantee that those chosen would not abandon the job. I consulted several psychologists about the problem, including a professor and several doctors. Most were willing to take on this fairly lucrative work, but not one of them was prepared to guarantee the outcome. Similarly, no financial advisor ever lost his own money by giving bad advice to his clients. Choosing ambiguity is like swimming in muddy water: it is far harder to hold to account somebody who offers vague, conditional prognoses than somebody who expresses an unambiguous opinion.
In formulating assertive judgments, we undermine the judgments of others. We throw doubt on fuzzy judgments about the matter we are adjudicating, and so run the risk of confrontation, criticism, and revenge. Fear of confrontation is the next fear that holds us back from unambiguity. Robert Plomin – probably the greatest living behavioural geneticist – held back the publication of his book Blueprint: How DNA Makes Us Who We Are (3) for thirty years, because the conclusions of his genetic research were too unambiguous: they strongly contradicted what the majority of psychologists deeply believed, namely that environment is the major influence on our behaviour. As Plomin himself says, he would have been crucified by other psychologists had he published his conclusions when he first formulated them. The example shows that the fear of unambiguity is fuelled by our social environment, and fear of rejection is one of the strongest mechanisms in this group. It is an evolutionary mechanism with a powerful influence on how we formulate our judgments, since for most of the time Homo sapiens has been evolving, ostracism meant certain death. If we say of scientific deception that “it’s something of a complex problem which needs deeper analysis,” we certainly won’t offend many people, and we will be regarded as moderate people not given to expressing rash and radical opinions.
It is precisely in fearing rejection and pursuing acceptance that we have confused tolerance with indifference. Tolerance is not interfering with the way our neighbour prays to his god. Indifference is not interfering when our neighbour throws acid in his wife’s face in the name of his god, or when someone, causing unnecessary suffering, refuses the blood transfusion that would save his child’s life. In the social sciences, the border between tolerance and indifference was blurred long ago. Tolerance can be expressed by not interfering with the research topics of scientists who have devoted their whole lives to them – the so-called Rorschach inkblot test, for instance – even though the method offers no hope of ever becoming a useful diagnostic tool. We can pass such studies by with indifference, although it is worth asking whether we are so rich a society that we can indulge in funding them. Tolerance becomes cold, cruel indifference, however, when those scientists induce their students to employ worthless diagnostic tools that are then used against others in the courtroom, helping to decide their fate. Almost a quarter of all forensic psychologists do not hesitate to use the 100-year-old Rorschach test when examining children in custody cases, and in this way decide their future lives (4).
Indifference towards researchers who engage in pseudoscience, conduct poor-quality research, avoid sharing original data, shun replication, falsify research results, and finally teach their students nonsense is widespread in the social sciences, and it is accepted because it is cloaked in the guise of tolerance.
Advances in science are possible mainly because we can falsify hypotheses: in the heat of research we either reject a theory as false or accept it as true. An ambiguous hypothesis cannot be falsified, and those that appear are condemned to oblivion or to the purgatory of eternally inconclusive research. These banal truths cease to be obvious when academics leave their laboratories and enter public discussion. In some strange way they acquire the conviction that somebody can be “very often honest,” or that “the falsified research still tells us something about the nature of man.” They seem to believe that a hundred years of research which has not yielded one concrete result means not that the research methods should be condemned to oblivion but that they “require discussion.” In the opinion of many, a reluctance to share original research data is “a complex problem requiring further analysis,” and an aversion to replicating research is “a multi-aspect phenomenon whose cause is impossible to define unambiguously.”
Science was born of man’s conflict with ignorance – of the need to replace the admission “we don’t know, we don’t understand” with the certainty that “we know and understand.” Over the centuries, we have painstakingly reduced a vast area of ignorance by fitting together small pieces of the jigsaw of knowledge. But when we treat a lack of knowledge as ambiguous knowledge, we blur a fundamental border in science, to the detriment of what we know for sure and unambiguously. Unambiguous knowledge is what the mathematician and philosopher William Clifford, writing in the nineteenth century, called a “safe truth” (5) – a notion few remember today.
It is thanks to safe truth that today, during a solar eclipse, we do not fall to our knees in fear and terror and beg the gods for mercy, but instead observe the phenomenon with curiosity and explain its mechanism to our children. It is thanks to unambiguous knowledge that we have converted our eternal dream of taking to the skies into reality, making the plane the world’s safest means of transport. And it is thanks to the safe truth of DNA testing that in the USA alone 365 people unjustly convicted on the basis of ambiguous premises and ambiguous knowledge have been cleared of all charges and released from prison, 21 of them escaping the death penalty.
Science does not paint a black-and-white picture of the world; it paints a world divided into the part we can already understand and the part we have yet to meet. Where the situation requires it, practicing unambiguous stances and approaches has real value for human agency and our endeavours. Labelling a lack of knowledge as ambiguity merely creates an illusion of knowledge.
1. Le Texier, T., “Debunking the Stanford Prison Experiment”, American Psychologist, 2019.
2. Cohen, J., “The Earth Is Round (p < .05)”, American Psychologist, 1994.
3. Plomin, R., Blueprint: How DNA Makes Us Who We Are, 2018.
4. Butcher, J. N. (ed.), Oxford Handbook of Personality Assessment, 2009.
5. Clifford, W., “The Ethics of Belief”, in Lectures and Essays, 1886.