By Laurie Weston
“Trust the science!”
It is a rallying cry heard frequently these days, intended to signal truth, intelligence, and righteousness. Whether in climate change discussions or pandemic responses, energy posturing or political systems, it seems the mere mention of “science” implies incontestable and definitive proof.
That is ironic, considering science itself is based on observation, theory, models, and inferences, all of which can contain significant uncertainty. Imagine a simple scientific experiment to determine the boiling point of water. Our theory is that water boils at 100 °C. A thousand different scientists can do a thousand experiments and “prove” that result every time. However, if one scientist happens to live on a mountain, his beaker of water will boil at only 92 °C. He might conclude, therefore, that elevation makes a difference. Our theory must adapt to accommodate this new observation and contributing factor.
Another scientist at the same mountain elevation heats her water on a different day and finds a boiling temperature of 89 °C. Assuming the data is correct, why the discrepancy? The new, improved theory that includes an elevation factor is obviously missing something. Could weather be important? It turns out that both elevation and weather are only indirect factors; the critical factor is pressure (elevation and weather are both related to pressure).
Salt changes the boiling point, and, more generally, the composition of dissolved gases and minerals makes a difference, complicating, but also refining, our simple example even more.
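The pressure-boiling-point relationship the story turns on can be made concrete. Below is a minimal sketch, assuming textbook Antoine-equation coefficients for water and the International Standard Atmosphere barometric formula; real weather, salt, and dissolved minerals will all shift the answer, which is exactly the point.

```python
import math

# Antoine equation coefficients for water (valid roughly 1-100 °C),
# from standard tables; pressure in mmHg, temperature in °C.
A, B, C = 8.07131, 1730.63, 233.426

def boiling_point_c(pressure_pa):
    """Temperature at which water's vapour pressure equals ambient pressure,
    i.e. the boiling point, from the Antoine equation solved for T."""
    p_mmhg = pressure_pa / 133.322
    return B / (A - math.log10(p_mmhg)) - C

def pressure_at_elevation_pa(h_m):
    """Standard-atmosphere pressure at elevation h_m metres (troposphere).
    Ignores weather, which in reality moves this by a few percent."""
    return 101325.0 * (1 - 2.25577e-5 * h_m) ** 5.25588

print(boiling_point_c(pressure_at_elevation_pa(0)))     # ~100.0 °C at sea level
print(boiling_point_c(pressure_at_elevation_pa(2500)))  # ~91.7 °C on a mountain
```

At sea level the model recovers the familiar 100 °C; at 2,500 m of elevation it predicts roughly 92 °C, matching the mountain scientist's observation above.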
That is just the temperature of boiling water. Imagine how complication grows exponentially when we are dealing with something complex, such as the human body, an ecosystem, climate, viruses, economics, or the universe. Karl Popper1, a 20th-century Austrian philosopher, defined the scientific method as a process by which any theory – simple or complicated – can be improved, increasing its integrity and utility as a result. The key to this process hinges on finding ways to break or disprove your theory, as opposed to seeking ways to confirm your theory. This seems counterintuitive and is often quite difficult for human scientists who are predisposed to protect and nurture their work. In fact, according to Popper, a theory can never be proven; only disproven.
Despite natural human resistance, Popper’s method is considered the most robust and direct way to improve scientific theories. Continual incremental improvement increases trust in predictions derived from those theories. The more detailed and specific the theory, the more useful it is, but the more uncertain it becomes. If your local meteorologist told you that it was going to rain some time in the next 200 days, he would most likely be right, but the information would not be very useful. If he told you that it would start raining at precisely 7 p.m. today, the uncertainty in that prediction increases significantly, but you can confidently prepare by bringing your umbrella when you head out to the theatre.
Is being right yet useless more important than taking a risk to be useful? The tradeoff presents a tricky balancing act that may have serious consequences.
It is harder to predict earthquakes than rain, yet the Italian government held six scientists accountable for failing to warn the public about the possibility of a deadly earthquake in L’Aquila in 2009. They were sentenced to six years in prison for manslaughter2 in 2012. Their conviction was overturned3 in 2014 by judges who cited the inherent unpredictability of natural events, and the natural caution of scientists reluctant to be wrong.
Scientists must be allowed – even encouraged – to fail in order to take the risks necessary to improve understanding. Playing it safe means never acting on uncertain information, which means never acting.
A large part of science involves deriving models to explain observed behaviour. Throughout history, there have been heated academic debates on how to explain natural phenomena with models of physical behaviour that fit observations. It is essential for scientific theories and predictive models to defend their integrity in an adversarial peer setting. This “stress testing” forces them to improve faster and to contribute securely to expanding human knowledge. It does not matter how many scientists agree on a theory – if just one discovers verifiable evidence that does not fit, the theory is wrong. Often the model can be improved to accommodate a new fact, but sometimes it needs to be abandoned altogether in favour of a more elegant solution. Science is harsh that way and is most certainly not a democracy.
What exactly constitutes a fact? A fact is something that has no uncertainty. It cannot be proven wrong. Therefore, by the arguments above, much of what we describe as “facts” are, strictly speaking, theories. In practice, in order to function and progress, we generally agree to accept some robust theories as being fact. However, there is a grey area – there is always a possibility that an observation will challenge that practical acceptance, so open-mindedness is perpetually necessary.
Much of science is a bootstrapped combination of theories and disciplines; one conclusion underpinning and justifying the development of another. The ultimate conclusion could therefore rest on a precarious tower of assumptions, bias, and interpretation, toppling spectacularly at the introduction of a new contradicting data point or theoretical modification at the base.
History is full of examples in which scientific consensus was proven wrong due to exactly this perilous predicament. In ancient times, virtually everyone believed in Aristotle’s view that Earth was the centre of the universe and that the entire unchangeable cosmos circled the Earth. This belief also conveniently suited religious conviction at the time – that man held his rightful place at the centre of the universe, adding divine justification to observation.
There were uncomfortable red flags, though. Reluctant to abandon a model that had such high-powered patronage, astronomers had to employ some very creative mathematics to make the planetary data fit the pre-ordained theory. Ptolemy successfully managed this in the 2nd century with his circles-within-circles system, incorporating the necessary motion reversals4 (Figure 1). In the early 1500s, Copernicus logically deduced that all the observations elegantly fit a compelling model with the sun at the centre of the solar system (heliocentric) and orbited by the planets. Sixty years later, only a handful of astronomers in the world agreed with Copernicus.
Figure 1: Ptolemaic model of the solar system. Earth is at the centre.
Tycho Brahe’s (1546-1601) precise measurements of the planets and the path of a comet in 1577, and Galileo’s (1564-1642) new invention, the telescope, later confirmed the heliocentric model. This conclusion raised alarm in the Catholic Church, and Galileo was brought before the Inquisition and convicted of heresy for daring to suggest that the Earth did not hold a special place in the cosmos. He spent the rest of his life under house arrest, only exonerated by Pope John Paul II more than three centuries later in 19795.
In the 1950s, science warned the public that high dietary cholesterol caused serious heart disease. For the first half of the century, however, even though there had been indications by a few researchers that cholesterol was linked to cardiac health, atherosclerosis (buildup of fats, cholesterol, and other substances on artery walls) was widely considered a natural part of aging. As the anti-cholesterol movement grew in the scientific community, so, too, did the anti-butter-and-egg movement. In summary, cholesterol is not bad, then it is bad, now it is both good and bad.
Initial reluctance by the scientific community to accept the link between cholesterol and heart disease eventually gave way to an all-out attack on cholesterol. We now know that there are good and bad types of cholesterol, and that eggs contain almost entirely the good kind. So, we can go back to guilt-free enjoyment of the perfect soft-boiled egg . . . once we establish the temperature of boiling water in our neighbourhood.
In just the first half of 2020, Wikipedia listed 319 significant scientific events6, which included breakthroughs small and large in fields including genetics, astronomy, climate, artificial intelligence, archaeology, paleontology, biology, quantum physics, and time travel. Some of these advancements may be disproven quickly, but what this does prove beyond a doubt is that science is a constant humbling process of observation, investigation, and incremental adjustment. Science is never settled. No matter how difficult, scientists must keep an open mind. Imagine mountain people being called “boiling point deniers” by smug coastal dwellers.
The moral of this story: don’t trust the science; trust the scientific method. Our theories and models are never incontestable and will only improve when they are continually challenged.
Science is only one of the multiple perspectives on every issue. Despite certain uncertainty, politicians and other decision-makers must weigh evidence, judge arguments, anticipate and mitigate future consequences (intended or otherwise), and, hopefully, make decisions that have a net positive effect for society as a whole. In an emotional world in which outrage is a click away and everyone is putting themselves and their special interest groups first, this is no easy task.
Science can and should inform politicians and help guide policy. Politics, however, and celebrity for that matter, should have no place in science (even though they always have and, let’s be realistic, always will have). With incomplete scientific information, it is very difficult to make decisions that are not affected directly by emotion and bias. So, let’s inform ourselves, try to be rational and respectful in our discussions, cut our politicians a bit of slack, and look at the BIG picture.
- Popper, Karl (2002) [1959]. The Logic of Scientific Discovery. Routledge. ISBN 9780415278447.