  • Jo Clubb

Do you change your mind when the facts change?

During the coronavirus pandemic, the scientific community came under attack from some quarters for changing its recommendations. In March 2020, Dr Anthony Fauci was quoted as saying “there’s no reason to be walking around with a mask”. Well, we all know how that changed. To some, this may be concerning and lead them to ask: “how can we believe scientists if they keep changing their minds?!”

This, however, is the very essence of science – to update beliefs based on new evidence. It is fundamental to the scientific process. When advice changes, it is a sign that scientists are responding to new knowledge. As Shannon Palus of Slate explains, such flips are “a sign that we know more, and that experts and institutions are responding to new information. In a fast-moving situation, advice that has been updated can in fact be more trustworthy.”

"When advice changes, it is a sign that scientists are responding to new knowledge."

The well-known cartoon (below) of what people think success looks like vs what it really looks like could just as easily describe science. Our knowledge forms and shifts based on an ever-changing compilation of findings.

This stands in contrast to those who are entrenched in their beliefs and choose to ignore or deny new evidence. People or stories that appear to display a one-way, linear development of knowledge are the ones we should meet with scepticism.


"Science is the quantification of doubt" – Henry Gee

Doubt seems to be construed as a negative trait in today’s society. You must be certain of your point in a job interview, during a presentation, in an article, on social media, and so on. There is no room for hesitation or scepticism.

Yet doubt is the bedrock of science and "a critical feature of wisdom". Without doubt, perhaps we would still believe that our planet is flat and the centre of the universe. New data are always emerging, and with them we must adjust our theories.

In 1854 there was a cholera outbreak in London’s West End. The leading explanation at the time was the miasma theory, which contended that the disease was caused by bad smells transmitted in the air. A leading health scientist, William Farr, had data that seemed to support the theory.

An alternative cause was postulated by Dr John Snow (not THAT Jon Snow). He doubted the miasma theory and collected data on every person infected, which highlighted a common denominator: the victims were sourcing their water from the nearby Broad Street pump. He postulated that a germ was transmitted via drinking water – an accepted theory today but unheard of then.

Despite the new evidence, he was met with scepticism. In fact, it took public officials years to believe him and make adjustments to the pump and drinking water. Dr Farr, appointed to the Scientific Committee for Scientific Enquiries in Relation to the Cholera Epidemic of 1854, concluded in a report:

"But, on the whole of evidence, it seems impossible to doubt that the influences, which determine in mass the geographical distribution of cholera in London, belong less to the water than to the air."

Of course, in this instance that doubt was very possible! While it can be concerning or overwhelming to consider new evidence that casts doubt on a long-held belief, it is integral to the scientific process. Eventually, in 1866 – 12 years after the outbreak and 8 years after the death of Dr Snow – Dr Farr acknowledged in writing that water, rather than miasmata, was the most important means of transmission during the cholera epidemic.


“A scientist is never certain... our statements are approximate statements with different degrees of certainty: that when a statement is made, the question is not whether it is true or false but rather how likely it is to be true or false.” – Richard Feynman

There is a perception that science is black and white, that it proves what is right and wrong. However, this could not be further from the truth. Uncertainty is inherent in nature. Hence, measurements in science are not reported as single values, but as ranges or with a ± added to acknowledge the error or variation associated with that value. A confidence interval attempts to quantify the (un)certainty, referring to "the probability that a population parameter will fall between a set of values for a certain proportion of times" (Hayes, Investopedia).
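To make this concrete, here is a minimal sketch of reporting a measurement as a range rather than a single value. The data are entirely hypothetical (imagined repeated 10 m sprint times), and the t critical value is hard-coded for this sample size:

```python
import math
import statistics

# Hypothetical repeated measurements of a 10 m sprint time (seconds)
times = [1.82, 1.79, 1.85, 1.80, 1.83, 1.78, 1.84, 1.81]

mean = statistics.mean(times)
sd = statistics.stdev(times)        # sample standard deviation
sem = sd / math.sqrt(len(times))    # standard error of the mean

# 95% CI using the t critical value for n-1 = 7 degrees of freedom
t_crit = 2.365
lower, upper = mean - t_crit * sem, mean + t_crit * sem

# Report a range, not a single number
print(f"{mean:.3f} s (95% CI {lower:.3f} to {upper:.3f} s)")
```

The interval, not the point estimate, is the honest summary: it says where the true average would plausibly lie given the scatter in the measurements.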

While far from perfect, the use of a P value is designed to acknowledge the possibility of error or chance, with P<0.05 representing less than a 1 in 20 chance of observing a result at least this extreme if there were truly no effect. The P itself stands for Probability (specifically Calculated Probability). Everything in science is (or at least, should be) based on probability. Even scientists need reminding of this though. In 2016, the American Statistical Association (ASA) wrote an article attempting to clarify P values, highlighting that a conclusion “does not immediately become ‘true’ on one side of the divide and ‘false’ on the other” (Grabowski, 2016).
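A P value is easiest to grasp by simulation. The sketch below uses a deliberately toy example (a fair coin as the "no effect" hypothesis, and a made-up observed result of 60 heads in 100 flips) to estimate how often pure chance produces something that extreme:

```python
import random

random.seed(42)  # fixed seed so the simulation is repeatable

observed_heads = 60   # hypothetical observed result
n_flips = 100
n_sims = 10_000

# Simulate the null hypothesis (a fair coin) many times and count how
# often chance alone gives a result at least as extreme as observed
extreme = sum(
    1 for _ in range(n_sims)
    if sum(random.random() < 0.5 for _ in range(n_flips)) >= observed_heads
)
p_value = extreme / n_sims
print(f"Simulated one-sided p-value: {p_value:.3f}")
```

The simulated value lands around 0.03: even with no real effect, roughly 1 in 30 experiments would look this "significant" by chance alone – which is exactly the uncertainty the P value is trying to express.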

The ASA go on to encourage alternative methods that emphasise estimation over testing, such as Bayesian methods and likelihood ratios. While this post will not explore the supporting and opposing arguments for various statistical approaches in detail, their guidance highlights the need to incorporate measures of certainty into our statistical analysis and interpretation.
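Bayesian updating is, in a sense, "changing your mind when the facts change" written as arithmetic. The sketch below is illustrative only – the baseline risk and the likelihood ratio of the screening test are assumed numbers, not real values:

```python
def update_belief(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability to a posterior via the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.10        # assumed 10% baseline injury risk
lr_positive = 3.0   # assumed positive likelihood ratio of a screening test

posterior = update_belief(prior, lr_positive)
print(f"Posterior risk after a positive test: {posterior:.2f}")
```

Note that even a test that triples the odds only moves an assumed 10% prior to a 25% posterior – the new evidence shifts the belief, it does not replace it.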

Let’s Talk Performance!

“When the facts change, I change my mind. What do you do, sir?” – John Maynard Keynes

So where does this doubt and uncertainty leave us? Well, not only is it acceptable to doubt that a single measure of training load or a testing protocol can predict injury, it is our scientific responsibility to do so. We should be doubtful of marketing claims, of emerging technologies, of sensationalised scientific headlines in the media, even of the findings presented in a journal article. Doubt is our strongest ally against the influence of promotion, pseudoscience, and FAKE NEWS in the twenty-first century. We should doubt our interventions and actively consider the potential harm they may do.

However, maintaining doubt alone is not enough; we must seek evidence to try to improve our certainty. We must read with a critical approach. We should be open to discussion and rethinking, such as recent debates on the appropriate use of the terms "worst-case scenario" and "training load". We can try to understand the validity and reliability of a measure and incorporate the calculation of the signal vs the noise, in an attempt to quantify certainty. Just this month, another paper from Franco Impellizzeri and colleagues highlighted how methodological approaches can affect results, in this case casting uncertainty on prior findings relating to the Nordic Hamstring Exercise (neatly discussed further here by Patrick Ward).
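One common way to put a number on the noise of a measure is the typical error from a test–retest design (the standard deviation of the difference scores divided by √2). The sketch below uses hypothetical countermovement-jump heights and a deliberately simple decision rule:

```python
import math
import statistics

# Hypothetical test-retest countermovement-jump heights (cm)
trial_1 = [38.2, 41.0, 35.5, 44.1, 39.8, 37.4]
trial_2 = [38.9, 40.1, 36.3, 43.2, 40.6, 36.8]

# Typical error = SD of the difference scores / sqrt(2)
diffs = [b - a for a, b in zip(trial_1, trial_2)]
typical_error = statistics.stdev(diffs) / math.sqrt(2)
print(f"Typical error: {typical_error:.2f} cm")

# A simple rule of thumb: only treat a change as signal if it
# exceeds the noise of the measure itself
observed_change = 1.5  # hypothetical change in an athlete's jump (cm)
signal_exceeds_noise = abs(observed_change) > typical_error
print(f"Change of {observed_change} cm exceeds noise: {signal_exceeds_noise}")
```

A change smaller than the typical error deserves doubt; a change comfortably beyond it is a signal worth acting on.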

Conversely, we cannot be crippled by our own uncertainty either. Too much doubt may leave us frozen, undecided on our conclusions and uncommitted to a course of action. Similarly, overwhelming key stakeholders with uncertainty will only add to their stress, which can hijack their own decision making. Having spent more than a decade in professional team sport environments, I understand the pressure to provide key stakeholders with a clear, ideally dichotomous, and potentially reductionist outlook.

What remains is the need to maintain doubt and uncertainty within our own thought and analysis process. We can use a critical mindset when consuming or analysing information. We can use statistical approaches to help quantify certainty. We can filter out the noise and present a concise and actionable interpretation to the decision makers we are attempting to support.

While the world yearns for certainty, it is our duty as scientists to maintain doubt, consider the (un)certainty, and update our beliefs on the basis of new evidence.

“In reality, the more we discover, the more we realise we don't know. Science is not so much about knowledge as doubt.” – Henry Gee, The Guardian