Two engineers are considering a design for an upgrade to a bridge to allow it to withstand stronger winds. They analyse the design independently using the same method and simultaneously estimate that it will enable the bridge to withstand wind speeds of up to 130mph (210km/h): adequate for the local area, where the wind never exceeds 100mph.
However, the more astute of the engineers considers the level of uncertainty in the findings and determines that the value of 130 is accurate only to ±40mph. This means that we can’t be certain that the bridge would withstand winds above 90mph (although it might manage up to 170mph). The astute engineer therefore advises further analysis, while their more naive colleague approves the design, unknowingly risking calamity.
I use a similar story to teach my students about the significance of uncertainty analysis, the scientific process for handling imprecise data. In truth, however, even science and engineering students don’t find the concept especially exciting. Unsurprisingly, then, the public, politicians and the press usually overlook uncertainty completely.
It doesn’t help that when findings are disseminated outside academia, margins of error are often omitted entirely for the sake of brevity and impact. The result is that laypeople might not realise that a paper is reporting only preliminary findings. For example, without scientific training, it might not be apparent that a new medication has a large range of possible efficacies or is useful only to a given demographic.
This state of affairs can’t be allowed to continue. It is true that just as politicians did not enter politics to study uncertainties, most scientists and engineers did not enter their profession to engage in policy formulation or public engagement. However, in this age of widespread mis- and disinformation, it is becoming ever more critical for academics to help outsiders grasp which claims can bear rigorous scrutiny.
My proposal is that scientists and science communicators should include a simple, prominent statement about the level of confidence for each public-domain output (paper, press release, oral presentation) they produce.
This might have minimal impact on those deliberately spreading misinformation, who will conveniently overlook the confidence statement if it fails to support their position, but at least it might stop some of the more reputable media from making gross extrapolations from papers that are provisional and exploratory.
The stated confidence level should not be nestled deep inside the core text of a paper. It should be presented at the start and be specifically aimed at general readership – including guidance on how far it is reasonable to extrapolate the findings.
One option is to allow for a free-text statement up to a strict word limit. For example: “The findings of this research attain a high degree of confidence within the remit of the study. However, there are highly restricting assumptions, so widespread adoption of the technique and findings requires substantial further research and independent corroboration.” Alternatively, the author might select from a list of options, perhaps akin to the nine levels of proof recognised in US legal standards, ranging from “some evidence” to “beyond reasonable doubt”.
It would be important to stress that lower-confidence papers are still potentially highly valuable. Such contributions would be seen as perfectly valid within the scientific community, and further research would hopefully build on them, either increasing the confidence level or finding pitfalls.
Part of the challenge is that, aside from the inability or unwillingness of politicians and others to respect uncertainty, scientists are themselves flawed individuals. As an enterprise, science has evolved for optimal handling of uncertainty. But in practice, scientists are human, with selfish needs like everyone else.
When someone boldly claims high confidence, they are inviting greater scrutiny, provoking others to repeat and corroborate or disprove their findings. Even so, they may be tempted to fraudulently claim high confidence to get a paper published and entice press attention, thus attracting many followers. Past events such as the false claim that the MMR (measles, mumps and rubella) vaccine causes autism suggest that fortune favours those who attract followers irrespective of whether their research is later discredited.
At the same time, even high-confidence papers should not be communicated as “pure fact”, any more than “beyond reasonable doubt” means that miscarriages of justice are impossible. There should be no shame when perfectly honest research that initially carried high confidence is later disproven. That is how science operates.
A contrary problem is that some authors might be too nervous to claim that their work has a high degree of confidence. So perhaps paper reviewers should be involved in the classification of confidence levels. Alternatively, expert panels could label some of the more publicised papers.
None of this is easy, and there are other pitfalls to confidence labelling that I don’t have space to address here. This article is just a conversation starter: we might need several iterations to get the solution right. But if the pandemic taught us anything, it is that the confidence of scientific findings must be better articulated to the public. Prominent statements on uncertainty might be at least part of the solution.
Gary A. Atkinson is associate professor and BEng robotics programme leader at the University of the West of England.