Data Dumb

We are increasingly being bombarded with numbers and simply told to ‘follow the data’.
Yet, we continue to suffer from an epidemic of statistical illiteracy.

In 2020, some 90 eminent scientists signed a letter to the governor of New South Wales, Australia, demanding the release of Kathleen Folbigg. She’s been in prison since 2003, having been convicted over the deaths of her four children. Pleas for her release have gone unheeded before, but this time it is hoped the weight of authority will be heard. Many of these scientists are pathologists who can provide sound medical explanations for the deaths of these children. Others are geneticists, who explain that two of the children carried a rare mutation known to be a cause of cot death.

More strangely, though, others are statisticians. Why so? Because key to the prosecution’s case was the idea that the chances of all four children dying of natural causes were so statistically improbable as to be all but impossible. It’s a line of attack that has often been used in courts around the world – against Angela Cannings in 2002, convicted of smothering two of her children; against Donna Anthony, who spent six years in prison after the deaths of her two children; and, perhaps most famously, against the British solicitor Sally Clark, who served three years before being exonerated.

The problem? The prosecution’s claim – unbeknownst, it seems, to lawyers, judge and jury alike – is statistically illiterate. Indeed, in the world of data analysis it even has a name: the prosecutor’s fallacy, in which the probability of innocence, given the evidence, is wrongly assumed to be equal to the infinitesimally small probability that the evidence would occur if the defendant were innocent. Spelled out, the distinction becomes clear. Another case, for example, saw Dutch nurse Lucia de Berk sentenced to life imprisonment when it was argued that the chances of her being present at so many unexplained hospital deaths were 1 in 342 million. When statisticians looked at the data, they concluded that the sequence of events actually had a one in nine chance of happening to any nurse in any hospital.
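To make the gap between those two probabilities concrete, here is a minimal sketch of the calculation in Python, using purely illustrative numbers rather than figures from any of the cases above:

```python
# A toy illustration of the prosecutor's fallacy (all numbers invented).
# Suppose the evidence would arise for only 1 in 100,000 innocent people,
# for 1 in 2 guilty people, and that before seeing the evidence the
# probability of guilt is 1 in 10,000.

p_evidence_given_innocent = 1 / 100_000   # the tiny number quoted in court
p_evidence_given_guilty = 1 / 2
p_guilty = 1 / 10_000                     # prior probability of guilt
p_innocent = 1 - p_guilty

# Bayes' theorem: P(innocent | evidence) = P(evidence | innocent) * P(innocent) / P(evidence)
p_evidence = (p_evidence_given_innocent * p_innocent
              + p_evidence_given_guilty * p_guilty)
p_innocent_given_evidence = p_evidence_given_innocent * p_innocent / p_evidence

print(f"P(evidence | innocent) = {p_evidence_given_innocent:.6f}")   # 0.000010
print(f"P(innocent | evidence) = {p_innocent_given_evidence:.2f}")   # about 0.17
```

With these made-up figures, the one-in-100,000 number quoted in court and the roughly one-in-six probability of innocence differ by four orders of magnitude – which is exactly the gap the fallacy conceals.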

The ramifications of this statistical illiteracy – a failure to understand not just the numbers themselves, but also how the numbers were reached – have perhaps never been clearer than during the Covid-19 pandemic, during which we were bombarded with numbers and asked – as a justification for various extraordinary policies – to ‘follow the data’. Misinterpretation or misunderstanding of the data has also been cited as one reason why there was resistance to vaccination amongst some. This is hardly surprising given the state of basic numeracy: in the late 90s, one study found that 96% of high-school graduates were unable to convert 1% to 1 in 100, and 80% were unable to convert 1 in 1,000 to 0.1%. Some 46% were unable to estimate how many times a flipped coin would come up heads in 1,000 tosses – 25, 50 and 250 were the most common wrong answers. And a whopping 96% of the general population struggles with solving problems relating to statistics and probability. Small wonder that some have spoken of a collective phobia when it comes to thinking through data.

“We typically just don’t have the ability to ask the right questions of the data,” reckons Dr Niklas Keller, a Berlin-based cognitive scientist and co-founder of Simply Rational, a spin-off of the Max Planck Institute. “And this is especially true when not all actors present the data in a transparent way. We also struggle with societal aspects – the still widespread illusion of certainty, for example. Studies show that people tend to think of DNA and fingerprint tests as certain, but in fact they’re not. That just reflects our desire for certainty.”

But our statistical illiteracy is in part also a matter of the way our brains are wired – ready to assess data that is immediately relevant to our survival, not the data that now spills out of an ever more complex, interconnected world in which events are closely counted. Cognitive biases such as framing, for example, lead most of us to conclude that a disease that kills 200 out of 600 people is considerably worse than one in which 400 out of 600 survive – even though the two statements describe exactly the same outcome. We’re wired to find patterns, but that messes with our ability to make sense of data. We think in terms of small number runs and short-term trends, not the overall picture.

According to recent studies by Sam Maglio, assistant professor at the Rotman School of Management, University of Toronto, we even perceive momentum in data. If a figure – say, the percentage chance of rain – goes up, we assume the trend will continue. What’s more, changes in probability shape our behaviour in different ways even when they arrive at the same final figure: we’re more ready to take a punt on a potentially corked bottle of wine if the risk has fallen from 20% to 15% than if it has risen from 10% to 15%.

“We’re meaning-making machines,” says Maglio, “and it’s tough to do the mental legwork that reveals that our conclusions about the numbers might in fact be otherwise. In a way we set ourselves up for failure because there’s so much data around now that we’re more inclined to use short cuts in assessing it. They get us to the right answers sometimes, but not all the time. But then if we properly thought through all the data we’re presented with now, we’d have no time for anything else.”

The celebrated historian of science Stephen Jay Gould also argued that our culture privileges feelings as more ‘real’ than thoughts, heart over intellect, with a particular contempt felt for statistics. He cites Hilaire Belloc: “Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death.”

But then, as Stefan Krauss, Professor of Mathematics at the University of Regensburg, stresses, statistics are a relatively new idea: compared to other mathematical disciplines going back millennia, the probability calculus, notably, is a mere three centuries old. “And that’s really too late for it to be responded to naturally,” he argues. “You might even conclude that our need for probability is not that high, or we’d have formalised its use for much longer.”

The worrying aspect is that this data illiteracy reaches beyond the general public and into more specialist circles. It’s not just lawyers who don’t understand statistics – or, perhaps, wilfully misuse them. Some government officials don’t get them, either. In 2007, the former New York mayor Rudy Giuliani defended his nation’s profit-led health system by declaring that a man’s chance of surviving prostate cancer in the US was twice that of a man using the “socialised medicine” of the UK’s nationalised health service. But this is a misreading of the data that discounts the different ways the figures were collected – which becomes obvious when the near identical mortality rates for the two countries are considered.

And it’s not just patients. Some of their doctors don’t get the data, either. Experiments by the Harding Center for Risk Literacy in Berlin, led by director Gerd Gigerenzer, have for example shown that only 21% of gynaecologists could give the correct answer regarding the likelihood of a woman having cancer, given certain basic preconditions about cancer rates (known to statisticians as ‘prevalence’), the probability of testing positive if she has cancer (‘sensitivity’), and the probability of testing positive even if she doesn’t have cancer (‘false alarm rate’). That 21% figure is worse than if the gynaecologists had answered at random. A 2020 study likewise found that doctors overestimate the pre-test probability of disease by between two and 10 times. A JAMA Internal Medicine study this year found practitioners overestimating the probability of breast cancer by a huge 976%, and of urinary tract infection by a disturbing 4,489%. And positive tests send them into a tailspin of overestimation – they’re just not very good at interpreting the data, typically confusing a test’s sensitivity with the chance that a positive result means disease.
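For readers who want to see the arithmetic behind that gynaecologist question, here is a small sketch with assumed, illustrative numbers of the kind used in such tests – not the exact figures from the Harding Center experiments:

```python
# Illustrative numbers only -- assumed for this sketch, not taken from the study.
prevalence = 0.01        # P(cancer): 1% of women in the screened age group
sensitivity = 0.90       # P(positive test | cancer)
false_alarm_rate = 0.09  # P(positive test | no cancer)

# The question: a woman tests positive -- how likely is it she actually has cancer?
p_positive = sensitivity * prevalence + false_alarm_rate * (1 - prevalence)
p_cancer_given_positive = sensitivity * prevalence / p_positive

print(f"P(cancer | positive test) = {p_cancer_given_positive:.0%}")   # roughly 9%
```

With these assumed numbers, fewer than one woman in ten with a positive result actually has the disease – a long way from the 90% that the test’s sensitivity seems, at first glance, to suggest.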

The same tendency towards overestimation can be seen in the way judges assess evidence, or in expectations of high stock market returns. For all that brokers estimate they’ll provide 40% returns – and their clients buy into this – the general US stock market has returned no more than 8% per annum before fees over a long period.

All this matters because without understanding data, we’re left open, not just to disappointment in our stock portfolio, or to medical mistreatment – which puts a lack of understanding of data well within the definition of an ethical problem – but also, as Gerd Gigerenzer has put it, “to political and commercial manipulation of [our] anxieties and hopes, which undermines the goals of informed consent and shared decision-making”.

“Statistics are everywhere. And statistical literacy is something we all attempt every day – we balance risk and probability all the time,” says Dr Simon White, senior investigator statistician at the University of Cambridge, and one of the Royal Statistical Society’s ‘statistical ambassadors’ – part of the organisation’s bid to improve statistical understanding. “It’s when the numbers get attached – to a mortgage, employment, our health – that they really hit home. But analysing all this data around us is really just a specialised branch of critical thinking. And the problem is poor critical thinking. When people think the numbers through, they can see their answer is ridiculous.”

Is there a solution to this widespread data illiteracy? Especially when even statisticians are split on the application of Bayesian thinking? That’s the increasingly commonplace, if still controversial, approach named after the 18th-century mathematician Thomas Bayes. Essentially, it’s the idea that it makes sense, when assessing new data, to take into account what we already know – the ‘base rate’, and as much other relevant prior information as possible – in making decisions. Each new piece of evidence then strengthens or weakens that prior belief, making for an ongoing cycle of evaluation and updating.
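As a rough illustration of that cycle – with entirely made-up numbers – a few lines of Python show how a prior belief is revised as each new observation arrives, the posterior from one step becoming the prior for the next:

```python
# A minimal sketch of Bayesian updating (all numbers invented): start from a
# base rate, update on a piece of evidence, then treat the result as the new prior.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    return numerator / (numerator + p_evidence_if_false * (1 - prior))

belief = 0.01                                  # base rate: the hypothesis starts out unlikely
for likelihoods in [(0.8, 0.1), (0.7, 0.2)]:   # two observations that favour the hypothesis
    belief = update(belief, *likelihoods)
    print(f"updated belief: {belief:.2f}")     # roughly 0.07, then 0.22
```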

Certainly, statisticians argue that education is key to any improvement. Some have argued that, as data becomes ever more part of our daily lives, statistical thinking should be privileged in schooling every bit as much as reading, writing and basic maths (much of which focuses on hard concepts like geometry and algebra that rarely find application in the real world). Indeed, such statistical thinking, as the science fiction writer H.G. Wells proposed back in 1936, is indispensable for effective citizenship in a technological world. It has even been suggested that it is not part of general education precisely because most people don’t yet know what they don’t know – the fact that their data interpretation, and that of others, is so often wrong.

This statistical literacy also needs to be ramped up further in legal and medical training – in the latter case it could minimise the often huge differences in diagnoses between consultants, for example. The mathematician Paul Lockhart has argued that it needs to be taught with some flourish, too, to stop it being perceived as boring, as it easily might be. Currently, he says, it’s taught as one might teach music if lessons were all notation and never any actual sound. Great feats of mathematical thinking, he argues, need to be celebrated much as great art is.

As Mark Twain noted: “There are lies, damned lies, and statistics”. And the media certainly plays its part, too – not just through its habit of misinterpreting data to make over-confident forecasts, such as the probability of Hillary Clinton winning the 2016 presidential election, or of the UK voting against Brexit the same year. There is also the click-bait temptation to re-interpret statistical information to create a more attention-grabbing story. Relative changes make for instant drama; absolute changes are often humdrum – though few people, outside of those trying to sell a new drug, know the crucial difference. Add in the fact that journalists typically have no training in statistics either, and you have a recipe for yet more misunderstanding.

“Assessing data is so often a question of what data you’re given, the context, how the data is handed down from the experts, through PRs, through the press,” explains Stefan Krauss. “And all that’s exacerbated by the fact that, in particular, probabilities per se are not intuitive and are hard to visualise. If you say that the probability of rain tomorrow is 30%, for some that will mean 30% of the time, for others over 30% of an area… You have 30%, but 30% of what?”

Indeed, there is also the matter of how data is presented. As Keller illustrates, the Roman numeral system basically made it impossible to multiply or divide in one’s head – you carried an abacus for that. “And if we had the same system [in use] today, it would probably be argued that we have a ‘multiplication bias’. More often than not it’s the way [statistical] information is presented [that’s the problem], not the mind of the actor,” he says.

It didn’t help, then, that the mass media often presented data about Covid-19 deaths, for example, without context and on logarithmic graphs rather than linear ones – even though recent experiments by Alessandro Romano and colleagues suggest that most readers don’t understand them. Only 41% of respondents could answer a basic question about a logarithmic graph correctly, compared with 84% for a linear scale. More important still are the ramifications: respondents shown a linear-scale graph hold different attitudes and policy preferences towards the pandemic than those shown exactly the same data on a logarithmic one.
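To see how much the choice of scale changes the picture, here is a small sketch – assuming matplotlib is available, and with invented figures – that plots the same exponentially growing series on a linear and a logarithmic axis:

```python
# Plot one invented, exponentially growing series on both scales.
import matplotlib.pyplot as plt

days = list(range(30))
cases = [10 * 1.2 ** d for d in days]          # steady 20% daily growth

fig, (ax_linear, ax_log) = plt.subplots(1, 2, figsize=(8, 3))
ax_linear.plot(days, cases)
ax_linear.set_title("Linear scale: growth looks explosive")
ax_log.plot(days, cases)
ax_log.set_yscale("log")
ax_log.set_title("Log scale: the same data is a straight line")
plt.tight_layout()
plt.show()
```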

“The fact is that if you do work with data it’s unreasonable to expect people who don’t to grasp it in the same way, yet that’s what we do all the time,” says Romano. “Finding new ways to present data matters, because while data has always been important, now, understanding it, and understanding that it’s always selling a message, is crucial to understanding the world – which in turn is making it obvious just how much most people don’t get it. And my hunch is that even people who think they understand data, do so less than they think they do.”

Indeed, expressing data, and probabilities in particular, as percentages tends to be confusing for many, but expressing them in terms of what are called natural frequencies – i.e. not 80% of people, but 8 out of 10 people – proves much easier for our brains to grasp and remember. Another recent study shows that performance rates on statistical tasks increased from 4% to 24% when using a natural frequency format. That’s also true when statistics using natural frequencies are presented to doctors and judges – reasoning improves massively.
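Restating the earlier screening sketch in natural frequencies shows why the format helps. Using the same assumed numbers as before, but as counts in a concrete group of women:

```python
# 1,000 women; 10 have cancer (1% prevalence). Of those 10, 9 test positive
# (90% sensitivity). Of the 990 without cancer, about 89 also test positive
# (9% false alarm rate). All figures assumed for illustration.
true_positives = 9
false_positives = 89
positives = true_positives + false_positives

print(f"{true_positives} of the {positives} women who test positive actually have cancer "
      f"(about {true_positives / positives:.0%})")
```

Counting people rather than juggling conditional percentages makes the answer – roughly one in ten – almost immediate.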

As for the other 76% of people in that study, studies by Stefan Krauss have found that they have a frustrating tendency to convert such figures back into more complex probabilities – and then get the wrong answer. That is, of course, if natural frequencies have been used at all. They rarely are – both because scientists who work with such data tend to think of natural frequencies as a less serious form of mathematics, and because teachers, too, are rarely trained to think in terms of them.

Take, for example, the way the effectiveness of cancer screening has been presented in the UK, as providing a “20% mortality reduction” – which is typically interpreted, wrongly, as meaning that 20% of women are saved by undergoing the procedure. The real figure is about one in 1,000: four out of every 1,000 screened women die from the disease, compared to five out of every 1,000 unscreened women. Sure, that’s a 20% difference, but putting it that way hardly gets to the truth of the matter.
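The arithmetic behind those two framings is simple enough to spell out. A quick sketch using the figures quoted above:

```python
# Relative versus absolute risk reduction, using the screening figures above.
deaths_per_1000_screened = 4
deaths_per_1000_unscreened = 5

lives_saved = deaths_per_1000_unscreened - deaths_per_1000_screened
relative_reduction = lives_saved / deaths_per_1000_unscreened
absolute_reduction = lives_saved / 1000

print(f"Relative risk reduction: {relative_reduction:.0%}")    # 20%
print(f"Absolute risk reduction: {absolute_reduction:.1%}")    # 0.1%, i.e. one woman in 1,000
```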

The consequences of this slipshod handling of data can be profound. In 1995 the UK’s Committee on Safety of Medicines warned that a new oral contraceptive pill doubled the risk of thrombosis. Thousands of women consequently came off the pill – and the following year a spike in unwanted pregnancies was recorded, as well as an estimated 13,000 additional abortions. But that alleged ‘doubling’? It amounted to an increase in risk from one in 7,000 to two in 7,000. All too often, an assessment of the relative risk – how it gets bigger or smaller – leaves out the size of the risk to start with. Putting it in absolute terms gives a very different picture. We need more of that simplicity.

“We do need to address this issue – how statistics are handled, and our ability to interpret them,” says Simon White. “This isn’t easy – there’s even a linguistic factor in our understanding of numbers. But it’s not just a matter of statistical literacy being central to that ideal of the engaged citizen. It’s that the world is complicated, so statistics are everywhere. Deciding where to put charging points for electric vehicles, for instance – that’s a statistical problem. We can’t just have those decisions based on data right up to the point where someone just decides the answer is ‘on every corner’.”