Benjamin Bestgen: Statistics and law
Benjamin Bestgen considers the pitfalls of statistics in his latest jurisprudential primer. See last week’s here.
Being falsely convicted of murdering one’s children is likely amongst the worst experiences any person can have. In 1999, solicitor Sally Clark was convicted of the murders of her two babies, each of whom had died at home within a few weeks of birth. The defence’s assertion of Sudden Infant Death Syndrome (SIDS) was rejected after a medical expert for the prosecution argued, with the help of statistics, that murder was more likely. The conviction was upheld on appeal in 2000. Mrs Clark served about three years of her sentence. Only a second appeal, challenging the statistical evidence and other issues with the prosecution’s case, led to the conviction being quashed. Her status as a solicitor convicted of child murder made her prison time particularly difficult. She never recovered from her complex trauma, developed psychiatric problems and alcohol dependency, and died in 2007.
Common wisdom says lawyers and law students suck at maths, though at least one study from 2013 suggests this may not be entirely true. But numeracy skills aside, an appreciation for logic and for asking the right questions is arguably a must-have in the profession. Sound reasoning makes us less prone to succumbing to cognitive biases, logical fallacies or framing effects.
In the Clark case, the prosecution’s expert witness relied on what he called “a rather crude aphorism but a sensible working rule” that one cot death is a tragedy, two is suspicious and three is murder until proved otherwise. He presented statistics of his own devising to claim that the probability of two cases of SIDS in an affluent, non-smoking family like the Clarks was exceptionally low (1 in 73 million), implicitly suggesting that murder was more probable. His probability analysis was decisive in Mrs Clark’s conviction, seemingly convincing the jury “beyond reasonable doubt”.
Statistics is a scientific field concerning the collection, analysis, interpretation, explanation and presentation of data. Each stage – from collection to presentation – is a complex sub-discipline in its own right. What a statistic can really tell us depends on what exactly was done, assumed, included or excluded, fact-checked and reasoned at every stage of the statistic’s creation. And while professionals like doctors normally learn some statistics, they are generally not expert statisticians. Neither was the prosecution’s expert in Clark: in 2001, the Royal Statistical Society felt compelled to publish a statement of concern, declaring his statistical approach invalid and its interpretation prone to logical errors, including the Prosecutor’s Fallacy.
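The arithmetic behind the discredited figure, and the fallacy in its courtroom use, can be sketched in a few lines. The 1-in-8,543 single-death estimate is the widely reported figure from which the 1-in-73-million claim was derived; the prior on double murder below is a purely illustrative assumption, not a figure from the trial:

```python
# Sketch of the two statistical errors identified in the Clark case.

p_sids = 1 / 8543  # reported SIDS estimate for one infant in such a family

# Error 1: the expert squared this figure, treating the two deaths as
# independent events -- ignoring shared genetic and environmental factors
# that make a second SIDS death in the same family more likely.
p_two_sids_if_independent = p_sids ** 2  # ~1 in 73 million

# Error 2 (the Prosecutor's Fallacy): treating P(evidence | innocence) as if
# it were P(innocence | evidence). Bayes' theorem requires comparing the
# rarity of double SIDS with the rarity of the rival explanation, double
# murder -- itself extremely rare. The prior below is invented for
# illustration only.
p_double_murder = 1 / 200_000_000  # hypothetical prior, for illustration

# Relative odds of double SIDS vs double murder, given two unexplained deaths:
odds_sids_vs_murder = p_two_sids_if_independent / p_double_murder

print(f"Squared figure: 1 in {1 / p_two_sids_if_independent:,.0f}")
print(f"Odds SIDS : murder ≈ {odds_sids_vs_murder:.1f} : 1")
```

Even taking the flawed squared figure at face value, under this illustrative prior the innocent explanation comes out the more probable of the two — the opposite of what the bare “1 in 73 million” invited the jury to conclude.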
When considering the likelihood of an event with the help of probabilistic reasoning, a core problem is how to communicate and interpret uncertainty. Statistician Nicky Best and colleagues (2012) presented a statistical model to help clinicians calculate the likelihood of child abuse in infants who experienced an acute life-threatening event (ALTE) in combination with nosebleeds. They concluded that estimating the probability of a child having been abused versus alternative innocent explanations is far more uncertain than expert advice or common literature on the subject suggests and depends heavily on assumptions made in individual cases.
On their best estimate, the probability that an infant presenting with ALTE and a nosebleed was abused lies between 15 and 26 per cent, depending on the assumptions made. They also flag that the probability of abuse could be as low as three per cent or as high as 51 per cent, depending on which percentiles of the data the interpretation focuses on. They conclude that while their model allows a clinician to estimate whether a child presenting with ALTE and a nosebleed was abused (e.g. versus a child with ALTE but without a nosebleed), the wide margin of uncertainty leaves ample room for reasonable doubt. Statistical uncertainty must therefore always be considered when presenting statistics in court or explaining them to a jury.
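A toy Bayesian calculation shows how sensitive such posterior probabilities are to the assumptions fed in. The priors and likelihoods below are invented for demonstration and are not the figures from Best and colleagues’ model:

```python
# Purely illustrative: modest changes in assumed priors and likelihoods
# swing a posterior probability of abuse across a very wide range.

def posterior_abuse(prior, p_evidence_if_abuse, p_evidence_if_innocent):
    """P(abuse | evidence) via Bayes' theorem for a two-hypothesis model."""
    numerator = prior * p_evidence_if_abuse
    return numerator / (numerator + (1 - prior) * p_evidence_if_innocent)

# (prior, P(evidence | abuse), P(evidence | innocent)) -- all hypothetical
scenarios = {
    "pessimistic assumptions": (0.05, 0.20, 0.01),
    "middling assumptions":    (0.01, 0.20, 0.01),
    "optimistic assumptions":  (0.01, 0.05, 0.02),
}

for name, (prior, p_e_abuse, p_e_innocent) in scenarios.items():
    p = posterior_abuse(prior, p_e_abuse, p_e_innocent)
    print(f"{name}: {p:.0%}")
```

With these invented inputs the posterior runs from a few per cent to over half — the same order of spread the authors report, and a reminder that a single headline percentage hides the assumptions that produced it.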
Disciplines like Jurimetrics or Law and Economics try to analyse legal issues using statistical, probabilistic and microeconomic methods. Such methods and the quantitative results they produce can be useful, but arguably place additional ethical and professional obligations on the experts who generate and explain them in court, or who advise financial institutions or the media. The prestige and perceived objectivity of mathematics and statistics in society are high, but comparatively few people can fully understand or readily challenge them. The risk of being misled by numbers, or of drawing unsound inferences from them, is considerable.
Ultimately, statistics are not empirical facts. They are the considered opinions of the people who generate them, ideally using their best available data, methods and judgement. As lawyers, we should remind ourselves that empirical evidence remains best: cases shouldn’t be decided on opinions, no matter how authoritatively they may be presented.
The author thanks Mr Warren Simmons for inspiration and discussion of this article.
Benjamin Bestgen is a solicitor and notary public (qualified in Scotland). He also holds a Master of Arts degree in philosophy and tutored in practical philosophy and jurisprudence at the Goethe Universität Frankfurt am Main and the University of Edinburgh.