As I was writing this piece at the end of January, media reported that there had been 29 mass shootings in the U.S. with 70 dead during the first three weeks of 2023. No, this is not a polemic in favor of gun control (although it could be). It is a meditation on how we perceive and act on the risk of harm.
Regulation, in Malcolm Sparrow’s phrase, is about the prevention of harms. To prevent harms, we need to understand how they might happen; we need to understand risk.
A conventional and useful risk register has two metrics: the severity of the harm if it happens and the likelihood of it happening. The more severe the harm, the greater the need for prevention. Minor and rarer harms may reasonably be ignored. But what about rare but very severe harms? Surely they cannot be ignored, despite their improbability? And what about minor harms that affect large numbers of people? Their individual effect may be trivial, but cumulatively they may cause significant harm.
It seems, however, that we don’t really see things that way. Our perception of risk in many parts of our lives differs from the actual risk of harm, and this distorts regulatory policy.
Research carried out at Swansea University in Wales looked at our psychological attitudes to certain risks and, in particular, the public’s attitudes to risk associated with driving cars. The researchers found that people tolerated a higher level of risk associated with driving than with other forms of transport: 61% of people agreed that risk was “a natural part of driving,” but only 31% agreed with the statement when driving was changed to “going to work.”
Similarly, 75% of people agreed with the following statement: “People shouldn’t smoke in highly populated areas where other people have to breathe in the cigarette fumes.” But when only two words were changed, modifying the statement to, “people shouldn’t drive in highly populated areas where other people have to breathe in the car fumes,” only 17% agreed. The researchers point out that people are using different standards when expressing their tolerance of risks of harm caused by motoring as compared with other activities.
This difference in our attitudes to risk, as contrasted with our understanding of potential or actual risk, has a significant impact on public policy. People are not saying that motorcars do not cause harms, but that a level of harm arising from motoring is more acceptable than the same harms (death, injury, lung diseases) arising from other causes.
When policymakers and regulatory bodies themselves assess risk, they talk about potential risks (something might happen) and actual risks (something has happened and most likely will again). What we rarely consider, when making choices about which risks to mitigate and why, is our own biases as members of the public, our willingness to tolerate or fear certain causes of harm over others.
Even if, as policymakers, we overcome our own biases and try to make the right decision based on a rational risk assessment, resistance from the public is hard to overcome.
Convenience trumps harm.
So why does our tolerance of risk, in some circumstances but not all, bear little relation to the actual risk of harm? I suggest it is because our approach to risk of harms is more cultural than scientific. What or who we choose to regulate and to what extent is driven as much by sentiment, lifestyle, and politics as it is by facts. Which brings me back to mass shootings in the United States.
The rest of the world looks on in amazement at the tolerance of Americans for gun violence. That such violence is merely a regrettable fact of life (or death) seems to be the prevailing view. Just as harm from driving has been traded off against convenience, so, as Professor Sonali Rajan of Columbia University says, Americans "have long normalized mass death in this country. Gun violence has persisted as a public health crisis for decades." She notes that an estimated 100,000 people are shot every year and some 40,000 die. "Gun violence is such a part of life in America now that we organize our lives around its inevitability."
Regulatory policymakers need to take account of potential risks of harm, actual risks of harm, and their own and others’ perceptions of the priority of action against the causes of harm. “How many dead children are acceptable?” they should ask themselves. An uncomfortable question, but if the answer is “none” then it is us, not the regulators, who are to blame for inaction.
Harry Cayton is a sought-after global authority on regulatory practices who created the Professional Standards Authority (PSA) and pioneered right-touch regulation. He is a regular Ascend Voices contributor.