AI in regulation: Can it create less noise?
Anna van der Gaag
As we have seen in many industries, AI carries enormous potential. But can that potential carry over to the world of regulation? Anna van der Gaag's work has taken her deep into the intersection of technology and regulation. In this article, she explores exciting research findings on AI in a regulatory context and shares some encouraging signs.

One of the less discussed impacts of lockdown life is that it opened up so many opportunities for remote learning. Stuck at home, bereft of face-to-face contact, and with computers for company, we have used them more than ever to feed our curious nature.

In May last year, I logged on to the wonderful International Hay Festival and listened to Daniel Kahneman, Cass Sunstein, and Olivier Sibony in conversation about their new book, Noise. It is a book about variability in human judgment and how to bring more ‘hygiene’ into human decision making: reduce unwanted variation and create a less ‘noisy’ society where there is less error, less harm, and more fairness. Sibony suggested that the application of simple rules and algorithms can actually reduce noise, but warned that progress has been slow because, as a society, we tend to be unforgiving of AI-driven decisions and, by contrast, very forgiving of our own human errors.

New technology tends to create fear of one kind or another, and perhaps nowhere more so than on the internet. Sinan Aral writes eloquently about the promise and perils of new technology, from the spread of misinformation within targeted, tight-knit communities to the raising of millions of dollars in a matter of minutes for individuals, communities, and nations struck by misfortune. Aral calls for a National Commission on Technology and Democracy in the U.S. to elevate the debate and to create online ‘guard rails’ that protect people from harm. Technology, he writes, is not good or evil in itself; it is the content, and how it is used, that matters, and that is what must be regulated. If the future holds more, not less, connectivity for humanity, let us reflect on what kind of future we want to create.

AI has enormous potential to help regulators reduce costs, improve quality, and unleash the power of data

What do these debates about noise or National Commissions have to do with AI and regulation?  

Regulators, like other government agencies, are just at the starting blocks when it comes to applying algorithms to their work, despite the fact that algorithms are everywhere – in our entertainment, our retail choices, and our communications. Every large corporation you can name is using algorithms to improve its design, marketing, processing, and delivery functions.

In a thoughtful piece called Government by Algorithm, David Engstrom and his colleagues at Stanford and NYU School of Law explore why government agencies have largely lagged behind private corporations in exploring the use of these tools. The potential to reduce costs, improve the quality of decisions, and unleash the power of administrative data is increasingly being recognized.

We are moving algorithms from the shadows and onto the main stage, but to do this, government agencies must do two things. Firstly, they must increase their in-house technical capacity, namely hardware, software, and data scientists. Secondly, they must address the accountability challenges that AI poses. Achieving the first will cost money. Achieving the second means bringing transparency into the process and showing how decisions are made. If we get this right, says Engstrom, we may actually render enforcement decisions more traceable than dispersed human judgments. In Sibony’s words, if we get this right, we create less noise.
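What that traceability could look like in practice is easier to see with a simplified example. The sketch below is not drawn from Engstrom’s report or from any regulator’s system; the record fields, file name, and values are illustrative assumptions. It shows one common pattern: writing every algorithmic recommendation to a structured audit log, together with the inputs, model version, and rationale behind it, so the decision can be reviewed later.

```python
# Illustrative sketch only: an audit trail for algorithmic recommendations.
# Field names, file path, and values are hypothetical.
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json


@dataclass
class DecisionRecord:
    """One auditable entry per algorithmic recommendation."""
    case_id: str
    model_version: str
    inputs: dict          # the features the model actually saw
    recommendation: str   # e.g. "low risk - no regulatory action"
    rationale: dict       # e.g. feature contributions or rules triggered
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def log_decision(record: DecisionRecord, path: str = "decision_audit.jsonl") -> None:
    """Append the record as one JSON line so decisions can be reviewed later."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


# Hypothetical usage: the case and values are invented for illustration.
log_decision(DecisionRecord(
    case_id="2021-0042",
    model_version="triage-model-0.3",
    inputs={"allegation_type": "medication error", "prior_complaints": 0},
    recommendation="low risk - no regulatory action",
    rationale={"prior_complaints": -0.8, "allegation_type": 0.3},
))
```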

Testing AI in a regulatory context to help with complaints  

In 2019, the National Council of State Boards of Nursing (NCSBN) funded a small team of regulators, lawyers, computer scientists, social scientists, and ethicists from the U.K. and Canada to test whether or not we could get this right in a health regulatory context. As far as we know, this was the first time a university research team had embarked on building an algorithm for use in health complaints handling. We were fortunate to work with three regulatory bodies in three jurisdictions: the U.S., Australia, and the U.K., all of whom were interested in the proposition that AI might be able to help with their decision-making processes by quality-assuring human judgment at the early stages of decision making about a complaint.

Our earlier studies showed us that the majority of complaints to health professional regulators result in no regulatory action. We also know that regulatory data points to a small number of high-risk individuals, and to culture and context, as key to maintaining safety. Our aim was to bring more nuance into the decision-making process and to design a tool that could distinguish between high-risk and low-risk cases: a decision-support tool whose output would then be compared with human judgment on each case.

With the transparency principle in mind, we involved regulatory staff in the design and testing of the tool, granting regulators access to the code so they could understand how the tool arrived at its risk predictions. In addition, the tool was designed to provide comparisons with previous similar cases and to cross-reference the regulatory rules or codes that related to the case. Feedback from those who worked with us was positive. The case managers we worked with could see the potential of the tool to expedite decision making, improve their individual and collective confidence in the consistency of decisions, and reduce the stress associated with their day-to-day work. By the end of the project, we had a tool at proof-of-concept stage, ready to be taken forward by each regulator for further testing.
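To make the general shape of such a tool concrete, here is a minimal sketch. It is not the project’s actual tool: the complaint summaries, labels, and model choices (a TF-IDF representation with a logistic regression classifier and nearest-neighbour retrieval, via scikit-learn) are illustrative assumptions. The point is only to show how a risk score, the terms that drove it, and a handful of similar past cases might be surfaced together for a case manager to weigh against their own judgment.

```python
# Illustrative sketch of a complaint-triage decision aid, not the project's tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

# Hypothetical training data: short complaint summaries with human-assigned labels.
past_summaries = [
    "administered wrong medication dose, patient harmed",
    "late paperwork, no patient contact involved",
    "practised while impaired, repeated prior warnings",
    "single rude remark to colleague, apologised",
]
past_labels = [1, 0, 1, 0]  # 1 = high risk, 0 = low risk

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(past_summaries)

classifier = LogisticRegression().fit(X, past_labels)
neighbours = NearestNeighbors(n_neighbors=2).fit(X)


def triage(summary: str) -> dict:
    """Return a risk score, the terms that drove it, and similar past cases."""
    x = vectorizer.transform([summary])
    risk = classifier.predict_proba(x)[0, 1]

    # Transparency: show which terms pushed the score up or down.
    weights = dict(zip(vectorizer.get_feature_names_out(), classifier.coef_[0]))
    contributions = {
        term: round(float(weights[term]), 2)
        for term in vectorizer.inverse_transform(x)[0]
    }

    # Comparison with previous similar cases, as described above.
    _, idx = neighbours.kneighbors(x)
    similar = [past_summaries[i] for i in idx[0]]

    return {"risk_score": round(float(risk), 2),
            "term_contributions": contributions,
            "similar_cases": similar}


print(triage("gave incorrect medication dose during night shift"))
```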

We have some way to go, but we are confident this work brings health regulators a step closer to two of the three ambitions explored in Engstrom’s paper on the future of AI in the work of government: reducing cost and improving quality. The third, unleashing the power of collective data, lies some way beyond the horizon. But there is the promise of less noise along the way.

Acknowledgements  

The work described in this piece was funded by the Center for Regulatory Excellence of the U.S. National Council of State Boards of Nursing (NCSBN). The Journal of Nursing Regulation published the first peer-reviewed paper on our work last fall. The interdisciplinary team included Robert Jago, Kostas Stathis, Ivan Petej, Piwayat Lertvittayakumjorn, Yamuna Krishnamurthy, and Michelle Webster at Royal Holloway, University of London; Ann Gallagher, University of Exeter; Zubin Austin, University of Toronto; and the staff of three regulatory bodies: the Texas Board of Nursing, the Nursing and Midwifery Council (UK), and the Australian Health Practitioner Regulation Agency. I am grateful to all my colleagues and collaborators.

Written by Anna van der Gaag
Anna van der Gaag is a Visiting Professor of Ethics and Regulation at the University of Surrey, with an interest in what makes and breaks good care. A former Chair of the Health and Care Professions Council, she currently chairs the Midwifery Panel at the Nursing and Midwifery Council and the Advisory Board on Safer Gambling, and works with health professional regulators in a range of other jurisdictions on disruptive innovation, quality improvement, and maintaining good governance.
