AI in regulation: Applications and use cases
How can artificial intelligence be used to advance the daily work of regulation? In this Insight piece, we explore the potential applications of AI in rulemaking, adjudication, complaint handling, and other regulatory processes.

Thentia is a highly configurable, end-to-end regulatory and licensing solution designed exclusively for regulators, by regulators.

Artificial intelligence – the application of computer science to imitate human thinking and judgment – is advancing at a disorienting rate. In the headlines, stories abound about AI art, chatbots, deepfakes, and other applications of the 21st century’s most revolutionary tech advancement. Of course, with these advancements have come calls to regulate AI from stakeholders in both the public and private sectors.

But amidst these debates, and amidst the proliferation of AI in the private sector overall, there is another narrative unfolding. This narrative – the application of AI in the public sector, particularly in the work of regulation – is perhaps less prevalent because it is still in a relatively early stage. Indeed, a 2019 report from the Organization for Economic Co-operation and Development (OECD) observes that in adopting the technology, governments have trailed substantially behind the private sector.

The exact extent to which AI is currently used in the public sector is somewhat unclear, but in recent years, there have been attempts to capture its prevalence in government operations around the world. In 2020, for example, researchers from Stanford University and New York University set out to determine the scope of AI implementation within the U.S. federal government, finding that 45% of federal agencies had experimented with AI and related machine learning tools.

Their study, Government by Algorithm: Artificial Intelligence in Federal Administrative Agencies, found AI had several core applications in public sector work, including:

  • Enforcing regulatory mandates.
  • Adjudicating government benefits and privileges.
  • Monitoring and analyzing risks to public health and safety.
  • Extracting information from large data streams.
  • Communicating with the public about rights and obligations.

Though government AI technology ranges in complexity from conventional machine learning tools to more complicated “deep learning” systems, the study found that only 12% of public sector AI systems could be classified as highly sophisticated. It stands to reason that as long as agencies are using less sophisticated tools, they will be less able to fully leverage the potential of AI in revolutionizing their daily operations.

Interestingly, the study also argued that the technical capacity to increase AI proliferation in the public sector must come from within government agencies, rather than through third-party contractors. Researchers observed that in-house expertise helps to ensure AI tools are customized to meet the specific needs of government agencies and implemented in lawful, accountable ways.

In other words, even though the private sector is substantially ahead of the curve on AI, governments must not rely on it too much in building up their own capabilities. But what specific applications does AI have in public sector work? What are the use cases for government agencies to implement this technology? As it turns out, there are quite a few.

In her December 2022 column for The Regulatory Review, “A regulatory reboot cannot neglect artificial intelligence,” Professor Nicoletta Rangone provides examples of regulatory AI applications as well as recommendations for the OECD to support the careful and effective use of AI in rulemaking. Rangone’s contribution builds, in turn, on the OECD Regulatory Policy Outlook 2021, and offers deep insight on AI implementation by governments around the world. A handful of her examples will be discussed in this article.

Applications of AI

Adjudication

One realm in which AI offers promise is adjudication. Although most governments have not developed the capability to completely automate their adjudicative processes (or given themselves the legal power to do such a thing), AI can still be used to find evidence of regulatory non-compliance. France, for example, recently passed legislation that allows tax authorities to comb through social media data using AI to find evidence of tax fraud.

It is important, however, to be cautious about using AI in punitive processes, as algorithms do not always do their jobs perfectly. This is evidenced by the Dutch childcare benefits scandal: Netherlands Prime Minister Mark Rutte's government resigned in 2021 after investigations found its algorithmic approach to addressing social benefits fraud had led to 26,000 innocent families being wrongfully accused of fraud over the course of nearly seven years.

An automated approach to penalizing low-level fraud, motivated in part by a political effort against immigrants and their perceived effect on the social safety net, led to families being driven to financial ruin after being forced to pay back money they did not owe. The automated system tasked with flagging instances of fraud had, for example, been programmed to highlight people with dual nationalities as potential delinquents.

A similar event occurred in Australia between 2016 and 2020, during which time an algorithmic approach to collecting alleged overpayments from citizens was implemented with little human oversight, leading to the errant collection of $721 million from nearly 400,000 Australians, many of whom were already financially vulnerable. Both cases underscore the potential danger of AI in levying disciplinary actions without the appropriate level of transparency and human involvement.

Decision-making and complaint handling

As we have touched on in Ascend Magazine, AI has the potential to reduce noise – the unpredictability of human judgment – in the process of regulatory decision-making. In Anna Van Der Gaag’s March 2022 column, she referenced several works providing evidence that the technology, though currently underused in the public sector compared to the private sector, can help to reduce costs, assist with decision-making, and leverage administrative data more effectively than ever before.

Van Der Gaag, a professor of ethics and regulation, also discussed a study in which she and several other researchers attempted to use AI to distinguish high-risk complaint cases from low-risk complaint cases in the health care field. By the end of the study, the researchers had created a functioning and effective AI tool at proof-of-concept stage which could then be taken for further testing by individual regulators.

The study’s results highlight the potential of AI to reduce costs and improve quality in terms of decision-making when handling complaints. Indeed, many government agencies are already using AI to collect and process data regarding complaints, like the Bank of Italy, which in 2022 began using algorithms to organize client complaints and identify potential instances of non-compliance.
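The triage idea behind such tools can be illustrated with a toy text classifier. The Python sketch below is an illustration only, not the model from Van Der Gaag's study or the Bank of Italy's system; the complaint summaries, labels, and naive Bayes approach are all invented for the example. It learns which words are associated with high-risk cases and scores a new complaint accordingly:

```python
import math
from collections import Counter, defaultdict

# Hypothetical training data: short complaint summaries labeled by risk level.
TRAINING = [
    ("patient given wrong medication dose", "high"),
    ("practitioner ignored allergy warning", "high"),
    ("surgical error caused serious harm", "high"),
    ("receptionist was rude on the phone", "low"),
    ("billing statement arrived late", "low"),
    ("waiting room was overcrowded", "low"),
]

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Fit a multinomial naive Bayes model: per-class priors and word counts."""
    class_counts = Counter(label for _, label in examples)
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in examples:
        for word in tokenize(text):
            word_counts[label][word] += 1
            vocab.add(word)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    """Return the most probable risk label, using add-one smoothing for unseen words."""
    total = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

model = train(TRAINING)
print(classify("wrong medication given to patient", *model))  # → "high"
```

A production triage tool would use far richer features and much more training data, but the principle is the same: the model flags complaints whose language resembles past high-risk cases so that human reviewers can prioritize them.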

Rulemaking

AI can also be implemented in the creation and review of regulatory rules. For one thing, algorithmic tools can scan and evaluate hundreds of thousands of regulations across varying jurisdictions far more quickly than human reviewers, determining instances in which rules are not harmonized and identifying cases in which new rules contradict old ones.
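As a rough illustration of how such a scan might work, the Python sketch below compares rule texts pairwise and flags near-identical rules whose wording diverges, which is exactly where harmonization problems tend to hide. The rule texts and jurisdiction names are invented for the example, and real tools would use far more sophisticated language models than simple word overlap:

```python
from itertools import combinations

# Hypothetical rule texts from two jurisdictions (illustrative only).
RULES = {
    "State A, Rule 12": "Licensees must complete 20 hours of continuing education annually",
    "State B, Rule 7":  "Licensees must complete 30 hours of continuing education annually",
    "State A, Rule 45": "Applicants shall submit fingerprints with each renewal",
}

def jaccard(a, b):
    """Word-overlap similarity between two rule texts (0 = disjoint, 1 = identical)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

def flag_overlaps(rules, threshold=0.7):
    """Return pairs of rules similar enough, yet not identical, to warrant a harmonization review."""
    flagged = []
    for (id_a, text_a), (id_b, text_b) in combinations(rules.items(), 2):
        score = jaccard(text_a, text_b)
        if score >= threshold and text_a != text_b:
            flagged.append((id_a, id_b, round(score, 2)))
    return flagged

print(flag_overlaps(RULES))  # → [('State A, Rule 12', 'State B, Rule 7', 0.8)]
```

Here the two continuing-education rules share almost all their wording but differ on the hours required, so the pair is surfaced for a human reviewer to reconcile, while the unrelated fingerprint rule is ignored.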

Rulemaking is foundational to the work of regulation, and though it often requires discernment and consideration that can only be offered by humans, there is still quite a bit of statistical analysis that goes into the process, and this is perhaps where AI can come into play. Indeed, Government by Algorithm found that many agencies had already begun to incorporate AI into their rulemaking processes.

The Food and Drug Administration (FDA), for example, uses AI to conduct post-market surveillance, which involves collecting and monitoring millions of adverse event reports through the FDA Adverse Event Reporting System (FAERS). This, in turn, informs the administration’s rulemaking, standard-setting, and guidance efforts.

In the case of the FDA, the use of AI is directly shaping the agency’s regulatory approach, pushing officials toward prioritizing post-market surveillance – which is where the bulk of data can be collected, once drugs are widely distributed in the market – in addition to their more traditional focus on pre-market approval.
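One simple way to picture this kind of surveillance is as anomaly detection over report counts. The sketch below is a toy illustration with invented numbers, not the FDA's actual methodology: it flags any week whose report volume sits far above the recent baseline, the sort of signal that might prompt closer regulatory scrutiny of a product:

```python
import statistics

# Hypothetical weekly counts of adverse event reports for one drug.
weekly_reports = [102, 97, 110, 105, 99, 108, 103, 187]

def flag_spikes(counts, window=6, z_threshold=3.0):
    """Flag indices whose count exceeds the preceding window's mean by more
    than z_threshold standard deviations."""
    spikes = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline)
        if stdev and (counts[i] - mean) / stdev > z_threshold:
            spikes.append(i)
    return spikes

print(flag_spikes(weekly_reports))  # → [7]: the jump to 187 reports
```

The final week's count of 187 stands roughly sixteen standard deviations above the baseline of around 100 reports, so it is flagged while ordinary week-to-week fluctuation is not.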

Professional licensing

As we have explored before, AI also shows promise in the world of professional licensing and credentialing. The sheer mass of data submitted for license applications and renewals, much of which has been submitted via paper-based processes, can create issues for regulatory staff, who are often tasked with converting this information to machine-friendly formats so that it may be digitally organized.

AI’s ability to recognize and interpret visual information submitted on forms can substantially transform the licensing process for regulatory agencies, making it simpler to create, maintain, and pull information from databases containing licensee information. This in turn can free up resources and staff to do the work of regulation that requires actual human judgment.
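Once handwriting or print has been converted to text by an OCR step, the remaining task is often structured extraction: turning free-form form text into database records. A minimal Python sketch, assuming a hypothetical form layout whose field names and formats are invented for the example:

```python
import re

# Hypothetical text as it might come back from an OCR pass over a paper
# renewal form; the layout and field names are illustrative only.
FORM_TEXT = """
Applicant Name: Jane Doe
License Number: RN-48213
Expiry Date: 2025-06-30
"""

# One pattern per expected field, each with a single capture group.
FIELD_PATTERNS = {
    "name": r"Applicant Name:\s*(.+)",
    "license_number": r"License Number:\s*([A-Z]{2}-\d+)",
    "expiry_date": r"Expiry Date:\s*(\d{4}-\d{2}-\d{2})",
}

def extract_fields(text):
    """Pull each expected field into a record suitable for a licensing database."""
    record = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = re.search(pattern, text)
        record[field] = match.group(1).strip() if match else None
    return record

print(extract_fields(FORM_TEXT))
```

Real licensing systems would pair the OCR and extraction steps with validation against existing records, but even this simple pattern shows how paper submissions can flow into structured data without manual re-keying.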

The future of AI in regulation

The potential for AI to transform regulatory processes cannot be overstated. Just as it is doing in the private sector, AI can help to automate a substantial amount of public sector work and allow agencies to redirect their resources more efficiently.

But, as we have observed, it must be done carefully, for implementing algorithms without human oversight or input may have consequences for regulators and the citizens they serve. If close attention is paid to the people, processes, and technology involved in the work of transformation, government agencies will be much better off as they implement AI and configure it to support their daily functions.

Written by the Ascend Editorial Team

Jordan Milian is a writer covering government regulation and occupational licensing for Ascend, with a professional background in journalism and marketing.