AI requires people-centric regulation to succeed: Cayton
Artificial Intelligence has much to offer for good as well as for harm, and the need to regulate emerging AI technologies in some way has become apparent. In this article, Harry Cayton argues that instead of trying to regulate an entire international industry, AI regulation requires a precise approach that focuses on the people who create it and use it.


The free-market economist Milton Friedman said, “We know who benefits from regulation when we see who argues for it.” This came to my mind when I read in a July issue of Ascend’s weekly regulatory news that tech companies are pushing for self-regulation. Of course they are.

The phenomenon of regulatory capture is well known: the industry being regulated infiltrates its regulator and influences it to act in the industry's interest rather than the public's. Industries and professions have the motivation, resources, and expertise to take over their regulators.

The public has none of those things. We see regulatory capture everywhere, particularly in professional regulators that are still also trade associations, or where public utilities have been turned into private monopolies, as with the U.K. water companies. Those companies have been drained by their shareholders (including Canadian public-sector pension funds) of their reliable flow of monopoly income, while the flow of clean water has become ever less reliable. The new chief executive of the largest company, Thames Water, is the former chief executive of the water regulator, OFWAT.

The tech companies, it seems, are aiming to get ahead of the game. With their pre-emptive strike, they show they don't merely want to capture their future regulator; they want to own it.

All around the world we see jurisdictions struggling with how to regulate digital media in general and AI in particular. The U.K. government has already spent nearly seven years trying to get an immensely complicated and contested Online Safety Bill through Parliament. Percipiently, Friedman also observed that "the Internet is going to be one of the major forces for reducing the role of government."

Friedman died in 2006, so he never saw the growth of social media, but he was right in his recognition that governments would struggle to control this boundaryless, leaderless, accessible, diverse, amoral space and that people would embrace it enthusiastically for good or ill.

Global tech companies have economic power greater than that of many countries, and being multi-jurisdictional, they don’t need to care very much about regulation in the individual countries that contribute little to their income. Attempts by the U.K. government to break end-to-end encryption of messaging have been met by WhatsApp, Apple, and others with threats to close their U.K. accounts. Similarly, Canada’s law forcing Google and Facebook to pay providers for news has resulted in both saying they will stop news services, as indeed happened in Australia until a compromise was reached.

When governments start to take on the regulation of AI, chatbots, and self-managing robots, they will face unprecedented challenges and unprecedented opposition. I don't think we are even clear about who, what, or where to regulate, or for what purpose. Are we regulating to protect jobs, to safeguard the truth, to prevent crime, or to protect political systems and local economies? Tying down the giant that AI will grow into is no more likely to succeed than the Lilliputians' attempt to tie down Gulliver.

It seems to me that instead of trying to regulate AI, as though it were a single thing, we should identify the specific risks of harm that AI creates and target our regulatory approaches at preventing or mitigating those harms. We can probably do a lot by applying existing regulatory frameworks rather than creating an entirely new one.

So, the first question is, as so often, what harm are we trying to prevent? And the second, what outcome do we want? Fail to define these properly and regulation may create greater harms than those it is intended to prevent. As John Locke warned, "beware the danger of unintended consequences."

Data protection legislation is an example of an international approach that has worked reasonably well. It is similar, though not identical, across jurisdictions, and IT companies have adapted to the different requirements to maintain their business, for instance by setting up data storage in Europe to protect European citizens' data from the U.S. Patriot Act. The economic model of social media is based on a symbiotic relationship between their services and the value of data. This may be one area where regulation can be developed and extended.

Similarly, existing employment legislation might be used to protect jobs, or existing (if still underdeveloped) approaches to hate speech and fake news might be extended.

Artificial Intelligence has much to offer for good as well as for harm. In my opinion, regulation needs to focus on the people who make it and the people who use it. They have moral agency; they make choices; they must take responsibility and be held accountable.

So, my contribution to the debate about regulating artificial intelligence is this: instead of trying to regulate an entire international industry, we should be precise about the harms we are trying to prevent and direct existing tools toward mitigating those harms. In other words, we should regulate the outcomes of AI, not the inputs. The problem is that technological advances have always run ahead of regulators.

Harry Cayton is a sought-after global authority on regulatory practices who created the Professional Standards Authority (PSA) and pioneered right-touch regulation. He is a regular Ascend Voices contributor. 
