The free-market economist Milton Friedman said, “We know who benefits from regulation when we see who argues for it.” This came to my mind when I read in a July issue of Ascend’s weekly regulatory news that tech companies are pushing for self-regulation. Of course they are.
The phenomenon of regulatory capture is well known. It means that the industry being regulated infiltrates its regulator and influences it to act in the industry's interest rather than in the public's. Industries and professions have the motivation, resources, and expertise to take over their regulators.
The public has none of those things. We see regulatory capture everywhere, particularly in those professional regulators that are still also trade associations, or where public utilities have been turned into private monopolies, as with the U.K. water companies. Those companies have been drained by their shareholders (including Canadian public-sector pension funds) of their reliable flow of monopoly income, while the flow of clean water has become steadily less reliable. The new chief executive of the largest company, Thames Water, is the former chief executive of the water regulator, OFWAT.
The tech companies, it seems, are aiming to get ahead of the game. With their pre-emptive strike, they show they don't merely want to capture their future regulator; they want to own it.
All around the world, we see jurisdictions struggling with how to regulate digital media in general and AI in particular. The U.K. government has already spent nearly seven years trying to get an immensely complicated and contested Online Safety Bill through Parliament. Percipiently, Friedman also observed that "the Internet is going to be one of the major forces for reducing the role of government."
Friedman died in 2006, so he never saw the growth of social media, but he rightly recognized that governments would struggle to control this boundaryless, leaderless, accessible, diverse, amoral space, and that people would embrace it enthusiastically, for good or ill.
Global tech companies have economic power greater than that of many countries, and being multi-jurisdictional, they need not care much about regulation in individual countries that contribute little to their income. Attempts by the U.K. government to break end-to-end encryption of messaging have been met by WhatsApp, Apple, and others with threats to withdraw their services from the U.K. Similarly, Canada's law forcing Google and Facebook to pay news providers has led both to say they will stop carrying news, as indeed happened in Australia until a compromise was reached.
When governments start to take on the regulation of AI, chatbots, and self-managing robots, they will face unprecedented challenges and unprecedented opposition. I don't think we are even clear about who or what or where to regulate, or for what purpose. Are we regulating to protect jobs, to safeguard the truth, to prevent crime, or to protect political systems and local economies? Attempts to tie down the giant that AI will grow into are no more likely to succeed than the Lilliputians' attempts to tie down Gulliver.
It seems to me that instead of trying to regulate AI as though it were a single thing, we should identify the specific risks of harm that AI creates and target our regulatory approaches at preventing or mitigating those harms. We can probably do a lot by applying existing regulatory frameworks rather than creating an entirely new one.
So, the first question is, as so often, what harm are we trying to prevent? And the second, what outcome do we want? Fail to define these properly and regulation may create greater harms than those it is intended to prevent. As John Locke recognized more than three centuries ago, regulation is prone to unintended consequences.
Data protection legislation is an example of an international approach that has worked reasonably well. It is similar, though not identical, across jurisdictions, and IT companies have adapted to the differing requirements to maintain their business, for instance by setting up data storage in Europe to shield European citizens' data from the U.S. Patriot Act. The economic model of social media rests on a symbiotic relationship between free services and the value of users' data. This may be one area where regulation can be developed and extended.
Similarly, existing employment legislation might be used to protect jobs, or existing (if still immature) approaches to hate speech and fake news might be extended.
Artificial Intelligence has much to offer for good as well as for harm. In my opinion, regulation needs to focus on the people who make it and the people who use it. They have moral agency; they make choices; they must take responsibility and be held accountable.
So, my contribution to the debate about regulating artificial intelligence is this: instead of trying to regulate an entire international industry, we should be precise about the harms we are trying to prevent and direct existing tools towards mitigating them. In other words, we should regulate the outcomes of AI, not its inputs. The challenge is that technological advances have always run ahead of regulators.
Harry Cayton is a sought-after global authority on regulatory practices who created the Professional Standards Authority (PSA) and pioneered right-touch regulation. He is a regular Ascend Voices contributor.