The free-market economist Milton Friedman said, “We know who benefits from regulation when we see who argues for it.” This came to my mind when I read in a July issue of Ascend’s weekly regulatory news that tech companies are pushing for self-regulation. Of course they are.
The phenomenon of regulatory capture is well known. It means that the industry being regulated infiltrates its regulator and influences it to act in the industry's interest rather than the public's. Industries and professions have the motivation, resources, and expertise to take over their regulators.
The public has none of those things. We see regulatory capture everywhere, particularly in professional regulators that are still also trade associations, or where public utilities have been turned into private monopolies, as with the U.K. water companies. Those companies have been drained by their shareholders (including Canadian public sector pension funds) of a reliable flow of monopoly income, while the flow of clean water has become increasingly unreliable. The new chief executive of the largest company, Thames Water, is the former chief executive of the water regulator, Ofwat.
The tech companies, it seems, are aiming to get ahead of the game. With their pre-emptive strike, they show they don't merely want to capture their future regulator; they want to own it.
All around the world, we see jurisdictions struggling with how to regulate digital media in general and AI in particular. The U.K. government has already spent nearly seven years trying to get an immensely complicated and contested Online Safety Bill through Parliament. Percipiently, Friedman also observed that "the Internet is going to be one of the major forces for reducing the role of government."
Friedman died in 2006, so he never saw the growth of social media, but he rightly recognized that governments would struggle to control this boundaryless, leaderless, accessible, diverse, amoral space, and that people would embrace it enthusiastically, for good or ill.
Global tech companies have economic power greater than that of many countries, and, being multi-jurisdictional, they need not care very much about regulation in individual countries that contribute little to their income. Attempts by the U.K. government to break end-to-end encryption of messaging have been met by WhatsApp, Apple, and others with threats to withdraw their services from the U.K. Similarly, Canada's law forcing Google and Facebook to pay news providers has led both to say they will stop carrying news, as indeed happened in Australia until a compromise was reached.
When governments start to take on the regulation of AI, chatbots, and self-managing robots, they will face unprecedented challenges and unprecedented opposition. I don't think we are even clear about who, what, or where to regulate, or for what purpose. Are we regulating to protect jobs, to safeguard the truth, to prevent crime, or to protect political systems and local economies? Tying down the giant that AI will grow into is no more likely to succeed than the Lilliputians' attempt to tie down Gulliver.
It seems to me that instead of trying to regulate AI, as though AI were a single thing, we should identify the specific risks of harm that AI creates and target our regulatory approaches at preventing or mitigating those harms. We can probably do a lot by applying existing regulatory frameworks rather than creating an entirely new one.
So, the first question is, as so often, what harm are we trying to prevent? And the second, what outcome do we want? Fail to define these properly and regulation may create greater harms than those it intends to prevent. As John Locke warned, "beware the danger of unintended consequences."
Data protection legislation is an example of an international approach that has worked reasonably well. It is similar, though not identical, across jurisdictions, and IT companies have adapted to the differing requirements to maintain their business, for instance by setting up data storage in Europe to shield European citizens' data from the U.S. Patriot Act. The economic model of social media rests on a symbiotic relationship between free services and the value of users' data. This may be one area where regulation can be developed and extended.
Similarly, existing employment legislation might be used to protect jobs, or existing (if still underdeveloped) approaches to hate speech and fake news might be extended.
Artificial Intelligence has much to offer for good as well as for harm. In my opinion, regulation needs to focus on the people who make it and the people who use it. They have moral agency; they make choices; they must take responsibility and be held accountable.
So, my contribution to the debate about regulating artificial intelligence is this: instead of trying to regulate an entire international industry, we should be precise about the harms we are trying to prevent and direct existing tools towards mitigating them. In other words, we should regulate the outcomes of AI, not its inputs. The problem, of course, is that technological advances have always run ahead of regulators.
Harry Cayton is a sought-after global authority on regulatory practices who created the Professional Standards Authority (PSA) and pioneered right-touch regulation. He is a regular Ascend Voices contributor.