From Frankenstein to Siri: Accountability in the era of automation
As AI advances in sectors from health care to engineering, who will be held accountable if it causes harm? And as human decision-makers are replaced by algorithms in more situations, what will happen to uniquely human variables like empathy and compassion? Harry Cayton explores these questions in his latest article.


As artificial intelligence slowly takes over many regulatory processes – starting perhaps with registration, identity, and qualification checks, then moving on to analysis of complaints and discipline data – we should see great improvements in efficiency and possibly cost reduction.

At the same time, AI will also be moving further into health care than it already has: reading digital scans more accurately than humans ever can, replacing surgeons with computer-controlled robots, and eliminating face-to-face consultations for diagnosis. In engineering, architecture, and legal services, AI is already playing significant (if mainly analytical) roles.

It is not fanciful to imagine a future regulatory complaint about an error by a robot or software program, but who will be accountable if AI causes harm? The person who wrote the software, the maker of the hardware it ran on, the organization that used it, or the human who switched it on? I can only imagine the complexity of determining responsibility in such a situation. One thing we can be sure of is that AI will not be responsible, because AI is not an entity – at least not yet.

Despite the misleading efforts of those who promote AI assistants to us – giving them names like Alexa, Siri, or Claude and having them say “I” when they answer, as though they were people – AI is the product of a machine, not a person. Despite its name, it has no self. Up to now we have been able to hold the humans in charge of a machine responsible for a failure that causes harm, but if the machine runs itself, acts by itself, and thinks for itself, who is the responsible person?

As AI moves deeper into more and more areas of activity, displacing humans as it goes, it will no doubt eliminate most errors, but what will happen to judgment, empathy, and compassion, those human variables which are interpretative, intuitive, and instinctive rather than rational?

There is much discussion at the moment about “compassionate regulation,” or as Zubin Austin simply puts it, “kindness.” The contrast here is between artificial intelligence and natural intelligence. The latter includes emotion, insight, and imagination. Without those elements AI will never replicate the way we humans make our judgments and, no doubt, our mistakes.

AI does, of course, learn; that’s why it is so powerful. Could it learn to replicate feelings? It won’t experience those feelings, but it may learn to take into its algorithms the kinds of feelings that a human decision-maker might have. We know that AI is already capable of something that looks like imagination, because it is able to invent fake references for the articles it writes for students cheating on their exams. I don’t think it is yet able to express compassion toward the person harmed, as well as toward the person who caused the harm, as a human decision-maker can.

I think humans will be needed for a long time to do the things we humans do: act irrationally and emotionally, show kindness towards others and, sometimes consciously, make mistakes for the sake of a greater good.

There is nothing new about my musings here on this relationship between human and machine. Over 200 years ago, in 1818, Mary Shelley published “Frankenstein.” In her novel, Victor Frankenstein builds a creature in his laboratory based on a new and previously unknown science. The monster he creates is huge, powerful, and (unlike AI) has emotions. The monster attempts to fit into human society but is shunned, which leads it to seek revenge against its creator. In another contrast with AI, the monster has no name. Shelley denies it human identity.

We have humanized AI with a name but no emotions. That contrasts with Shelley’s monster, which had emotions but no name. In her novel it is emotion – the monster’s desire to relate to humans, to be valued – and its rejection by humans which causes the monster to turn against its creator and to destroy him.

I wonder if the real step forward, or maybe backwards, will be if the developers of AI not only create machines under human control that can destroy humans, as military drones are doing in Ukraine and Gaza right now, but create machines that can express pleasure or horror at what they are doing.

If AI ever has feelings, maybe it will be able to practice compassionate regulation on our behalf, but maybe it will prefer to make its own choices. It was the very fact that Frankenstein’s monster had feelings that ultimately made it uncontrollable.

That, then, was fiction. This, now, is not.

Harry Cayton is a sought-after global authority on regulatory practices who created the Professional Standards Authority (PSA) and pioneered right-touch regulation. He is a regular Ascend Voices contributor. 
