Is AI a good thing or a bad thing? There seems to be a lot of debate at the moment. But perhaps that is a stupid question, as no ethical issue is quite so simple and binary. If there were a right or wrong answer, we wouldn’t need to discuss it. So, I decided to ask ChatGPT the question, “Is AI a good thing or a bad thing?” This is what it replied:
“As an AI language model, I don’t have personal opinions or beliefs. However, AI technology has the potential to do a lot of good, such as improving medical care, increasing efficiency and accuracy in industries, and facilitating scientific research. At the same time, like all technologies, AI can be misused, and there are concerns about things like bias in algorithms, job displacement, and privacy and security issues. Ultimately, it is up to human beings to use AI responsibly and to ensure that its benefits are shared fairly.”
It is a splendidly balanced, but one might say evasive, reply. It begins by saying that “I don’t have personal opinions or beliefs,” then goes on to give its opinion that “AI technology has the potential to do a lot of good” while at the same time “AI can be misused.” (We’ll come back to the use of ‘I’ and ‘personal’ later.)
Clearly it does have an opinion; its opinion is that AI has the potential to do a lot of good and that it may be misused. That is so banal as to be meaningless. It could have been said about every new technology, from the invention of the printing press in the 15th century to the Jacquard loom in 1801 to the television or the mobile phone. So, I asked ChatGPT another question: “Why is AI a good thing?” Its answer was much as before: “AI has the potential to greatly enhance our productivity, efficiency and overall quality of life.”
When I told it that I disagreed, its rejoinder was that “Overall my purpose as an AI assistant is to be helpful, reliable and ethical and my developers take adequate measures to ensure that I operate in a fair and responsible manner.”
I must admit I am impressed by its ability to engage with sequential questions as though it were having “a constructive discussion” (its phrase), but it is not really a conversation because each answer is independent of the previous one. Currently, it doesn’t engage with the questioner’s thought process, only with its own database – one question or comment at a time.
Of course, chatbots are only a subset of AI, and ChatGPT only one version of a chatbot. It is certainly a great improvement on the bot which fails to answer my questions about my bank account and is fronted by a cartoon face called Ruby the Robot. ChatGPT will be right in its prediction, I expect, that AI will support improvements to health care and research and will increase efficiency in some transactions while replacing some people’s jobs. What it didn’t tell me is that it will also provide information which is incorrect, incomplete, or misleading, and that human brains will have to continue to make judgments as to its value.
It is the use of ‘I’ in ChatGPT’s answers that bothers me. It is not a person. It tells me it has no “personal opinions or beliefs.” Correct. But it claims a personal identity in the use of ‘I’, and its designers have deliberately chosen to imply by the use of ‘I’ that it is an intentional, unique entity rather than a massive interactive database. That illusion of individuality rather contradicts, to my mind, ChatGPT’s statement that its “developers take adequate measures to ensure that I (sic) operate in a fair and responsible manner.” It is not fair and responsible to mislead users into thinking they are engaging with a thinking individual rather than a lifeless machine.
In the end, we must agree with ChatGPT (which claims to have no opinions for us to agree with) that “it is up to human beings to use AI responsibly and to ensure that its benefits are shared fairly.” This sounds like the amorality of the technician: I just make this stuff; it’s up to you how you use it.
That dilemma was why I asked the question. It requires a very thoughtful answer – which I didn’t get. I didn’t get it because humans don’t know the answer and the chatbot only regurgitates what human beings have told it. Is AI a good thing or a bad thing? I still don’t know. Do you?
Harry Cayton is a sought-after global authority on regulatory practices who created the Professional Standards Authority (PSA) and pioneered right-touch regulation. He is a regular Ascend Voices contributor.