Do chatbots understand you? Exploring bias and discrimination in AI
To what extent does AI have the potential to exhibit bias and discrimination? And how might humans implement the technology in a way that curbs these tendencies? In his latest piece for Ascend, Rick Borges discusses the ethical implications of widespread AI implementation and explores what could be done to address them.


As consumers of financial products and services in the digital age, we can now interact with providers, such as banks, mostly through their internet banking or mobile apps. This is convenient for most of us, but sometimes a phone call is the only way to resolve an issue or make a request, and some of us may not be able to use digital channels at all.

Nowadays, when I call my bank, a bot answers the phone to triage my call and either direct me to the right information (and, ideally, end the call there) or connect me to the right customer service team. The machine asks me to state the reason I am calling, but often it does not understand me – I suspect because of my accent as a non-native English speaker. It will ask me to repeat myself at least twice, fail again, and then offer me options to select from.

I must admit that this whole human-bot interaction at times makes me feel frustrated, slightly excluded, and a bit awkward as I try to mimic what I believe to be the bot’s “expected” customer accent. When I eventually manage to speak to another human being, I am understood immediately, treated with care and respect, and my issue is dealt with promptly. So, is there something wrong with the machine?

Artificial intelligence powers the automated helplines and chatbots used by banks (and others) to support customers, who no longer speak to a human but instead interact with a machine. These algorithms use machine learning and natural language processing, with the bots learning from records of past conversations to come up with appropriate responses, as explained in the helpful article “Machine Learning, Explained,” by Sarah Brown at the Massachusetts Institute of Technology Sloan School of Management. She states that “machine learning is a subfield of artificial intelligence, which is broadly defined as the capability of a machine to imitate intelligent human behavior. Artificial intelligence systems are used to perform complex tasks in a way that is similar to how humans solve problems.”
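To make the idea concrete, here is a deliberately toy sketch (not any bank’s actual system, and far simpler than a real NLP model) of how a helpline bot might use past conversations as labeled examples to classify a caller’s intent. The intents, phrases, and fallback behavior are illustrative assumptions:

```python
# Toy intent classifier: past conversations become labeled training
# examples, and a new utterance is matched to the example with the
# greatest word overlap. Real systems use trained language models.
from collections import Counter

TRAINING = [
    ("i lost my card", "card_lost"),
    ("my card was stolen", "card_lost"),
    ("what is my account balance", "balance"),
    ("how much money do i have", "balance"),
    ("i want to dispute a transaction", "dispute"),
    ("this charge is wrong", "dispute"),
]

def tokens(text):
    """Lowercased bag of words for a crude similarity measure."""
    return Counter(text.lower().split())

def classify(utterance, training=TRAINING):
    """Return the intent of the training example sharing the most words."""
    words = tokens(utterance)

    def overlap(example):
        return sum((tokens(example[0]) & words).values())

    best = max(training, key=overlap)
    if overlap(best) == 0:
        # Nothing matched: the bot "does not understand" and re-prompts,
        # exactly the frustrating loop described above.
        return "repeat_please"
    return best[1]
```

A model this crude fails the moment a caller’s wording (or the transcription of their accent) drifts from the training phrases, which hints at why bots trained on unrepresentative conversation records understand some callers much better than others.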

According to IBM, “Natural Language Processing (NLP) refers to the branch of computer science — and more specifically, the branch of artificial intelligence or AI — concerned with giving computers the ability to understand text and spoken words in much the same way human beings can.” If this is the case, the explanation for some chatbots not understanding me might lie elsewhere.

In a recent interview, AI ethicist and White House adviser Rumman Chowdhury stated that her “biggest fear is the problems we already see today [in society] manifesting themselves in machine learning models and predictive modelling. And early AI systems already demonstrate very clear societal harm, reflecting bias in society.” There are documented examples of bias in AI, particularly in earlier models, including tools exhibiting racial and gender prejudice that could discriminate against specific groups in society. Chowdhury explained that “this does not go away because we make more sophisticated or smarter systems. All of this is actually rooted in the core data. And it is the same data all of these models are trained on.” In a previous role, Chowdhury and her team built the first enterprise algorithmic bias detection and mitigation tool.

Human-centered AI could be another mitigation to the issues related to algorithmic bias. MIT Sloan senior lecturer Renée Richardson Gosline explains that human-centered AI is the practice of including input from people of different backgrounds, experiences, and lifestyles in the design of AI systems. “If we truly understand how these systems can both constrain and empower humans,” Gosline says, “we will do a better job improving them and rooting out bias.” It is important to have greater transparency around the data and assumptions that feed into AI models as well as a clear understanding of who is accountable for the creation, training, and maintenance of systems.

In a 2020 paper, researchers Marco Lippi, Giuseppe Contissa, and colleagues discussed how AI could be an empowering tool for civil society, consumers, and the consumer agencies created to represent and defend consumer interests. They argue that, from a practical perspective, AI-powered tools could be employed to process large amounts of information (texts, audio-visual data, algorithms) to generate actionable knowledge for consumers. For example, such tools could review lengthy terms of service and privacy notices on websites to identify unfair clauses or areas where consumers could choose to opt out. In the future, the technology could send this information to regulatory agencies in the relevant jurisdiction or, similarly, regulators could have crawlers traversing the web and analyzing each and every terms of service and privacy policy used in a given jurisdiction.
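A minimal sketch of the kind of consumer-empowering tool the paper envisions might simply scan a terms-of-service text for clause patterns that consumer agencies commonly question. The patterns and labels below are illustrative assumptions, not a legal standard, and a production tool would use trained models rather than regular expressions:

```python
# Flag sentences in a terms-of-service text that match patterns often
# associated with potentially unfair clauses. Purely illustrative.
import re

FLAG_PATTERNS = {
    "unilateral change": r"we (may|can) (change|modify|amend) these terms at any time",
    "liability waiver": r"we (are not|shall not be) liable",
    "forced arbitration": r"(binding )?arbitration",
}

def flag_clauses(text):
    """Return (label, sentence) pairs for sentences matching a pattern."""
    findings = []
    # Naive sentence split on end punctuation; a real tool would use an
    # NLP library with proper sentence segmentation.
    for sentence in re.split(r"(?<=[.!?])\s+", text):
        for label, pattern in FLAG_PATTERNS.items():
            if re.search(pattern, sentence, re.IGNORECASE):
                findings.append((label, sentence.strip()))
    return findings
```

Run over an entire jurisdiction’s worth of crawled policies, even a simple flagging pass like this could triage documents for human reviewers at a regulator or consumer agency.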

The ethical implications of the use of AI should be part of the debate in the race to increase the application of this technology in different parts of our lives. Academic research such as Building Ethically Bounded AI (Rossi & Mattei, 2019) and initiatives such as the Partnership on AI, which brings together diverse voices from across the AI community, could help organizations, businesses, consumers, and society reflect on these issues and find a safe and ethical way forward that achieves positive outcomes.

As we all know, the benefits and opportunities brought by AI are considerable, including in the regulation of professionals, businesses, products, and services. To mention just a few examples, AI could help regulators make their processes more efficient, enhance their risk management, and increase their capacity to analyze large datasets to better handle complaints, detect fraud, and prevent harm to the public.

Organizations, in general, will continue to find ways to streamline their business processes by applying AI in their operations. The Bank of England and the Financial Conduct Authority found, in their second survey on the state of machine learning in U.K. financial services, published last year, that “72% of firms that responded to the survey reported using or developing machine learning (ML) applications. These applications are becoming increasingly widespread across more business areas. This trend looks set to continue and firms expect the overall median number of ML applications to increase by 3.5 times over the next three years. The largest expected increase in absolute terms is in the insurance sector, followed by banking.”

Despite the breakdown in communication in my relationship with my bank’s chatbot, I am a technology enthusiast, known for loving a gadget and having Siri and Alexa as part of the family. I fully embrace the use of technology to transform how we do things and to become more efficient and effective in the use of our time. Technology can simplify and improve how organizations operate, providing opportunities for delivering timely and quality services. However, I fully support a human-centered approach to developing, improving, and delivering technology, including AI, where ethical implications and risks are considered and addressed to ensure AI systems are fair, equitable, and transparent, and that accountability can be traced.  


Written by Rick Borges
Rick writes on regulation and related topics in financial services. With extensive experience spanning the financial services and health care sectors, he has acted as an advisor on professional standards and regulation to organizations in the U.K. and internationally.