As consumers of financial products and services in the digital age, we can now interact with providers, such as banks, mostly through their internet banking or mobile apps. This is convenient for most of us, but sometimes calling is the only way to raise an issue or make a request, and some of us cannot use digital channels at all.
Nowadays, when I call my bank, a bot answers the phone to triage my call and either direct me to the right information (and, ideally, end the call there) or connect me to the right customer service team. The machine asks me to say the reason I am calling, but it often does not understand me – I suspect because of my accent, as a non-native English speaker. It will ask me to repeat myself at least twice, fail again, and then offer me a list of options to select from.
I must admit that this whole human-bot interaction at times makes me feel frustrated, slightly excluded, and a bit awkward as I try to mimic what I believe to be the bot’s “expected” customer accent. When I eventually manage to speak to another human being, I am understood immediately, treated with care and respect, and my issue is dealt with promptly. So, is there something wrong with the machine?
Artificial intelligence powers the automated helplines and chatbots that banks (and others) use to support their customers, who speak not to a human but to a machine. These systems use machine learning and natural language processing, with the bots learning from records of past conversations to come up with appropriate responses, as explained in the helpful article “Machine Learning, Explained” by Sarah Brown at the Massachusetts Institute of Technology Sloan School of Management. She states that “machine learning is a subfield of artificial intelligence, which is broadly defined as the capability of a machine to imitate intelligent human behavior. Artificial intelligence systems are used to perform complex tasks in a way that is similar to how humans solve problems.”
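To make the idea concrete, here is a minimal sketch of the kind of intent classifier that might sit behind such a phone bot: a model trained on past utterances labeled with call reasons, which then routes a new caller’s request. The training examples and intent labels are invented for illustration; production systems are far more sophisticated and start from speech rather than text.

```python
# Minimal sketch of intent routing, the kind of model behind a phone bot.
# The transcripts and intent labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

past_calls = [
    ("I lost my debit card", "card_lost"),
    ("What is my current balance", "balance_inquiry"),
    ("I want to dispute a charge", "dispute"),
    ("My card was stolen yesterday", "card_lost"),
    ("How much money do I have", "balance_inquiry"),
    ("There is a payment I don't recognize", "dispute"),
]
texts, intents = zip(*past_calls)

# TF-IDF features plus a linear classifier: a simple stand-in for the
# statistical models these systems learn from records of past conversations.
router = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
router.fit(texts, intents)

print(router.predict(["someone took my card"])[0])  # expected to route to "card_lost"
```

A model like this can only recognize what resembles its training data, which is exactly where the trouble described next begins.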
According to IBM, “Natural Language Processing (NLP) refers to the branch of computer science — and more specifically, the branch of artificial intelligence or AI — concerned with giving computers the ability to understand text and spoken words in much the same way human beings can.” If this is the case, the explanation for some chatbots not understanding me might lie elsewhere.
In a recent interview, AI ethicist and White House adviser Rumman Chowdhury stated that her “biggest fear is the problems we already see today [in society] manifesting themselves in machine learning models and predictive modelling. And early AI systems already demonstrate very clear societal harm, reflecting bias in society.” There are documented examples of bias in AI, particularly in earlier models, including racist and sexist tools that could discriminate against specific groups in society. Chowdhury explained that “this does not go away because we make more sophisticated or smarter systems. All of this is actually rooted in the core data. And it is the same data all of these models are trained on.” In a previous role, Chowdhury and her team built the first enterprise algorithmic bias detection and mitigation tool.
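Detecting this kind of bias often starts with simple measurements. The sketch below, using made-up numbers, shows one common first check: comparing a system’s outcome rates across groups, a gap sometimes called the demographic parity difference. The predictions and group labels are assumptions for illustration only.

```python
# Minimal sketch of a basic bias check: compare a system's outcome rates
# across groups. All numbers below are invented for illustration.
from collections import defaultdict

predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]   # e.g., 1 = "caller understood on first try"
groups      = ["native", "non_native", "native", "native", "non_native",
               "native", "non_native", "non_native", "native", "non_native"]

totals, positives = defaultdict(int), defaultdict(int)
for pred, group in zip(predictions, groups):
    totals[group] += 1
    positives[group] += pred

rates = {g: positives[g] / totals[g] for g in totals}
print(rates)                                      # per-group success rates
print(max(rates.values()) - min(rates.values()))  # disparity: 0 would mean parity
```

A real bias audit goes much further than a single rate comparison, but a gap like this is often the first red flag that the training data underserves one group.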
Human-centered AI could be another way to mitigate the problems of algorithmic bias. MIT Sloan senior lecturer Renée Richardson Gosline explains that human-centered AI is the practice of including input from people of different backgrounds, experiences, and lifestyles in the design of AI systems. “If we truly understand how these systems can both constrain and empower humans,” Gosline says, “we will do a better job improving them and rooting out bias.” It is important to have greater transparency around the data and assumptions that feed into AI models, as well as a clear understanding of who is accountable for the creation, training, and maintenance of these systems.
In a 2020 paper, researchers Marco Lippi, Giuseppe Contissa, and colleagues discussed how AI could be an empowering tool for civil society, consumers, and the consumer agencies created to represent and defend consumer interests. They argue that, from a practical perspective, AI-powered tools could be employed to process large amounts of information (texts, audio-visual data, algorithms) and generate actionable knowledge for consumers. For example, a tool could review lengthy terms of service and privacy notices on websites to identify unfair clauses or areas where consumers could choose to opt out. In the future, such a tool could send its findings to regulatory agencies in the relevant jurisdiction; similarly, regulators could have crawlers traversing the web and analyzing every terms-of-service document and privacy policy used in a given jurisdiction.
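As a rough illustration of what such a tool might do, the sketch below flags sentences in a terms-of-service text that match patterns associated with potentially unfair clauses. The researchers’ actual approach trains classifiers on annotated contracts; this keyword heuristic, and the patterns in it, are simplified assumptions of my own.

```python
# Minimal sketch of unfair-clause flagging in a terms-of-service text.
# The categories and regex patterns are illustrative assumptions, not the
# trained classifiers the researchers describe.
import re

UNFAIR_PATTERNS = {
    "unilateral_change": r"we (may|can) (change|modify|amend) .* at any time",
    "liability_waiver":  r"we (are not|shall not be) (liable|responsible)",
    "forced_arbitration": r"(binding )?arbitration",
}

def flag_clauses(terms_of_service: str) -> list[tuple[str, str]]:
    """Return (sentence, category) pairs for sentences matching a pattern."""
    hits = []
    for sentence in re.split(r"(?<=[.;])\s+", terms_of_service):
        for category, pattern in UNFAIR_PATTERNS.items():
            if re.search(pattern, sentence, flags=re.IGNORECASE):
                hits.append((sentence.strip(), category))
    return hits

sample = ("We may modify these terms at any time without notice. "
          "Any dispute will be resolved through binding arbitration.")
print(flag_clauses(sample))  # flags both sentences, with their categories
```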
The ethical implications of the use of AI should be part of the debate in the race to expand the application of this technology across different parts of our lives. Academic research such as “Building Ethically Bounded AI” (Rossi & Mattei, 2019) and initiatives such as the Partnership on AI, which brings together diverse voices from across the AI community, could help organizations, businesses, consumers, and society reflect on these issues and find a safe and ethical way forward that achieves positive outcomes.
As we all know, the benefits and opportunities brought by AI are considerable, including in the regulation of professionals, businesses, products, and services. To mention just a few examples, AI could help regulators make their processes more efficient, enhance their risk management, and increase their capacity to analyze large datasets to better handle complaints, detect fraud, and prevent harm to the public.
Organizations, in general, will continue to find ways to streamline their business processes by applying AI in their operations. The Bank of England and the Financial Conduct Authority found, in their second survey on the state of machine learning in U.K. financial services last year, that “72% of firms that responded to the survey reported using or developing machine learning (ML) applications. These applications are becoming increasingly widespread across more business areas. This trend looks set to continue and firms expect the overall median number of ML applications to increase by 3.5 times over the next three years. The largest expected increase in absolute terms is in the insurance sector, followed by banking.”
Despite the breakdown in communication in my relationship with my bank’s chatbot, I am a technology enthusiast, known for loving a gadget and having Siri and Alexa as part of the family. I fully embrace the use of technology to transform how we do things and to become more efficient and effective in the use of our time. Technology can simplify and improve how organizations operate, providing opportunities for delivering timely and quality services. However, I fully support a human-centered approach to developing, improving, and delivering technology, including AI, where ethical implications and risks are considered and addressed to ensure AI systems are fair, equitable, and transparent, and that accountability can be traced.