Banks divided over powerful, yet problematic, AI tools

Generative artificial intelligence is taking off at some banks, but integrating it raises challenges around accuracy and regulatory compliance.
Japan’s third-largest bank, Mizuho Bank, rolled out Microsoft-backed OpenAI’s technology to its entire 45,000-strong Japanese workforce in June. Mizuho is actively encouraging staff in its core lending units to test OpenAI tools such as ChatGPT to see where they can put the technology to good use.
AI is already used in the banking sector; generative AI – technology that can produce content – is the next frontier. The 60 largest North American and European banks employ 46,000 people in AI development, data engineering and governance and ethics roles, with as many as 100,000 global banking roles involved in bringing it to market, according to Alexandra Mousavizadeh, CEO and co-founder of Evident, an independent intelligence platform that aims to bring transparency to the adoption of AI in business.
She says: “Two in five of these AI employees have started their current roles since January 2022, demonstrating the pace at which new AI-related roles are being created.”
How compliance teams use artificial intelligence
Alvin Tan, principal consultant at technology and management consultancy Capco, says: “The application of AI is especially beneficial either where existing control processes are highly manual and involve a significant amount of unstructured [and] textual information processing, or where the compliance domain itself involves a lot of unstructured information.”
Key applications include guidance on compliance and decision-making, such as recommendations around regulation or internal policy. AI is already used in some compliance systems, particularly anti-money laundering, where it can analyse customers’ transaction behaviour and predict future patterns.
Such a system becomes sensitive to even subtle shifts in behaviour and can flag suspicious activity that traditional anti-money laundering systems might miss, says Nick Henderson-Mayo, director of learning and content at compliance e-learning and software provider, VinciWorks.
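To make that concrete, the sketch below shows one common way such behavioural monitoring can be built: an anomaly detector trained on a customer’s normal transaction history, which then scores fresh activity. It uses scikit-learn’s IsolationForest as a stand-in for whatever models banks actually deploy; the features, figures and thresholds are illustrative assumptions, not a real AML system.

```python
# A minimal sketch of behaviour-based transaction monitoring.
# All feature names and figures below are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-transaction features: amount, hour of day,
# and days since the customer's previous transaction.
normal_history = np.column_stack([
    rng.normal(120, 30, 500),  # typical amounts around 120
    rng.normal(13, 2, 500),    # mostly daytime activity
    rng.normal(3, 1, 500),     # roughly one transaction every few days
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_history)

# A subtle shift: a familiar amount, but at 3am and in a rapid burst.
new_activity = np.array([[130.0, 3.0, 0.1]])
score = model.decision_function(new_activity)  # lower = more anomalous
flagged = model.predict(new_activity) == -1    # -1 marks an outlier

print(f"anomaly score {score[0]:.3f}, flagged: {flagged[0]}")
```

The point of the example is the shape of the approach, not the specific algorithm: the model learns what “normal” looks like for one customer, so even small deviations stand out without a hand-written rule for each scenario.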
He says: “AI can also enhance the customer due diligence and know your customer processes, allowing both of these to be carried out faster – and in more detail.”
Looking beyond compliance, well-trained AI models can also be used to support a variety of functions within financial services. “A tailored model can be used to assist finance teams in auditing financial figures, by detecting inconsistencies in balance sheets or creating tax reports. In addition, AI can be used to make decisions that align with previous manual analysis, so could prove invaluable for AML activities, information verification and underwriting,” says Rav Hayer, managing director at management consultant Alvarez & Marsal’s digital practice.
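As a toy illustration of the balance-sheet consistency checks Hayer describes, the snippet below applies the basic accounting identity (assets = liabilities + equity) to invented figures. A tailored model would learn far richer patterns across filings, but the underlying test is the same kind of cross-check.

```python
# Toy consistency check for the accounting identity
# assets = liabilities + equity. All figures are invented.
from decimal import Decimal

balance_sheet = {
    "total_assets": Decimal("1250000.00"),
    "total_liabilities": Decimal("900000.00"),
    "shareholders_equity": Decimal("345000.00"),  # off by 5,000
}

gap = balance_sheet["total_assets"] - (
    balance_sheet["total_liabilities"]
    + balance_sheet["shareholders_equity"]
)
if gap != 0:
    print(f"Inconsistency detected: balance sheet out by {gap}")
```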
The use cases and flaws of generative AI
Generative AI can create detailed summaries of customer conversations for inclusion in a bank’s compliance system, and financial services institutions are already training AI models to support such tasks.
John Goodale, executive director and head of Europe at business processing outsourcing specialist Ubiquity, says: “From a compliance perspective, the most useful generative AI application right now is conversation intelligence – extracting insights from conversations to aid compliance monitoring.”
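A minimal sketch of what such conversation intelligence might look like in practice, assuming access to OpenAI’s chat completions API via its official Python SDK; the model name, prompt and transcript are illustrative, and a production system would add redaction, audit logging and human review:

```python
# Sketch of conversation intelligence for compliance monitoring,
# assuming the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY set
# in the environment. Model choice and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

transcript = """Agent: Thanks for calling. How can I help?
Customer: I'd like to move 9,500 pounds to a new overseas account
today, and another 9,500 tomorrow, if that's possible."""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative; any chat model would do
    messages=[
        {"role": "system",
         "content": ("Summarise this customer call for a compliance "
                     "reviewer and list any potential red flags, such "
                     "as structuring payments below reporting "
                     "thresholds.")},
        {"role": "user", "content": transcript},
    ],
)
print(response.choices[0].message.content)
```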
Generative AI can also be used to monitor transactions in real time, identifying potential compliance violations and helping banks prevent fraud and other financial crimes, says Filip Verloy, field chief technology officer at API security platform, Noname Security.
But he warns: “Generative AI can generate biased answers, especially if they are trained on data that is biased. This could lead to discrimination or other forms of unfair treatment.”
The pace of development of generative AI is expected to accelerate, but the technology is riddled with issues such as “hallucinations” – outputs that are incorrect, nonsensical or simply made up. New York lawyer Steven A. Schwartz was pulled up in front of a judge in June after submitting a legal brief, researched with ChatGPT, that cited several fictitious court cases.
Goodale says: “For today’s banks and fintechs, the hallucinations, incomplete data, potential bias and the need to manage regulatory and consumer expectations mean AI needs human oversight.”
Some banks remain cautious
Several major banks are wary: JPMorgan, Citigroup, Goldman Sachs, Bank of America, Deutsche Bank and Wells Fargo have all reportedly barred staff from using ChatGPT.
One barrier that still hinders generative AI is the limitation on available information – in particular, the extent to which beneficial ownership registers are non-public or ownership structures are opaque. ChatGPT itself has limited knowledge of the world and of events after 2021.
“AI can only work with information that is available and accessible on the internet and which it is permitted to use,” says Alice Kemp, a senior associate and employed barrister at international law firm RPC. “Put simply, if the information is not there, the answer won’t be found.”
However, she says there are high hopes that ChatGPT will be able to assist with cross-jurisdictional and cross-language information matching, which is currently a challenge for most compliance professionals.
Integrating generative AI into compliance processes still faces a serious challenge: the availability and quality of data. While AI algorithms excel at analysing vast datasets, their effectiveness depends on data quality and diversity.
Keith Berry, general manager of Moody’s Analytics KYC Solutions, says: “Insufficient or incomplete data can lead to biased and inaccurate results, compromising AI-led AML efforts. Financial institutions can overcome this by collaborating with external data providers, industry networks, and regulators to enrich the AI data ecosystem.”