Generative AI: An opportunity or a ‘grey rhino’ lurking?

By Gary Lynam

It’s often easy to be wise after a calamity, with the benefit of hindsight uncovering hidden issues and seemingly unforeseeable shortcomings. The demise of Credit Suisse was a classic, avoidable “grey rhino” event: a highly probable, high-impact threat that was nevertheless neglected.

Investigations concluded that most of the contributing factors were predictable but ignored until it was too late, and that there had been plenty of other red flags along the way.

Banks should heed the warning signs of a grey rhino and put risk management procedures in place to avert the next one: generative artificial intelligence.

A disconnect between perception and reality

Although, on the face of it, Credit Suisse had the appropriate risk management policies and procedures in place, senior management allowed a growing disconnect between its perceived attitude to risk and its actual conduct. As a result, Credit Suisse operated very differently to its stated cultural and policy objectives. In fact, bonus structures actively encouraged staff to pursue high-risk commercial gains without facing the consequences.

This case highlighted serious concerns about attitudes to corporate risk management procedures within financial services, and fuelled speculation about what other grey rhinos are lurking for firms that either lack the right measures or fail to pay proper attention to them.

Gen AI: a grey rhino in waiting

A basic premise of every organisation’s risk management strategy should be to prevent grey rhino scenarios from unfolding. So, with Gen AI transforming the way financial companies are managing their key business functions, should it also be considered a grey rhino lying in wait?

A Bank of England survey published in October 2022 found that 72% of the firms surveyed were using or developing AI applications, with the number of applications expected to triple over the next three years. AI applications are also becoming more advanced and more deeply embedded in day-to-day operations, with nearly eight out of 10 firms in the later stages of development.

Banks looking to adopt AI will need to comply with new regulations currently being drafted, including the EU’s AI Act and the UK’s Data Protection and Digital Information Bill. While the UK’s final bill was not radically different from what was first proposed last year, the new version seeks to soften some of the definitions linked to personal data and to reduce compliance burdens by limiting certain record-keeping requirements.

More significantly, with the UK government aiming to become the AI hub of the future through a pro-innovation regulatory framework, one wonders if there is a degree of risk acceptance here. To let the sector prosper, UK regulators appear prepared to be tested from time to time, affording some flexibility as organisations adopt AI and machine learning models.

However, how AI is deployed under the UK’s new Consumer Duty has the potential to become a grey rhino. With the duty’s focus on delivering positive outcomes for customers, the use of AI to make decisions based on historic data could already be storing up problems for the future. The Financial Conduct Authority has promised severe financial penalties for organisations that don’t adhere to the rules, including where it finds evidence of harm, or even the mere risk of harm, to customers.

While AI tools offer a massive opportunity to automate and speed up time-consuming tasks such as the analysis and processing of customer data, the results can only be as good as the source data. If that information is incomplete, inaccurate, or biased in any way, it will skew the algorithms making critical decisions, risking harm to customers or failure to deliver positive outcomes.
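To make that concrete, here is a minimal sketch, assuming entirely hypothetical field names and thresholds, of the kind of gate a firm might place between source data and a decision model: a completeness check on incoming records and a simple comparison of outcomes across customer groups.

```python
# Minimal sketch, not a real control framework: a data-quality gate run
# before customer records reach a decision model. All field names
# (income, age, group) and thresholds here are hypothetical.

from dataclasses import dataclass

@dataclass
class CustomerRecord:
    income: float | None  # None represents an incomplete source field
    age: int | None
    group: str            # e.g. a segment used to check for disparate outcomes

def completeness(records: list[CustomerRecord]) -> float:
    """Fraction of records with no missing fields."""
    if not records:
        return 0.0
    complete = sum(1 for r in records if r.income is not None and r.age is not None)
    return complete / len(records)

def approval_rate_by_group(records: list[CustomerRecord],
                           decisions: list[bool]) -> dict[str, float]:
    """Approval rate per group, to surface outcome disparities."""
    totals: dict[str, int] = {}
    approved: dict[str, int] = {}
    for record, decision in zip(records, decisions):
        totals[record.group] = totals.get(record.group, 0) + 1
        approved[record.group] = approved.get(record.group, 0) + int(decision)
    return {g: approved[g] / totals[g] for g in totals}

# Gate the model run on data quality and outcome parity.
records = [
    CustomerRecord(income=32_000, age=41, group="A"),
    CustomerRecord(income=None, age=29, group="B"),   # incomplete record
    CustomerRecord(income=55_000, age=37, group="B"),
]
decisions = [True, False, True]  # stand-in for model outputs

if completeness(records) < 0.95:
    print("Data-quality gate failed: too many incomplete records")

rates = approval_rate_by_group(records, decisions)
if max(rates.values()) - min(rates.values()) > 0.2:
    print(f"Outcome disparity across groups: {rates}")
```

Checks like these are deliberately simple; the point is that they run before and after the model, so poor source data is caught rather than silently shaping decisions.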

Bias and unintended outcomes from AI have been on the regulatory radar for some time. As far back as 2021, the UK’s Competition and Markets Authority released a report on the use of algorithms that drew attention to a range of harms. It gave many examples of behaviour that could be considered unfair or harmful to consumers, such as pricing products based on what the algorithm thinks individual consumers will pay rather than on the value of the product. Others are theoretical but plausible, such as the potential for algorithms at multiple competing firms to react to each other in a way that constitutes price fixing.
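That second scenario is surprisingly easy to reproduce. The toy simulation below, invented for illustration rather than taken from the CMA report, shows two independent price-matching rules ratcheting prices upward in lockstep with no coordination at all.

```python
# Toy simulation, not from the CMA report: two independent price-matching
# algorithms can ratchet prices upward without any explicit agreement.

def next_price(own: float, rival: float, nudge: float = 0.5) -> float:
    """Match the rival when they are dearer; otherwise test a small rise."""
    return rival if rival > own else own + nudge

price_a, price_b = 10.0, 10.0
for step in range(6):
    price_a = next_price(price_a, price_b)
    price_b = next_price(price_b, price_a)
    print(f"step {step}: firm A = {price_a:.2f}, firm B = {price_b:.2f}")
# Both prices climb in lockstep, even though neither firm "agreed" anything.
```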

Stronger risk management leadership 

To guard against making inaccurate or biased decisions that impact consumers, organisations need to develop robust risk management policies to govern how data is processed by AI tools. Rigorous validation, testing, and audit processes, along with continuous monitoring, are vital. Firms must also establish mechanisms for addressing consumer grievances and demonstrate transparency over AI processes and the outcomes they deliver.
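As one illustration of what continuous monitoring might mean in practice, the sketch below (a simplified example with assumed thresholds, not a reference implementation) logs every automated decision for audit and raises an alert when the live approval rate drifts from a baseline fixed at validation time.

```python
# Simplified illustration of continuous monitoring around an AI decision
# point: every decision is logged for audit, and the live approval rate
# is compared against a baseline fixed at model validation. Thresholds
# and field names are hypothetical.

import json
import time
from collections import deque

BASELINE_APPROVAL_RATE = 0.62   # assumed rate from model validation
DRIFT_TOLERANCE = 0.10          # alert if the live rate drifts beyond this
WINDOW = 500                    # decisions in the rolling window

recent: deque[bool] = deque(maxlen=WINDOW)

def alert(message: str) -> None:
    # Stand-in for paging a risk team or opening an incident ticket.
    print("ALERT:", message)

def record_decision(customer_id: str, features: dict, approved: bool) -> None:
    """Append an audit-log entry and update the drift monitor."""
    entry = {
        "ts": time.time(),
        "customer_id": customer_id,
        "features": features,   # inputs kept for explainability reviews
        "approved": approved,
    }
    with open("decision_audit.log", "a") as f:
        f.write(json.dumps(entry) + "\n")

    recent.append(approved)
    live_rate = sum(recent) / len(recent)
    if len(recent) == WINDOW and abs(live_rate - BASELINE_APPROVAL_RATE) > DRIFT_TOLERANCE:
        alert(f"Approval rate {live_rate:.2f} has drifted from baseline "
              f"{BASELINE_APPROVAL_RATE:.2f}; trigger a model review")

# Example: record one automated decision.
record_decision("C-1042", {"income": 48_000, "tenure_years": 3}, approved=True)
```

In production this would feed a model risk dashboard rather than a print statement, but the principle is the same: log everything, compare against a validated baseline, and escalate on drift.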

Without stronger risk management leadership to put these policies in place and abide by them, the financial industry is likely to encounter more grey rhino events as the AI revolution takes hold and accompanying compliance regulations are enforced.

Gary Lynam, director of enterprise risk management advisory at Protecht  
