How to combat fraud in banking
Fraud is a growing problem across many industries, and banking is no exception. Whether the threat is internal or external, banking institutions should be aware of the potential risks and sources of fraud within their organizations. That said, with innovations such as robust, governed AI platforms and in-memory computing, banks can reduce the risk of fraud. Those that take the first step and adopt these technologies will have a considerable advantage over the competition.
Fraud: how big of an issue is this really?
In its most recent Global Economic Crime and Fraud Survey, PwC surveyed 5,000 companies and found that nearly half had experienced fraud within the past 24 months. To make matters worse, only 56% of these companies investigated their worst incident, and barely one-third reported it to their board.
For banks in particular, fraud takes many different forms. According to the latest European Central Bank report on card fraud, the total value of fraudulent card transactions alone amounts to €1.8 billion annually.
The most common cases of fraud in banking fall under customer fraud, asset fraud, or accounting/financial statement fraud. Within the last few years alone, several Scandinavian banks have fallen into turmoil and media frenzy due to internal and external fraud that could have been avoided with the proper risk management tools in place.
Figure 1: Types of Fraud in the Banking Industry
Banks are clearly aware that fraud is an ongoing and growing issue. Yet too many fail to properly address the systemic origins of fraud, leading to a build-up of cases and, potentially, significant losses. Others rely on traditional, static, rule-based approaches to identify and flag fraudulent transactions for further manual investigation.
With traditional methods, identifying genuinely fraudulent transactions becomes a manual task performed by the forensic units within financial institutions, which is costly and has a low success rate. These methods are not precise enough and flag too many false positives to be reliable. A high number of false positives, in turn, hurts customer satisfaction and ultimately drives customer churn.
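To see why rule-based screening produces so many false positives, consider a minimal sketch of such a system. The rules, thresholds, and transactions below are purely illustrative, not drawn from any real bank's screening logic:

```python
# A toy version of static, rule-based transaction screening.
# All thresholds and sample transactions are illustrative assumptions.

def flag_transaction(tx):
    """Flag a transaction if it trips any hard-coded rule."""
    rules = [
        tx["amount"] > 10_000,         # unusually large amount
        tx["country"] != tx["home"],   # cross-border transaction
        tx["hour"] < 6,                # unusual hour
    ]
    return any(rules)

transactions = [
    {"amount": 12_500, "country": "DE", "home": "DK", "hour": 14, "fraud": True},
    {"amount": 900,    "country": "SE", "home": "DK", "hour": 3,  "fraud": False},  # night owl, legitimate
    {"amount": 450,    "country": "ES", "home": "DK", "hour": 11, "fraud": False},  # holiday purchase, legitimate
    {"amount": 80,     "country": "DK", "home": "DK", "hour": 10, "fraud": False},
]

flags = [flag_transaction(tx) for tx in transactions]
false_positives = sum(1 for tx, f in zip(transactions, flags) if f and not tx["fraud"])

print(flags)            # [True, True, True, False]
print(false_positives)  # 2 of the 3 flagged transactions are false positives
```

Even in this tiny example, two of the three flagged transactions are legitimate, and each one would land on a forensic analyst's desk for manual review.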
Artificial Intelligence is the answer
With AI and machine learning techniques, banks can use data to predict and prevent future instances of fraudulent activity. In fact, IHS Markit predicts that the business value of AI in the banking industry will sky-rocket to $300 billion by 2030.
In recent years, however, several companies have expressed doubts about deploying such technology. Fewer than 3 in 10 companies strongly agree that they have been able to implement and upgrade technology, citing cost, limited resources, and a lack of systems. Only 25% of institutions are using AI, and 40% of those struggle to find value in it.
What seems to be the problem? According to these same studies, the organizations struggling to find value also struggle with implementing their AI properly.
But how should they be implementing AI?
The challenges of successfully bringing AI models to production
Figure 2: 2021.ai Grace AI Platform
A majority of banks are at the start of their AI journey and have many factors to sort out before moving into production. Many of these organizations are under the false impression that AI is too difficult to implement and requires an entirely new set of employees to get off the ground.
When implementing AI into the banking industry, banks tend to struggle in these specific areas:
- Organization: leadership and hiring
- Model risk and life-cycle management
- Model documentation
Leadership and hiring
When it comes to hiring and organizational change, banks should recognize that they already have C-level employees across departments who can each take on a piece of the responsibility for the organization's AI and AI governance charter. These roles must work together to monitor a model's life-cycle. To learn more about the different roles and their responsibilities, please download our AI Governance handbook.
Regulations, model risk management and model documentation
As for regulations and risk management, these are areas where banks cannot afford to cut corners. To be successful with AI, banks must follow the rules and regulations concerning the collection of personal data. It is also important to ensure that all processes and model predictions can be clearly explained, analyzed, and validated.
Aspects like Fairness, Explainability, Accountability, and Trustworthiness (FEAT) are becoming increasingly important for all stakeholders in the financial sector. Regulators will be monitoring and checking in on these variables, and we can expect them to tighten their grip in the coming years.
New regulations for banks
In fact, new model risk management requirements have already been implemented in several jurisdictions. In the US, SR 11-7 is the Federal Reserve's guidance on model risk management for financial institutions. In Europe, the ECB launched the Targeted Review of Internal Models (TRIM) to reduce inconsistencies in how banks calculate risk-weighted assets.
Although regulators and regulations may present obstacles when working with data and AI, they should not stop you completely. These regulations simply drive the need for internal AI governance capabilities that provide increased model transparency. If you have an AI governance charter in place, with functions that ensure compliance and model documentation, you are in the clear to work with and gain value from data and AI.
It has been shown that AI can successfully help detect and prevent fraud in banking. However, fighting fraud through AI requires a strategy for transitioning from static and rule-based models towards learning models. In addition, your IT department needs the tools and infrastructure to cater to these learning models. This can be addressed with a common AI platform that supports collaboration for data scientists, business stakeholders, developers, compliance officers, and IT operations.
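The transition from hard-coded rules to a learning model can be sketched in miniature: instead of fixed thresholds, a model learns its own decision boundary from labelled historical transactions. The synthetic data, features, and hand-rolled gradient descent below are illustrative assumptions only; a production system would rely on a governed ML pipeline rather than code like this:

```python
# A minimal sketch of a learning model for fraud scoring: logistic regression
# trained by gradient descent on synthetic labelled transactions.
# All data, features, and hyperparameters are illustrative assumptions.
import math
import random

random.seed(0)

def make_tx():
    """Synthetic (features, label): fraud skews towards large, foreign transactions."""
    amount = random.random()                  # normalised amount in [0, 1]
    foreign = float(random.random() < 0.3)    # 1.0 if cross-border
    fraud = 1 if (amount > 0.7 and foreign) else 0
    return [1.0, amount, foreign], fraud      # leading 1.0 acts as the bias term

data = [make_tx() for _ in range(500)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def score(w, x):
    """Fraud probability estimate for feature vector x under weights w."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)))

# Plain batch gradient descent on the log-loss.
w = [0.0, 0.0, 0.0]
learning_rate = 0.5
for _ in range(2000):
    grad = [0.0, 0.0, 0.0]
    for x, y in data:
        err = score(w, x) - y
        for i in range(3):
            grad[i] += err * x[i]
    for i in range(3):
        w[i] -= learning_rate * grad[i] / len(data)

risky = score(w, [1.0, 0.95, 1.0])  # large foreign transaction
safe = score(w, [1.0, 0.05, 0.0])   # small domestic transaction
print(f"risky={risky:.2f} safe={safe:.2f}")
```

The point of the sketch is the shift in where the decision boundary comes from: here it is fitted to data, so it can be retrained as fraud patterns change, whereas the static rules in a traditional system must be rewritten by hand.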
About the author
Senior Advisor, 2021.AI
Christian is a passionate Senior Advisor with 20 years of experience within the Fintech market. He has a proven track record in Fintech, Credit & Market Risk, Compliance, Project, Program, and Portfolio Management. Previously, he worked as a Director at Saxo Bank and spearheaded the Global Enterprise Risk initiative.