Combining human expertise and machine performance to create a more effective anti-fraud solution is one key challenge in applying AI to banking fraud. Another, just as important, is enabling banks to share information securely and compliantly so they can benefit from each other’s experience in detecting and preventing fraud.
Historically, banks have been extremely reluctant to share data due to concerns over competition law, client confidentiality and liability. However, there are ways to overcome these difficulties, using “Collective AI.” NetGuardians has effectively created a consortium of organizations that use its AI-based fraud solution and can therefore take advantage of a network effect – each institution that implements the solution benefits from the insights of all other users.
Collective AI shares statistics on legitimate transactions across the banks that are part of the consortium. Confidentiality is maintained because only statistics on transactions are shared, never the raw transaction data itself. These statistics expand the pool of data that the AI models deployed within each bank can draw on, based on the information contributed by all members of the consortium.
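The source does not describe NetGuardians’ actual data format, but the idea of sharing aggregates rather than raw records can be sketched as follows. The transaction fields, beneficiary names and statistic names here are hypothetical illustrations; only the output of `summarize` would ever leave the bank.

```python
from statistics import mean

# Hypothetical raw transactions held privately by one bank.
# These rows never leave the institution.
raw_transactions = [
    {"beneficiary": "ACME-GMBH", "amount": 1200.0},
    {"beneficiary": "ACME-GMBH", "amount": 1350.0},
    {"beneficiary": "NEWCO-LTD", "amount": 90.0},
]

def summarize(transactions):
    """Aggregate raw transactions into per-beneficiary statistics.

    Individual amounts, dates and account identifiers stay inside
    the bank; only counts and averages are shared with the consortium.
    """
    amounts_by_beneficiary = {}
    for tx in transactions:
        amounts_by_beneficiary.setdefault(tx["beneficiary"], []).append(tx["amount"])
    return {
        b: {"payment_count": len(a), "mean_amount": round(mean(a), 2)}
        for b, a in amounts_by_beneficiary.items()
    }

shared_stats = summarize(raw_transactions)
# shared_stats contains no individual transaction records,
# e.g. {"ACME-GMBH": {"payment_count": 2, "mean_amount": 1275.0}, ...}
```

The privacy property rests on the aggregation step: a consortium member receiving `shared_stats` can enrich its own models but cannot reconstruct any single payment.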
For example, many banks and bank customers make payments to the same counterparties or beneficiaries. But any analysis or profiling of a recipient done by a bank acting alone will be based only on its own information. Another bank that needs to make a first-time payment to the same beneficiary will have no information of its own to refer to. Knowing whether any of its peers has dealt with that beneficiary before and concluded that it is a trusted counterparty or a low fraud risk is extremely valuable. Collective AI enables the second bank to gain that insight and so benefit from the collective experience of its peers without compromising privacy or data security.
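A first-time payment check of this kind could, under stated assumptions, look like the sketch below. The function name, the `banks_paying` statistic, the threshold and the risk labels are all illustrative inventions, not NetGuardians’ API; a real deployment would calibrate such thresholds and use richer pooled statistics.

```python
def first_payment_risk(beneficiary, consortium_stats, min_peers=2):
    """Score a first-time payment using pooled peer statistics.

    consortium_stats maps beneficiary -> {"banks_paying": n}, i.e. how
    many consortium members have previously paid that beneficiary.
    Thresholds and labels here are hypothetical.
    """
    entry = consortium_stats.get(beneficiary)
    if entry is None:
        return "unknown"   # no peer has ever paid this beneficiary
    if entry["banks_paying"] >= min_peers:
        return "low"       # widely trusted across the consortium
    return "review"        # seen, but not broadly enough to trust

# Illustrative pooled view received from the consortium.
consortium_stats = {
    "ACME-GMBH": {"banks_paying": 7},
    "NEWCO-LTD": {"banks_paying": 1},
}
```

So a bank making its very first payment to "ACME-GMBH" would see a low-risk signal drawn entirely from its peers’ experience, while a beneficiary no member has seen stays flagged as unknown.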
The results benefit everyone: the performance of the AI models operating in each consortium bank improves as insights generated across the membership are fed into each bank’s separate system.
The confidentiality and security built into this system by sharing statistics rather than raw data is critical, since regulation of AI is becoming increasingly stringent. For example, in April 2021 the European Commission published proposed legislation[3] to govern the use of AI-based systems, including those used by financial institutions. Banks, alongside other organizations, will be required to show that their systems conform to the new legislation, establish a risk-management system, provide detailed technical documentation of their systems, maintain system logs and report any regulatory breaches.