Financial institutions have long relied on human judgment to calibrate systems that help spot potentially risky transactions and customers. Now, Google Cloud wants them to let its artificial intelligence technology take greater control of that process.
Alphabet’s cloud business on Wednesday announced the launch of a new AI-driven anti-money-laundering product. Like many other tools already on the market, the company’s technology uses machine learning to help clients in the financial sector comply with regulations that require them to screen for and report potentially suspicious activity.
Where Google Cloud aims to set itself apart is by doing away with the rules-based programming that is typically an integral part of setting up and maintaining an anti-money-laundering surveillance program—a design choice that goes against the prevailing approach to such tools and could be subject to skepticism from some quarters of the industry.
Its launch comes as leading U.S. tech companies are flexing their artificial intelligence capabilities following the success of generative AI app ChatGPT and a race by many in the corporate world to integrate such technology into a range of businesses and industries.
Financial institutions for years have relied on more traditional forms of artificial intelligence to help them sort through the billions of transactions some of them facilitate every day. The process typically starts with a series of human judgment calls; machine-learning technology is then layered in to create a system that enables banks to spot and review activity that might need to be flagged to regulators for further investigation.
Google Cloud’s decision to do away with rules-based inputs to guide what its surveillance tool should be looking for is a bet on AI’s power to solve a problem that has dogged the financial sector for years.
Depending on how they are calibrated, a financial institution’s anti-money-laundering tools can flag too little or too much activity. Too few alerts can lead to questions—or worse—from regulators. Too many can overwhelm a bank’s compliance staff, which is tasked with reviewing each hit and deciding whether to file a report to regulators.
Manually inputted rules drive up those alert volumes, Google Cloud executives argue. A user, for example, could tell the program to flag customers who deposit more than $10,000 or who send multiple transactions of the same amount to more than 10 accounts.
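The kind of hand-written rules described above can be sketched in a few lines of code. This is a hypothetical illustration only: the thresholds, field names, and rule logic are assumptions for the example, not drawn from Google Cloud's product or any real bank's surveillance program.

```python
from collections import defaultdict

# Illustrative thresholds mirroring the two example rules in the text.
DEPOSIT_THRESHOLD = 10_000  # flag single deposits above this amount
FANOUT_LIMIT = 10           # flag same-amount transfers to more than 10 accounts

def flag_customers(transactions):
    """Return the set of customer IDs tripped by either example rule."""
    flagged = set()
    # (customer, amount) -> distinct destination accounts seen
    fanout = defaultdict(set)
    for t in transactions:
        if t["type"] == "deposit" and t["amount"] > DEPOSIT_THRESHOLD:
            flagged.add(t["customer"])
        elif t["type"] == "transfer":
            fanout[(t["customer"], t["amount"])].add(t["to_account"])
    for (customer, _amount), accounts in fanout.items():
        if len(accounts) > FANOUT_LIMIT:
            flagged.add(customer)
    return flagged
```

Every threshold in a rule set like this is a human judgment call, which is exactly the calibration problem the article describes: set the numbers low and the alert queue balloons; set them high and suspicious activity slips through.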
As a result, the number of system-generated alerts that turn out to be bad leads, or what the industry calls “false positives,” tends to be high. Research by Thomson Reuters Regulatory Intelligence puts the percentage of false positives generated by such systems at as high as 95%.
With Google Cloud’s product, users won’t be able to input rules, but they will be able to customize the tool using their own risk indicators or typologies, executives said.
By using an AI-first approach, Google Cloud says its technology cut the number of alerts HSBC received by as much as 60%, while increasing their accuracy. HSBC’s “true positives” went up by as much as two to four times, according to data cited by Google.
Jennifer Shasky Calvery, the group head of financial crime risk and compliance at HSBC and the former top U.S. anti-money-laundering official, said the technology developed by Google Cloud represented a “fundamental paradigm shift in how we detect unusual activity in our customers and their accounts.”
For many financial institutions, ceding control to a machine-learning model could be a tough sell. For one, regulators typically want institutions to be able to clearly explain the rationale behind the design of their compliance program, including how they calibrated their alert systems. The usual line of thinking among banks and their regulators is that such systems should be tailor-made to the specific institution and its risk profile.
And while compliance experts say machine-learning-driven anti-money-laundering tools have improved over the years, their limitations have made some in the industry skeptical of their ability to substitute for a human’s capacity to figure out where the risks actually lie.