Open Banking / Open Finance


How AI is being used in open banking and open finance

Written by Mariam Sharia
Updated at Wed Apr 17 2024

Who should read this:

Anyone working in open banking or open finance with an interest in AI

What it’s about:

A look at some of the current AI use cases in open banking and open finance and challenges to be aware of.

Why it’s important:

There is a lot of hype around AI and its potential in a range of industries, including in open banking and open finance. Ensuring AI benefits the ecosystem is a current concern.

When it comes to AI, there’s one thing we can all agree on: there's a lot of talk. The overwhelming buzz and investor captivation around AI pushed Wall Street into a bull market, but considering the varying usefulness of AI tools and features across industries, how much of the current AI fascination is just hype?

While AI has shown promising potential in sectors like healthcare, its application in banking paints a more nuanced picture.

Welcome to our tech policy debates series

Who benefits from AI in open banking and open finance?

We welcome Mariam Sharia, tech policy analyst and writer, to our team to share key questions on current tech policy topics facing the open ecosystem world. While Mariam shares key risks and challenges in each piece, the Platformable team will respond in a subsequent blog post with principles, processes, techniques, and tools for you to best prepare to deal with these challenges in your own organisation.

AI in open banking isn’t new—so why the buzz? 

Financial institutions have been using machine learning software to onboard clients, assess credit risk, and identify fraud for years, while on the customer-facing side, chatbots and robo-advisors have stepped in to minimize the human aspects of customer service and portfolio management. 

Finance bros have loudly touted GenAI as a game-changer, promising to revolutionize the industry and make life easier and better for the average person. And while I agree it will transform things in a big way (though there are questions about how “good” and “easy” those changes will prove to be), it doesn’t seem to be doing that at the moment.

Customer use cases are, at least today, more hype than help.

This is in large part because financial institutions are heavily regulated and don't yet have the infrastructure, or a firm enough grasp of GenAI's risks, to roll out a product that "hallucinates" (lies with confidence) and can't explain how it reaches its decisions.

It's also because the customer use cases being hyped are either unhelpful, like chatbots or wealth trading apps aimed at the wealthy, or, thanks to the ever-blurring lines between Big Banks and Big Tech, have the potential to cause a lot of harm, like BNPL services and, more insidiously, fraud prevention measures.

BNPL—A force for good?

You don’t have to look far to see the ethical failings of “disruptive” fintech. Buy Now, Pay Later (BNPL) service providers had their highest-grossing day on record last year, processing $940 million in payments for online purchases on Cyber Monday.

Like credit cards, companies such as Klarna, Afterpay, and Affirm let customers buy things even if they don’t have funds at the point of sale. But unlike credit cards, these services are not tightly regulated, resulting in an exploitative market with no system of checks and balances. This “lending on steroids” makes it easy for the world’s most vulnerable, least educated people to spiral into debt, and then stack it across different providers.

Supporters argue BNPL services are a force for good, disproportionately benefitting small businesses and democratizing access to credit. While this may be true, the share of late payments and bad debt has been ticking steadily upwards, partly because the majority of consumers come from vulnerable populations: BNPL users tend to be younger, many are people of color, and a third of them have subprime credit. They're also "more than twice as likely to be delinquent on another credit product" and far more likely to overdraft than nonusers.

According to the Citizens Advice charity, only 11% of U.K. retailers told buyers they were actually taking out a credit agreement. And of the one in ten BNPL users who default, most had no idea their information would be passed on to debt collection agencies if they missed a payment; service providers either buried these warnings in their terms or didn't mention them at all.

Europe does not yet have the kind of debt collection system seen in the United States or Australia, where predatory agencies regularly employ aggressive, treacherous, and illegal methods and have all but hijacked the justice system. BNPLs could change that, transforming into an entryway for debt collection that digs vulnerable borrowers into even deeper financial holes.

Any discussion of expanding access to credit should also mention one of GenAI's biggest hurdles in banking: its lack of explainability. How do you explain to someone why they were denied a loan if your AI can't explain its reasoning?

There is no shortage of U.S. financial institutions redlining and discriminating against people of color even today, but at the very least, there is a chance for recourse: you can point to a flawed algorithm as evidence of wrongdoing. The same is not true of GenAI, which, especially if built on biased data, could harm more people than it helps, and which has no legible algorithm explaining how it arrived at its conclusions. A lack of explainability means a lack of accountability, which is unacceptable when it comes to people's finances.
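To make the contrast concrete, here is a minimal sketch, in Python, of what a transparent credit model can do that an opaque GenAI system cannot: attach a per-feature "reason code" to every denial. The feature names, weights, and applicant data are all invented for illustration; this is not any bank's actual model.

```python
# A minimal sketch of explainable credit scoring via logistic regression.
# Feature names, weights, and applicant data are all invented.
import numpy as np

FEATURES = ["credit_utilization", "late_payments_12m",
            "income_to_debt_ratio", "account_age_years"]
WEIGHTS = np.array([-2.1, -1.4, 1.8, 0.3])  # hypothetical trained coefficients
BIAS = -0.5

def approval_probability(applicant: np.ndarray) -> float:
    """Logistic regression: squash the weighted sum into a probability."""
    z = WEIGHTS @ applicant + BIAS
    return 1.0 / (1.0 + np.exp(-z))

def reason_codes(applicant: np.ndarray, top_n: int = 2) -> list[str]:
    """Name the features that dragged the score down the most."""
    contributions = WEIGHTS * applicant
    worst = np.argsort(contributions)[:top_n]  # most negative contributions
    return [f"{FEATURES[i]} lowered your score by {abs(contributions[i]):.2f}"
            for i in worst]

applicant = np.array([0.9, 3.0, 0.4, 1.5])  # made-up applicant
print(f"approval probability: {approval_probability(applicant):.2f}")
for reason in reason_codes(applicant):
    print(reason)
```

A linear model can always decompose its decision into per-feature contributions like this; a large generative model offers no equivalent ledger to hand a rejected applicant or a regulator.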

“Smart” chatbots aren’t that smart

Two of the more loudly hyped AI uses in banking revolve around chatbots and fraud prevention. The former is innovation theater—anyone who has ever “talked” to a chatbot knows they are for the most part stupid and unhelpful. Banking chatbots feel less like a streamlined tool and more like an intentional barrier to getting a human on the line.

To extend our BNPL example, Klarna announced last month that its new AI assistant is doing the equivalent work of 700 full-time customer support agents, producing a 25% drop in repeat inquiries and cutting the average customer support session from eleven minutes to two. While the assistant's benefit to the company is clear (Klarna projects a $40 million profit improvement), its helpfulness for customers is suspect.

Software engineer Gergely Orosz called it “underwhelming,” writing that it “recites exact docs and passes me on to human support fast”—an experience mirrored by dozens of other users.

This won’t be the case forever, as algorithms learn to provide more than simple canned replies and eventually make it to other front-end use cases like personalized financial advice. But it’s certainly the case right now.
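For illustration, the pattern users are describing is essentially retrieval with a handoff: match the question against a fixed FAQ, recite the matching doc verbatim, or bail out to a human. A toy sketch (the FAQ entries and similarity threshold are invented for the example):

```python
# A toy version of the "recites exact docs, then hands off" pattern.
# FAQ entries and the similarity threshold are invented for the example.
from difflib import SequenceMatcher

FAQ = {
    "how do i change my payment date":
        "You can change your payment date under Settings > Payments.",
    "what happens if i miss a payment":
        "Missed payments may incur a late fee and can affect your credit.",
}

def answer(question: str, threshold: float = 0.6) -> str:
    best_doc, best_score = "", 0.0
    for known_question, doc in FAQ.items():
        score = SequenceMatcher(None, question.lower(), known_question).ratio()
        if score > best_score:
            best_doc, best_score = doc, score
    if best_score >= threshold:
        return best_doc  # recite the canned doc verbatim
    return "Let me connect you to human support."  # the fast handoff

print(answer("How do I change my payment date?"))  # canned reply
print(answer("Why was my account frozen?"))        # handoff
```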

And that’s not a bad thing! As mentioned, rolling out tech that lies and has the ability to make or break people’s financial lives would be disastrous for all parties.

Lying chatbots also raise the question of liability. Air Canada was recently ordered to refund a grieving passenger after its chatbot incorrectly explained the airline's bereavement policy. The airline had argued it should not be held liable because the chatbot was "responsible for its own actions." The courts ruled in the customer's favor, ordering an $812 partial refund, but the lengths to which Air Canada went to skirt liability should raise flags.

There’s no doubt banks will end up dealing with similar chatbot snafus, and even less doubt they’ll do whatever it takes to avoid having to pay out.

Fraud prevention, but at what cost?

The use of AI on the cybersecurity side is also still in its infancy, as skyrocketing bank fraud has shown. While fraud prevention isn’t strictly consumer-facing, it’s worth mentioning because the way banks go about it will affect consumers in a big way.

GenAI has made it easier and cheaper than ever for bad actors to produce text, audio, and even video that can trick not just potential victims but the latest fraud detection systems. For better or worse, banks are slow to adopt new technology, and launching an attack is cheaper than building a reliable defense.

As a result, consumers reported losing nearly $8.8 billion to scams in 2022, up more than 30% from the previous year. That should raise a flag, considering how banks generally try their hardest to dodge compensating victims of fraud.

Figure: New FTC data show consumers reported losing nearly $8.8 billion to scams in 2022. Source: https://www.ftc.gov/news-events/news/press-releases/2023/02/new-ftc-data-show-consumers-reported-losing-nearly-88-billion-scams-2022

This too will eventually change as banks invest more and more in AI cybersecurity. But improved safety won’t come free. It may cost you your privacy.

KYC

Banks claim KYC ("Know Your Customer") measures like voice recognition, image authentication, and liveness checks are a legal requirement, necessary for safety and regulatory compliance. That's true: regulators all but force banks to spy on their customers.

But this means they aren't just keeping an eye on your money anymore—they're peeking into your personal life, analyzing everything from your social media posts to the tone of your chatbot interactions and emails, gathering every detail to create hyper-detailed profiles.

KYC initiatives also rely on digital IDs, which the World Bank, the World Economic Forum, the IMF, and the UN have all pushed banks to adopt in order to expand financial access and inclusion. Connecting digital IDs with bank accounts would certainly do that, but it's a slippery slope into intrusive surveillance, and one that becomes scarier still when we consider that, just as banks begin to mine data, Big Tech is pushing into banking.

Geopolitical analysis firm GIS poses an interesting question regarding these shifts: why are so many Big Tech companies moving into banking if it opens them up to regulation? They posit that "if these companies were registered as banks, they could conveniently go on violating clients' privacy simply by invoking Know Your Customer requirements." What happens if BNPL providers are granted access to your bank account?

Without robust privacy measures and oversight in the integration of AI technology within banking systems, we could become unwitting members of a surveillance state.

Wealth trading for the wealthy

The democratization of AI in finance has tremendous power to change people’s lives. Personalized coaching and budgeting, access to high-level investment advice, expanding access to credit—there are dozens of uses that would improve financial health for all people across the board. That isn’t our current reality.

In the UK, currently the most advanced open banking market, a good half of open banking apps are for wealth trading rather than budgeting, smoother payments, financial literacy, or giving marginalized communities access to credit. Popular investment platforms like Robinhood and Wealthfront lean heavily on AI algorithms to offer personalized investment advice and portfolio management services, primarily targeting wealthier demographics.

The trend appears to show that rather than improving the average person’s finances, AI in banking is being leveraged by the rich and powerful to accumulate more wealth in a more streamlined way.

Why can't my bank tell me how much house I can afford to buy without stretching myself too thin? Or whether I'm paying too much in rent based on my zip code? Why can't it put together a plan to pay off my student debt, or figure out how much I can afford to spend on a vacation given my savings goals? If my bank already knows the answers to these questions, why don't I?

Partly because of technical and regulatory challenges, partly because banks are still figuring out the risks of AI and how to mitigate them, and partly because banks famously don't love sharing data. Opening and standardizing APIs would cut into their profits, and it isn't mandated by law in the U.S. as it is in the U.K.
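It certainly isn't because the math is hard. As a back-of-the-envelope illustration, the house affordability question above can be sketched with the common 28/36 debt-to-income rule of thumb; the rule, the rate, and the numbers here are illustrative assumptions, not financial advice:

```python
# Back-of-the-envelope affordability sketch using the 28/36 rule of thumb.
# The rule, rate, and numbers are illustrative, not financial advice.

def max_monthly_housing(gross_monthly_income: float,
                        other_monthly_debt: float) -> float:
    """Budget = min(28% of gross income, 36% of gross income minus other debt)."""
    front_end = 0.28 * gross_monthly_income
    back_end = 0.36 * gross_monthly_income - other_monthly_debt
    return max(0.0, min(front_end, back_end))

def max_loan(monthly_payment: float, annual_rate: float, years: int) -> float:
    """Invert the standard fixed-rate mortgage payment formula."""
    r = annual_rate / 12
    n = years * 12
    return monthly_payment * (1 - (1 + r) ** -n) / r

budget = max_monthly_housing(gross_monthly_income=6_000, other_monthly_debt=700)
print(f"monthly housing budget: ${budget:,.0f}")                         # $1,460
print(f"max 30-year loan at 6.5%: ${max_loan(budget, 0.065, 30):,.0f}")  # ~$231,000
```

A bank already holds every input to this calculation; what it lacks is the incentive, and the regulatory comfort, to surface the answer.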

Is AI in banking actually helpful or useful to customers in any way? Not yet. The most loudly publicized uses fall quite short of delivering significant value to the average person.

Since we don’t want our banks beta testing AI on us, that isn’t a bad thing. They’ll probably roll out actually-helpful features in the next five or ten years.

As we navigate the future of AI in banking, it's vital to ensure that digital advancements prioritize the well-being and empowerment of all individuals, rather than exacerbating existing inequalities. Only then can we truly harness the transformative potential of AI to create a more inclusive and equitable financial landscape.

Mariam Sharia

TECH POLICY WRITER
accounts@platformable.com
