Automation meets finance.

Author: Graham Buck

Banks are already devoting money and resources to artificial intelligence (AI), but the really big spending is likely to come over the coming decade. Analysts at IHS Markit predict that by 2030 the business value of AI in banking will reach $300 billion, up from an estimated $41.1 billion last year.

The group’s assessment also appears to break with optimistic reports of how the advent of AI technology will liberate the banking sector’s employees, taking over routine manual tasks and freeing up their time for more valuable work. Markit believes that millions of jobs worldwide will simply become redundant, with 1.3 million US workers and 500,000 in the UK no longer needed. That’s on top of the risk of programmers’ biases sneaking into the software.

To be sure, AI delivers benefits. Don Tait, Markit’s principal analyst, believes that AI technology will make the banking sector “more humane and intelligent.” AI software such as IBM’s Watson is already being deployed to eliminate malpractices such as rogue trading.

However, a question increasingly asked is whether the new AI-centric banking world can be free of the biases and assumptions that prevailed in the pre-automation era. Undesirable bias arises when an AI solution reflects the values of its human designer rather than reality.

Take, for example, a mortgage application. Even with recent efforts to break down gender bias, too often banks will assume that a male applicant is better placed to meet the regular repayments than a female applicant on an identical salary, suggests Stephen Brobst, chief technology officer at data analytics group Teradata.

There is a significant risk that these prejudices will persist as the AI revolution takes effect in the 2020s, despite bias in AI solutions being targeted by the European Union’s General Data Protection Regulation (GDPR), as well as the California Consumer Privacy Act (CCPA) and New York City’s Algorithmic Accountability Bill.

Correlations in a large data set may not be obvious, so there is the potential for bias to be introduced inadvertently. As Brobst observes: “An algorithm is only as good as the data we use to train it.” The question is how many banks embarking on major AI initiatives will take this into consideration.
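Brobst’s point can be illustrated with a small sketch. The code below builds a purely hypothetical, synthetic set of historical lending decisions in which applicants with identical salaries were approved at different rates by gender, then trains a naive frequency-based “model” on those labels. All names, numbers, and the decision rule are assumptions for illustration, not any bank’s actual method; the point is only that a model trained on biased decisions reproduces the bias.

```python
import random

random.seed(0)

# Hypothetical synthetic training data: (salary, gender, approved).
# The historical labels are deliberately biased: at the same salary,
# male applicants were approved more often.
def make_record():
    salary = random.randint(30, 90)            # salary, in thousands
    gender = random.choice(["M", "F"])
    base = salary / 100                        # higher salary, higher approval odds
    bias = 0.15 if gender == "M" else -0.15    # historical human bias baked in
    approved = random.random() < base + bias
    return salary, gender, approved

data = [make_record() for _ in range(10_000)]

# A naive "model": approve if the historical approval rate for applicants
# of the same gender with a similar salary exceeds 50%.
def predict(salary, gender):
    similar = [a for s, g, a in data if g == gender and abs(s - salary) <= 5]
    return sum(similar) / len(similar) > 0.5

# Two applicants on an identical salary receive different outcomes,
# because the model has faithfully learned the bias in its training data.
print(predict(60, "M"), predict(60, "F"))
```

Nothing in the model’s code mentions gender as a policy; the disparity comes entirely from the training labels, which is exactly the inadvertent route Brobst warns about.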