AI: The Unseen Assistant

Artificial intelligence is already doing a lot for us behind the scenes, and a surge of new and better applications is on the way.

For all the fear-mongering around artificial intelligence (AI) taking our jobs and ruling our lives, it has taken 70 years for the technology to get to the stage where it can perform basic human functions at scale and speed. AI can now beat professional chess players, answer customer queries, detect fraud, diagnose diseases and guide stock market investments. In fact, a lot of our interactions today are already being shaped by mainstream AI without our even knowing it.

And while the world was in lockdown, it did a lot of the things socially isolated humans otherwise couldn’t: processing mortgage holidays and small-business loan applications; tracking personal protective equipment; reducing development time for a Covid-19 vaccine. Without AI, Covid-19 might have been a lot less bearable.

“I worked with some large and small organizations during the pandemic; and if they didn’t have AI, they wouldn’t have been able to respond to increases in customer enquiries,” says Toby Cappello, vice president, Cloud and Cognitive Software Expert Labs at IBM.

GM Financial, the financing arm of the automotive giant, saw live-chat requests on its mobile app soar after Covid-19 hit. An AI assistant handled 50% to 60% of the live requests and resolved approximately 90% of those questions without any human intervention. “I’m seeing tremendous value delivered by AI,” says Cappello. “[It is] transformational, eye-opening and surprising to many organizations.”

During the pandemic, banks tapped AI to speed up document processing—cutting mortgage processing from months to hours, according to Adrian Poole, director of financial services, UK and Ireland, Google Cloud. At DBS in Singapore, chatbots, which typically service more than 80% of information requests in English or Mandarin, helped consumers and corporate customers check to see if they qualified for economic relief measures, explains Jimmy Ng, group chief information officer and head of group technology and operations at DBS.

In Russia, Tinkoff Bank estimates that AI chatbots and voice robots in its call centers save the bank approximately 250 million rubles ($3.3 million) a month. And Konstantin Markelov, vice president of business technologies at Tinkoff, says that by reinforcing its antifraud systems with machine-learning models, the bank has cut payment fraud in half. Beyond chatbots, BBVA in Spain is using AI to more efficiently enhance cybersecurity and anti-money-laundering systems, to risk-score small to midsize enterprises using transactional data, and to analyze customer interactions and communications via multiple channels so they can be dealt with more quickly and effectively.

Beena Ammanath, executive director at the Deloitte AI Institute, says AI is having the biggest impact in data-intensive industries like financial services and pharmaceuticals. A 2020 State of AI in the Enterprise report from Deloitte points to even more novel applications of AI: “from creating the rules for new sports to composing music to finding missing children.” Startups on CB Insights’ 2021 AI 100 list use AI in everything from autonomous vehicles and beehives to waste recycling, elder care, dental imaging, insurance pricing, mineral exploration and climate risk mitigation.

These applications are a far cry from the futuristic consumer-facing inventions—flying cars and robot maids—many people expected from AI. But our expectations of the technology’s capabilities can race ahead of reality.

“I have a good understanding of what AI can and can’t do,” says Stephen Ritter, chief technology officer at San Diego–based digital identity-verification company Mitek Systems, who has worked in machine learning for more than 30 years. “The general public thinks AI is the Jetsons, robots and flying cars. That’s probably not going to happen for decades and decades.”

What exists today is mainstream, task-oriented AI, he explains, as distinct from “artificial general intelligence,” which refers to a time somewhere in the future—2050 by some accounts—when machines could become “super intelligent” and perform any task a human can.

Market intelligence and advisory firm IDC estimates that spending on AI technologies will skyrocket to $110 billion by 2024—more than doubling the estimated $50.1 billion spent in 2020.

But not everyone has boarded the AI train yet. A late-2020 survey of 167 finance organizations in North America, Europe, the Middle East, Africa and Asia Pacific by Gartner saw AI trumped by cloud enterprise resource planning (ERP) systems and advanced data analytics as CFOs’ top technology priorities. Just 13% of CFOs, according to Gartner’s survey, plan to invest in AI in the next three years, compared to 64% who will plow investment into cloud ERP systems.

So why are CFOs less bullish than other parts of the organization when it comes to AI? The biggest hindrance for them, according to Steve Adams, an analyst with Gartner, is the difficulty of predicting, forecasting and measuring AI’s return on investment. “CFOs tend to think about things through the lens of dollars and cents,” he explains. “AI technology is relatively new and there are so many potential applications.” So far, the industries that have found the most use cases for AI are financial services, logistics and transportation; but that’s not to say CFOs aren’t interested, Adams says, adding that they have been actively and thoughtfully asking questions about AI technologies and applications in corporate finance.

Adams doesn’t believe we’ll see happen with AI what happened with blockchain, which only 3% of CFOs voted for as a top technology priority in Gartner’s survey. “Blockchain was going to change the world,” he says, but that turned out not to be the case. “Whether AI meets, exceeds or outperforms depends on our expectations,” he notes. “But if AI doesn’t have access to vast amounts of data, it will be difficult for it to provide truly revolutionary applications.”

Data is the grease for AI, helping it drive richer and seemingly more-accurate interactions between organizations and their customers. For example, one of BBVA’s strategic priorities is to use AI to offer customers more-personalized banking experiences based on their unique financial circumstances.

“We’ve developed forecasting models in order to anticipate their financial situation several weeks in advance,” says Álvaro Martín Enríquez, head of data strategy at BBVA. “Through these models we foresee undesired situations, like insufficient funds in an account to face a direct debit, and we bring this information to their attention together with actionable solutions.” Another AI tool developed by the bank even allows companies to learn the estimated amount of greenhouse gas emissions related to their daily activities.
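BBVA has not published the internals of these models, but the underlying idea is straightforward to sketch. The toy example below (all names, figures and thresholds are invented for illustration) projects an account balance forward on a naive trend and flags any scheduled direct debit the projected balance cannot cover:

```python
# Minimal sketch of a cash-flow early-warning check (illustrative only;
# BBVA's actual forecasting models are not public). It projects a balance
# forward on a naive linear trend and flags upcoming direct debits the
# projected balance cannot cover.

from datetime import date, timedelta

def project_balance(history: list[float], days_ahead: int) -> list[float]:
    """Extrapolate daily balances using the average daily change."""
    daily_change = (history[-1] - history[0]) / (len(history) - 1)
    return [history[-1] + daily_change * d for d in range(1, days_ahead + 1)]

def flag_shortfalls(history: list[float], scheduled_debits: dict, start: date):
    """Return (date, amount, projected balance) for debits at risk."""
    horizon = max((due - start).days for due in scheduled_debits)
    projected = project_balance(history, horizon)
    balance_by_day = {start + timedelta(days=i + 1): bal
                      for i, bal in enumerate(projected)}
    return [(due, amount, balance_by_day[due])
            for due, amount in sorted(scheduled_debits.items())
            if balance_by_day[due] < amount]

# Hypothetical account: balance trending down, a 500.00 debit due in 10 days.
today = date(2021, 5, 1)
history = [900.0, 840.0, 790.0, 730.0, 680.0, 620.0, 560.0]
debits = {today + timedelta(days=10): 500.0}
for due, amount, balance in flag_shortfalls(history, debits, today):
    print(f"Warning: {amount:.2f} due {due}, projected balance {balance:.2f}")
```

A production system would draw on far richer features, but the output, a dated warning paired with a concrete gap, is the kind of “actionable solution” Martín Enríquez describes.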

From Black Box to Glass Box

The use of customer data—or any data for that matter—by AI algorithms can raise a host of regulatory, ethical and moral questions. How is the data being used, how accurate is it and how transparent is that process to the end consumer?

“For us to get more comfortable with AI, we need to have more transparency,” says Lisa Palmer, chief technical adviser at Splunk, a data software company that investigates, monitors, analyzes and acts on data. “There may be situations where people have a discomfort level caused by not knowing what they’re interacting with and how decisions are being made. This is what we mean by explainable AI: making the ‘black box’ a ‘glass box.’ I don’t think we’ll get past the social angst around AI until we have this explainability.”

Conventional AI techniques like machine learning, deep learning and neural networks have an Achilles’ heel, says Yonatan Hagos, chief product officer at AI software-engineering company Beyond Limits: they cannot explain how they arrive at an answer. Hagos says cognitive AI solutions like the one Beyond Limits uses take large data sets, then apply a layer of human knowledge and business logic to provide more-accurate recommendations.

“Credit and loan candidate identification is a great example of this, where you have large quantities of data but also need to apply a certain layer of domain expertise,” he explains. Explainable AI is necessary, says Hagos, in high-value, high-risk industries like energy, health care and finance, as it provides users with transparent and interactive audit trails explaining recommended operational remedial actions.
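Beyond Limits does not disclose its implementation, but the pattern Hagos describes, a statistical score overlaid with explicit domain rules where every rule that fires leaves a human-readable trace, can be sketched roughly as follows (the rules, thresholds and names are illustrative assumptions, not the company’s actual logic):

```python
# Illustrative sketch (not Beyond Limits' actual system): a numeric model
# score is combined with explicit, human-readable business rules, and every
# rule that fires is recorded so the final recommendation can be explained.

from dataclasses import dataclass, field

@dataclass
class Decision:
    score: float                      # raw model output, assumed in [0, 1]
    audit_trail: list[str] = field(default_factory=list)

def apply_domain_rules(applicant: dict, model_score: float) -> Decision:
    decision = Decision(score=model_score)
    decision.audit_trail.append(f"Model score: {model_score:.2f}")

    # Each rule encodes a piece of domain expertise and explains itself.
    if applicant["debt_to_income"] > 0.45:
        decision.score -= 0.20
        decision.audit_trail.append(
            "Rule: debt-to-income above 45% -> score reduced by 0.20")
    if applicant["years_with_bank"] >= 5:
        decision.score += 0.05
        decision.audit_trail.append(
            "Rule: customer for 5+ years -> score raised by 0.05")

    verdict = "approve" if decision.score >= 0.6 else "refer to underwriter"
    decision.audit_trail.append(
        f"Outcome: {verdict} (final score {decision.score:.2f})")
    return decision

result = apply_domain_rules(
    {"debt_to_income": 0.50, "years_with_bank": 7}, model_score=0.72)
print("\n".join(result.audit_trail))
```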

An October 2019 report by the Bank of England and the UK’s Financial Conduct Authority regarding machine learning in UK financial services highlights potential risks around explainability, “meaning that the inner working of a model cannot always be easily understood and summarized,” and associated data-quality issues (including biased data) that the report’s authors note could negatively impact “consumers’ ability to use products and services, or even engage with firms.”

At Tinkoff Bank in Russia, Markelov says the bank does not use an AI algorithm as the final decision-maker in credit scoring but instead incorporates a neural-network-derived score as one input. A separate model, he says, allows the bank to smooth over any outliers in AI scoring.
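Tinkoff has not detailed this setup publicly, but the arrangement Markelov describes might look something like the sketch below, in which the neural-network score is clamped into a trusted range and blended with other inputs rather than deciding on its own (the weights and ranges are invented for illustration):

```python
# Illustrative only (Tinkoff's models are not public): a neural-network
# score enters credit scoring as one input among several, and a simple
# guardrail winsorizes outlying scores before they are combined.

def winsorize(score: float, lo: float = 0.05, hi: float = 0.95) -> float:
    """Clamp an outlying model score into a trusted range."""
    return max(lo, min(hi, score))

def combined_score(nn_score: float, bureau_score: float,
                   behavior_score: float) -> float:
    """Weighted blend; the NN is an input, not the final decision-maker."""
    nn = winsorize(nn_score)
    return 0.4 * nn + 0.4 * bureau_score + 0.2 * behavior_score

# An extreme NN output (0.99) is tempered to 0.95 before blending.
print(combined_score(nn_score=0.99, bureau_score=0.55, behavior_score=0.60))
```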

Ng of DBS Bank in Singapore says its virtual bank-recruiting tool, Jobs Intelligence Maestro (or JIM), which it launched in 2018 for higher-volume roles, helps remove unconscious human bias in the screening process by specifically focusing on skills required for each role. “That said, we do incorporate several safeguards, including a regular review of algorithms to ensure that we do not set in bias,” he says.

DBS also uses a data-governance framework called PURE (Purposeful, Unsurprising, Respectful, Explainable), against which it assesses all its AI data-use cases. “We try to be respectful of privacy and look at all data through these four lenses,” says Ng.

Yet, he notes, privacy is subjective. “In China where there are potentially cameras everywhere, it’s probably less of an issue if you use personal data,” he explains. “For each country, it’s very different. These questions have to be asked and tailored to each country.”

Despite industry efforts to keep AI honest, some high-profile incidents have made AI bias a top regulatory and public concern. Last July, MIT withdrew a dataset that had been widely used to train machine learning models to identify objects and people in still images, because it used derogatory language to refer to women and people from minority backgrounds. In 2018, Amazon stopped using a recruitment tool that screened job applicants after it was shown to be biased against women.

Concerns have also been raised about facial-recognition technology. Several US cities, including San Francisco and Portland, have banned its use by local government; only Portland has banned its use by private-sector entities. In the US, the proposed 2019 Algorithmic Accountability Act could require companies to monitor and repair “discriminatory algorithms.” The European Commission last month announced its proposal of new regulations to ban “AI systems considered a clear threat to the safety, livelihoods and rights of people.” This would include the use of facial-recognition technologies for indiscriminate surveillance, as well as algorithms used for “social scoring” and recruitment.

If AI is developed by a diverse group of engineers, that should help counteract possible implicit bias, says Hagos. Deloitte’s Ammanath still believes a lot of good can come from AI, as long as it is applied thoughtfully.

“Right now we’re having the right conversations around ethics, how you protect humans and what new jobs look like,” she says, noting that three years ago there were fewer such discussions.

There are also ways of addressing bias in data by using synthetic, or artificially created, data. “One way of applying synthetic data would be to identify data that is flawed (racist or homophobic) and replace the flawed elements with ‘clean’ data,” says Splunk’s Palmer. “Doing so would allow for [machine learning] models to learn based upon desired inputs versus flawed inputs. Such an approach would allow for creation of models purposefully designed for desired outcomes. For example, if a credit grantor wanted to create a model designed for racial equity versus racial equality, they could offer better loan rates and improved credit card offers to a targeted group.”
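Palmer describes the approach only in general terms; a toy version of the idea, flagging flawed records and backfilling with synthetic substitutes resampled from the clean remainder, might look like this (the flawed-term markers and sample records are placeholders, and a production pipeline would likely generate synthetic records with a dedicated model rather than by resampling):

```python
# Toy sketch of Palmer's idea (illustrative only): flag training records
# whose text contains flawed language, drop them, and backfill with
# synthetic records sampled from the clean portion of the data.

import random

FLAWED_TERMS = {"slur_a", "slur_b"}   # placeholder markers for flawed content

def is_flawed(record: dict) -> bool:
    return any(term in record["text"].lower() for term in FLAWED_TERMS)

def rebalance(dataset: list[dict], seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    clean = [r for r in dataset if not is_flawed(r)]
    n_removed = len(dataset) - len(clean)
    # Synthesize replacements by resampling clean records; a real pipeline
    # might instead use a generative model tuned to the desired distribution.
    synthetic = [dict(rng.choice(clean), synthetic=True)
                 for _ in range(n_removed)]
    return clean + synthetic

data = [
    {"text": "applicant repaid prior loan on time", "label": 1},
    {"text": "contains slur_a about the applicant", "label": 0},
    {"text": "stable income for three years", "label": 1},
]
cleaned = rebalance(data)
print(len(cleaned), sum(1 for r in cleaned if r.get("synthetic")))
```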

At the bare minimum, says Mitek’s Ritter, a public debate is needed around the ways in which AI is being used. “What I’d like to see is more clear-cut rules and frameworks to avoid bad outcomes,” he says. “I’d like to see governments come in and provide a framework for how we move forward. I’m excited to see what the next 10 years brings. If we can avoid some of the big mistakes, that will make the technology much better.”
