Advocating for fairness and transparency in financial services AI

Financial services institutions are increasingly using artificial intelligence (AI) to automate and augment their decisions. But when it comes to which AI algorithms to use, fairness and transparency must factor into the equation.

Black box AI: a cautionary tale

Some of the algorithms under heavy consideration today are called “black box” algorithms. Black box algorithms can be very accurate, but they offer no way to interpret how their results are reached. To illustrate both the appeal and the danger of these models, consider a model that predicts whether someone should be given a bank loan. A black box model can be very accurate at estimating default risk, but if anyone asks why a loan was denied, the only response available is, “We’re not sure why, but we do know that this particular applicant is likely to become a liability for us down the road.”

While black box models are an area of enormous innovation and progress, regulators typically bar financial services companies from using them. The industry is fortunate to have guidance and regulatory oversight that keeps black box AI out of lending and risk decisions. When companies collect data on individuals or populations and develop machine learning (ML) models with it, they must always consider harm prevention, and financial services regulators see to that.

The problem, however, is that in many non-regulated industries, black box models may continue to abound, because their high accuracy makes them tempting. The reality is that using black box models exposes any company to major risks.

AI models under examination

In order to preserve the integrity of the data science profession and safeguard society as a whole, we need to evolve beyond black box models to create interpretable and explainable models that are actionable and high-performing. These methods will allow financial services to:

  • Mitigate bias.
  • Understand why models are performing the way they are.
  • Build pipelines that are open, understandable, and explainable.
  • Make clear to regulators exactly how they’re using ML.
  • Give guarantees to regulators of fair risk assessment practices.

Let’s take a closer look at the fight for fairness and transparency in AI. First, a quick primer on the two types of AI we want to encourage firms to pursue: interpretable AI and explainable AI.

Interpretable AI

Interpretable AI works through cause and effect. Given Data Input 1 and Data Input 2, we can predict the results to be Output A, Output B, and Output C, even though we do not know exactly how the model arrived at those outputs. Nevertheless, this method gives us the ability to interpret results and draw conclusions from them.

One example of interpretable AI is a credit score. Working with known data inputs—e.g. a person’s family structure, salary, and credit history—we can predict what their credit score will be. A team can also build interpretability onto an existing black box model in code if they’re willing to apply the methodology.
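As a rough illustration of attaching an interpretation to an otherwise opaque model (not a method prescribed by this article), the sketch below trains a black box classifier on synthetic, made-up loan data and uses scikit-learn’s permutation importance to estimate how much each input drives the predictions. The feature names and thresholds are purely hypothetical.

```python
# A minimal sketch: post-hoc interpretation of a "black box" credit-risk
# model via permutation importance. All data and feature names are synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # salary
    rng.integers(300, 850, n),       # credit history score
    rng.integers(0, 6, n),           # number of dependents
])
# Synthetic label: "default" loosely tied to low salary and weak credit history.
y = ((X[:, 0] < 40_000) & (X[:, 1] < 600)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but opaque model.
model = GradientBoostingClassifier().fit(X_train, y_train)

# Permutation importance asks: how much does performance drop when one
# input is shuffled? That attaches an interpretation to the black box.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["salary", "credit_history", "dependents"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The interpretation here is attached after the fact: it tells us which inputs matter, but not the exact path from inputs to output.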

Explainable AI

Explainable AI refers to models where a person can explain why the model is predicting what it’s predicting, in a format understandable by humans. In other words, we can explain the results because we know the path the model is taking to get to the results.

A classic example is linear regression, which has been used in mathematics for well over a hundred years. With a linear regression model, each coefficient tells us exactly how much each input contributes to the prediction, so we know exactly how we arrive at the result. Explainability has to be built into models from the start. It can’t be retrofitted. One example from Microsoft is EBM, the Explainable Boosting Machine, which uses modern machine learning techniques such as bagging and gradient boosting to revive traditional GAMs (Generalized Additive Models).
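As a rough sketch (not taken from this article), the snippet below fits a plain linear regression on synthetic, made-up data: each learned coefficient states exactly how a one-unit change in an input moves the prediction, which is what makes the model explainable by construction. The comment at the end points to the open-source InterpretML package, which provides the Explainable Boosting Machine mentioned above.

```python
# A minimal sketch of a model that is explainable by design: in linear
# regression, the coefficients spell out the path from inputs to output.
# All feature names and data below are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 500
salary = rng.normal(50_000, 15_000, n)
credit_history_years = rng.integers(1, 30, n)
# Synthetic "credit score" target with a known linear structure plus noise.
score = 300 + 0.004 * salary + 8 * credit_history_years + rng.normal(0, 20, n)

X = np.column_stack([salary, credit_history_years])
model = LinearRegression().fit(X, score)

for name, coef in zip(["salary", "credit_history_years"], model.coef_):
    print(f"{name}: {coef:+.4f} points per unit")
print(f"intercept: {model.intercept_:.1f}")

# The open-source InterpretML package offers the Explainable Boosting
# Machine referenced above, e.g.:
#   from interpret.glassbox import ExplainableBoostingClassifier
#   ebm = ExplainableBoostingClassifier().fit(X_train, y_train)
#   ebm.explain_global()   # per-feature contribution curves
```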

Considerations for financial services undertaking AI

At a high level, what are the key considerations for financial services organizations that want to use AI and ML today to improve products, optimize operations, and better serve customers? There are three.

  1. Collecting and securing data

The first consideration for any firm looking to use responsible AI is collecting and securing data. When it comes to building an AI model, very often the more data the better—but it’s possible to collect the wrong or incomplete data. Organizations must determine what kind of data they need, whether they’re allowed to collect it, which privacy regulations apply to it, and the potential reputational damage if a breach occurs. When it comes to securing data, organizations must consider who needs access to it and how to keep it secure. Data governance addresses these considerations, but AI raises the stakes and broadens the scope of what good governance must cover.

  2. Extracting value from data, responsibly

The second consideration is figuring out how to get real value out of the model being built—and therefore out of the underlying data used to build it. Often, companies will come up with an idea and ask, “Can we build this? Can we build a model that could predict X, Y, or Z?” But they very rarely ask, “Should we build this?”

  3. Cultural readiness for AI

Thirdly, AI adoption is culturally driven. If a company isn’t ready for AI—if it hasn’t laid the groundwork for this kind of innovation in its infrastructure and among its employees—then AI won’t be embraced. Firms can’t just hire data scientists and hope for the best. They have to create a culture that welcomes innovation, tolerates experimentation, and is hungry to capture meaningful data in a responsible way.

Next steps for driving AI in financial services

Microsoft advocates a Responsible AI approach guided by the following principles: Fairness, Reliability and Safety, Privacy and Security, Inclusiveness, Transparency, and Accountability. Are you interested in the conversation about responsible, ethical AI in financial services today?