Explainable AI

What is Explainable AI (XAI)?

The black box challenge surrounding machine learning has been discussed at length for years for one main reason: the need for trust. Why is the machine making this decision? On what basis is this decision being made? It is uncomfortable, not to mention dangerous, to lack these answers when you are making important business bets on a machine’s decisions that you don’t thoroughly understand. This is where the demand for Explainable AI (XAI) originates.

As David Gunning, program manager for the Explainable AI program at DARPA (Defense Advanced Research Projects Agency), stated in his comprehensive overview of the need for Explainable AI in August 2016:

● We are entering a new age of AI applications
● Machine learning is the core technology
● Machine learning models are opaque, non-intuitive and difficult for people to understand

DARPA has since funded $6.5M for the development of explainable AI alternatives among a group of academic researchers.

Explainable AI, simply put, is the ability to explain a machine learning prediction. Many substitute a global explanation of what drives an algorithm overall for an answer to the need for explainability. Others equate regression modeling with machine learning because it can provide a set of explainable factors behind a prediction, but regression modeling is not the same as machine learning. Explainable AI (XAI) is any machine learning technology that can accurately explain a prediction at the individual level.
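
To make the distinction between global and individual-level explanations concrete, here is a minimal sketch using scikit-learn and synthetic data; the feature names and churn-style framing are purely illustrative. A linear model is chosen only because its coefficients make both views easy to read side by side:

```python
# Minimal sketch: global vs. individual-level explanation.
# Assumes scikit-learn; data and feature names are synthetic/illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                      # 500 synthetic records
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)      # outcome driven by features 0 and 1
feature_names = ["tenure", "monthly_spend", "support_calls"]  # hypothetical names

model = LogisticRegression().fit(X, y)

# Global explanation: one set of weights describing the model as a whole.
print("Global coefficients:", dict(zip(feature_names, model.coef_[0].round(2))))

# Individual-level explanation: per-feature contribution for ONE prediction
# (coefficient x feature value for this specific record).
x = X[0]
contributions = model.coef_[0] * x
print("Prediction for record 0:", model.predict(x.reshape(1, -1))[0])
print("Why, for record 0:", dict(zip(feature_names, contributions.round(2))))
```

For a linear model, the individual-level view falls directly out of the global one; for opaque models such as deep networks, no comparable per-prediction decomposition exists, which is the gap XAI aims to close.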

Why is Explainability Important?

In its coverage of XAI, Cognilytica Research has provided an in-depth summary of simMachines as well as other providers. The researchers state in the briefing:

“The more we involve AI in our daily lives, the more we need to be able to trust the decisions that autonomous systems make. However, it’s becoming harder and harder to understand how these systems arrive at their decisions. Cognilytica believes that Explainable AI (XAI) is an absolutely necessary part of making AI work practically in real-world business and mission-critical situations.”

Forrester’s Brandon Purcell comments in his research paper “The Ethics of AI: How to Avoid Harmful Bias and Discrimination” (March 2018):

“By their very nature, machine learning algorithms can learn to discriminate based on gender, age, sexual orientation, or any other perceived differences between groups of people.”

Ensuring that discriminatory factors, including surrogates for those factors such as geography and education level, don’t unintentionally cross an ethical line requires explainable AI.

Leading consulting firms and market analysts such as Accenture, Forrester, and Cognilytica have commented on the need for “responsible AI” and “ethical AI.” In the Accenture blog post “Why Explainable AI Must Be Central to Responsible AI” (July 2017), Deborah Santiago and Teresa Escrig comment:

“At Accenture, we believe that a responsible AI deployment should incorporate the concept of “Explainable AI” if a company aims for its AI to be honest, fair, transparent, accountable and human-centric and that ongoing investments from the public and private sectors are essential in order to make Explainable AI a reality now.”

Limitations of Common Machine Learning Applications

Traditional AI includes a variety of approaches to machine learning, such as:

● Decision trees
● Neural networks
● Deep learning
In each case, the complexity of the method itself makes it impossible to carry the original factors that drive the algorithm’s predicted outcome through to the output. Regression models, of course, do provide an explanation, but as previously stated they are not machine learning and suffer from lower precision relative to their machine learning counterparts.

Explainable Machine Learning Approaches

Researchers are looking for ways to retroactively “bolt on” explanations to machine learning predictions, which has proved very challenging. One concern is how accurately the explanation fits the actual prediction: the factors behind each prediction are weighted differently, and isolating those differences has to be done one prediction at a time based on the input data.

The bottom line is that this is difficult, slow, and cumbersome. Yet many organizations still employ these explainable machine learning tactics. Two common examples of these approaches are LIME and graph technology, both of which generate explanations after the predictions have been created.
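
As an illustration of the bolt-on pattern, here is a minimal sketch using the open-source LIME library on a random forest trained on synthetic data; the feature and class names are invented for the example. The explanation is produced after and apart from the prediction, one instance at a time:

```python
# Minimal sketch of a post-hoc ("bolt-on") explanation with the LIME library.
# Assumes `pip install lime scikit-learn`; data and class names are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)            # a nonlinear, opaque relationship
names = ["f0", "f1", "f2", "f3"]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=names, class_names=["legit", "fraud"], mode="classification"
)
# One explanation per prediction: LIME perturbs the input and fits a simple
# local surrogate model around it, after the prediction already exists.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())                       # weighted factors for THIS prediction only
```

Because a fresh local surrogate must be fitted for every instance explained, explaining millions of predictions multiplies that cost accordingly, which is the slowness described above.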

The only explainable machine learning method where predictions and explanations are developed simultaneously is Similarity. In other words, the explanation is not a bolt-on but an inherent part of the method used to create the prediction: the drivers of the predicted outcome are defined by known similar objects. Historically, this method has been difficult to scale and, although highly accurate, has not been widely used. However, recent breakthroughs in the scalability of Similarity have ignited newfound interest in its commercial viability.
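
The sketch below illustrates the general principle with a simple nearest-neighbors model on synthetic data; it is a toy rendering of the similarity idea, not simMachines’ actual implementation. The same neighbors that produce the prediction serve as its explanation:

```python
# Minimal sketch of similarity-based prediction: the neighbors that produce
# the prediction are themselves the explanation. A toy illustration of the
# principle, not any vendor's implementation. Assumes scikit-learn.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))                     # historical records
y = (X[:, 0] - X[:, 2] > 0).astype(int)            # their known outcomes

index = NearestNeighbors(n_neighbors=5).fit(X)

def predict_with_why(x):
    """Predict by majority vote of the most similar known records;
    those same records double as the explanation."""
    distances, indices = index.kneighbors(x.reshape(1, -1))
    outcomes = y[indices[0]]
    prediction = int(outcomes.mean() >= 0.5)
    why = list(zip(indices[0].tolist(),             # which known records are similar
                   distances[0].round(2).tolist(),  # how similar they are
                   outcomes.tolist()))              # what happened to them
    return prediction, why

pred, why = predict_with_why(rng.normal(size=3))   # a new, unseen record
print("Prediction:", pred)
print("Why (record id, distance, outcome):", why)
```

The historical scaling difficulty the text refers to lies in the neighbor search itself; making that search fast over large datasets, for example with approximate nearest-neighbor indexes, is what recent scalability work addresses.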

The Value of XAI vs. Black Box Approaches

First, there are the ethical, compliance, and visibility considerations of XAI, which support businesses in operating safely and provide enterprise value in their own right. In terms of value for business applications, XAI also benefits organizations in areas such as marketing, fraud, and anomaly detection, as the following examples illustrate.

Examples of Explainability (XAI) in Action

Customer Churn

Using churn predictions paired with “the Why,” a mobile phone company built dynamic predictive segments of customers likely to churn and reduced its churn rate by 30% year over year.

Fraud Detection

Using XAI, a global e-commerce fraud platform provider was able to increase its fraud detection accuracy by up to 50%, reduce review rates by up to 16%, and uncover newly emerging fraud patterns.

Detecting Illegitimate Transactions

A top trading exchange needed to identify illegitimate trades with greater precision. Using XAI, it achieved a 50% increase in detecting illegitimate transactions and a 30% reduction in false positives, along with an explanation of why each transaction was flagged as illegitimate.

XAI enables actionable business decisions, transparency, trust, and contextual relevance, and it makes it possible to meet legislative and regulatory requirements. Given this plethora of benefits, companies should design XAI into their machine learning business applications from the beginning; that way, they will never have to replace the highly advanced systems they are building today.