What is Explainable AI (XAI)?
The black box challenge surrounding machine learning has been discussed at length for years, for one main reason: the need for trust. Why is the machine making this decision? On what basis is the decision being made? It is uncomfortable, and potentially dangerous, not to know these answers when you are making important business bets on a machine's decisions that you don't thoroughly understand. This is where the demand for Explainable AI (XAI) originates.
As David Gunning, program manager of the Explainable AI program at DARPA (Defense Advanced Research Projects Agency), stated in his comprehensive overview of the need for Explainable AI in August 2016:
● We are entering a new age of AI applications
● Machine learning is the core technology
● Machine learning models are opaque, non-intuitive and difficult for people to understand
DARPA has since provided $6.5M in funding to a group of academic teams to develop explainable AI alternatives.
Explainable AI, simply put, is the ability to explain a machine learning prediction. Many substitute a global explanation of what drives an algorithm overall for true explainability. And some equate regression modeling with machine learning because it can provide a set of explainable factors behind a prediction, but regression modeling is not the same as machine learning. Explainable AI (XAI) is any machine learning technology that can accurately explain a prediction at the individual level.
Why is Explainability Important?
Limitations of Common Machine Learning Applications
Traditional AI includes a variety of different approaches to machine learning such as:
● Decision trees
● Neural networks
● Deep learning
In each case, the complexity of the method itself obscures the original factors that drive the algorithm's predicted outcome. Regression models do provide an explanation, but as previously stated, they are not machine learning and suffer from lower accuracy relative to their machine learning counterparts.
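To make the contrast concrete, here is a minimal sketch in Python. The dataset, feature meanings, and model choices are illustrative assumptions, not drawn from this article: a logistic regression exposes one global set of coefficients, while a gradient-boosted ensemble spreads its logic across a hundred trees that cannot be read off as a simple set of factors.

```python
# Sketch: an explainable regression model vs. an opaque ensemble.
# The synthetic data and feature meanings are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))               # e.g. tenure, spend, support calls
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Regression: one global set of coefficients "explains" every prediction.
reg = LogisticRegression().fit(X, y)
print("global coefficients:", reg.coef_)

# Gradient boosting: a hundred trees vote; no single set of factors
# carries through to any individual prediction.
gbm = GradientBoostingClassifier().fit(X, y)
print("number of trees in the black box:", gbm.n_estimators)
```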
Explainable Machine Learning Approaches
Researchers are looking for ways to retroactively “bolt on” explanations to machine learning predictions, which has proved very challenging. One concern is how accurately the explanation fits the actual prediction. In other words, the factors behind each prediction are weighted differently, and isolating those differences has to be done one prediction at a time based on the input data.
The bottom line is that this is difficult, slow, and cumbersome. Yet many organizations still employ these explainable machine learning tactics. Two common examples of these approaches are LIME and Graph Technology, both of which generate explanations after the predictions have been created.
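The sketch below illustrates the bolt-on pattern with a hand-rolled, LIME-style local surrogate: perturb the input, query the black box, and fit a weighted linear model around one prediction. This is an illustration of the idea, not the actual LIME library; the model, data, and kernel width are assumptions.

```python
# Sketch of a LIME-style "bolt-on" explanation: a local linear surrogate
# fitted around ONE prediction of a black-box model. Illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))
y = (X[:, 0] * X[:, 1] > 0).astype(int)      # nonlinear ground truth
black_box = RandomForestClassifier().fit(X, y)

def explain_instance(x, n_samples=1000, width=0.75):
    """Perturb x, weight samples by proximity, fit a local linear model."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    preds = black_box.predict_proba(Z)[:, 1]
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / width ** 2)   # proximity kernel
    surrogate = Ridge().fit(Z, preds, sample_weight=weights)
    return surrogate.coef_                         # per-feature local weights

print("local explanation for one prediction:", explain_instance(X[0]))
```

Note that `explain_instance` must be run separately for every prediction, which is exactly why the bolt-on approach becomes slow and cumbersome at scale.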
The only explainable machine learning method where predictions and explanations are developed simultaneously is Similarity. In other words, the explanation is not a bolt-on but an inherent part of the method used to create the prediction. In the case of Similarity, the drivers of the predicted outcome are defined by known similar objects. Historically, this method has been difficult to scale and, although highly accurate, has not been widely used. However, recent breakthroughs in the scalability of Similarity have ignited newfound interest in its commercial viability.
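A minimal sketch of the idea follows, using a k-nearest-neighbor model as a stand-in for a Similarity method (the data and neighbor count are illustrative assumptions): the same stored similar cases that produce the prediction also serve as its explanation, so nothing has to be bolted on afterward.

```python
# Sketch of a similarity-based approach: the known similar objects that
# drive the prediction ARE the explanation. Illustrative stand-in only.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = (X.sum(axis=1) > 0).astype(int)

nn = NearestNeighbors(n_neighbors=5).fit(X)

def predict_and_explain(x):
    _, idx = nn.kneighbors(x.reshape(1, -1))
    neighbors = idx[0]
    prediction = int(y[neighbors].mean() >= 0.5)   # vote of similar objects
    # The explanation is simply the known similar cases and their outcomes.
    return prediction, list(zip(neighbors.tolist(), y[neighbors].tolist()))

pred, explanation = predict_and_explain(X[0])
print("prediction:", pred, "| similar cases (index, outcome):", explanation)
```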
The Value of XAI vs. Black Box Approaches
First, there are the ethical, compliance, and visibility considerations: XAI supports businesses in a safe way that provides enterprise value in its own right.
In terms of value for business applications, the examples below show how XAI benefits organizations in areas such as marketing, fraud, and anomaly detection.
Examples of Explainability (XAI) in Action
Customer Churn
Fraud Detection
Detecting Illegitimate Transactions
XAI enables actionable business decisions, transparency, trust, and contextual relevance, and it makes it possible to meet legislative and regulatory requirements. Given this plethora of benefits, companies should design XAI into their machine learning business applications from the beginning. That way, they will never have to replace what they are building today with these highly advanced technologies.