CIO Journal Coverage
According to CIO Magazine (May 2, 2018), major brands are intensely searching for explainable AI solutions for reasons ranging from guarding against ethical and regulatory breaches to understanding when to override machine-based decisions and how to improve them in the future. Explainable methods bring real value to businesses in the near term and over the long haul.
Because the need is so critical, companies are attempting to address this problem in multiple ways. However, these approaches often require business users to sacrifice precision for explainability by simplifying the model, or the explanation is a bolt-on attempt to provide some degree of inferred prediction drivers. simMachines' proprietary similarity-based method doesn't require users to sacrifice precision for explainability or settle for bolt-on explanations. With simMachines, the prediction and the explanation are one and the same.
Similarity as a method is, in itself, a valid answer to the explainability conundrum. It is a highly accurate method that provides an explanation with every prediction, based on the nearest-neighbor objects that drove the decision. Historically, however, it has seen little commercial use because of the difficulty of scaling the approach to the data volumes and speeds required by large-scale businesses.
Now that these limitations have been solved through advanced proprietary techniques that enable unlimited scale, similarity is a viable, immediately available method for explaining machine learning decisions, even when other algorithms are being used.
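To make the core idea concrete, here is a minimal sketch of nearest-neighbor prediction in which the neighbors themselves serve as the explanation. It uses scikit-learn's standard k-NN purely for illustration; it is not simMachines' proprietary method, which applies the same principle at enterprise scale.

```python
# Illustrative sketch only: a nearest-neighbor prediction where the
# neighbors behind the decision double as the explanation.
from sklearn.datasets import load_breast_cancer
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
model = KNeighborsClassifier(n_neighbors=5).fit(X, y)

query = X[:1]                        # one record to score
pred = model.predict(query)[0]       # the prediction...
dist, idx = model.kneighbors(query)  # ...and the neighbors behind it

print(f"prediction: {pred}")
for d, i in zip(dist[0], idx[0]):
    # each neighbor is a concrete, inspectable precedent for the decision
    print(f"  neighbor {i}: label={y[i]}, distance={d:.3f}")
```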
Mirroring Other Algorithms with Similarity
One way to achieve AI explainability for a neural network, decision tree, or deep learning algorithm at the local prediction level, where a business is happy with performance but cannot explain the prediction outputs, is to mirror that method's predictions with a similarity-based algorithm. simMachines' similarity method can ingest another algorithm's predictions as an input, along with the underlying data set, and then reproduce the same prediction outputs together with the explainable factors it naturally generates. This approach can be applied across an enterprise's machine learning algorithms to create and store explainability factors for ethical and regulatory compliance purposes without interrupting or replacing existing algorithms, as sketched below.
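The mirroring idea can be sketched with open-source components: the existing model's predictions become the training target for a similarity model, so every mirrored prediction arrives with nearest-neighbor explanations. This is an illustrative approximation, not simMachines' proprietary implementation.

```python
# Illustrative sketch of mirroring: a black-box model's predictions
# become the labels for a similarity model, which then explains each
# decision via nearest neighbors.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)

# 1. The existing "black box" the business already trusts.
black_box = GradientBoostingClassifier().fit(X, y)
bb_labels = black_box.predict(X)

# 2. The similarity mirror: same data, but the black box's outputs
#    replace the original labels as the training target.
mirror = KNeighborsClassifier(n_neighbors=5).fit(X, bb_labels)

# 3. Explain a single black-box decision via the mirror's neighbors.
query = X[:1]
dist, idx = mirror.kneighbors(query)
print("black-box prediction:", black_box.predict(query)[0])
print("mirror prediction:   ", mirror.predict(query)[0])
print("nearest precedents:  ", list(idx[0]))
```

Because the mirror learns to reproduce the black box rather than the original labels, it can sit alongside any existing model and supply explanations without changing that model's behavior.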
“Explainability as a Service”
This concept is a valid approach for any enterprise that wants flexibility in the methods it deploys across the business, supported by a "Center for Explainability" established to maintain corporate standards for the responsible use of AI. Business analysts and compliance officers can leverage this centralized resource to uphold ethical and legal standards, spot emerging trends and patterns, and understand algorithmic performance across the business. Large-scale businesses leveraging AI extensively across their organization should consider this type of approach as one that is practical, consistent, and available today.
Starting with a proof of concept that applies mirroring to existing algorithms is an easy way to determine whether this approach will work for your enterprise; one simple measure such a proof of concept might use is sketched below. New algorithms can then be added to the process, with reporting and analysis tools layered on top for monitoring and insight generation. simMachines is able to support large-scale enterprises with "Explainability as a Service" today.
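As a hedged illustration of what a proof of concept might measure, the sketch below computes the mirror's fidelity: the share of held-out records on which the similarity mirror reproduces the original model's prediction. The 95% threshold is an arbitrary example, not a simMachines benchmark.

```python
# Illustrative proof-of-concept check: how often does the similarity
# mirror agree with the original model on unseen data?
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, _ = train_test_split(
    X, y, test_size=0.3, random_state=0)

black_box = GradientBoostingClassifier().fit(X_train, y_train)
mirror = KNeighborsClassifier(n_neighbors=5).fit(
    X_train, black_box.predict(X_train))

# Fidelity: fraction of held-out predictions the mirror reproduces.
fidelity = np.mean(mirror.predict(X_test) == black_box.predict(X_test))
print(f"mirror fidelity on held-out data: {fidelity:.2%}")
if fidelity >= 0.95:  # example acceptance threshold, chosen arbitrarily
    print("mirroring looks viable for this model")
```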