Cognilytica Research’s January 2018 briefing note on simMachines highlights the black-box challenge of today’s AI and machine learning technologies and the fundamental problems that a lack of explainability causes.
Similarity is a machine learning method that uses a nearest neighbor approach to measure how similar two or more objects are to one another, based on algorithmic distance functions.
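The nearest-neighbor idea above can be sketched in a few lines: define a distance function over feature vectors, then find the cataloged object with the smallest distance to a query. This is a minimal illustration, not simMachines’ implementation; the customer features (recency, frequency, spend) are hypothetical.

```python
import math

def euclidean(a, b):
    """Distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def nearest_neighbor(query, catalog):
    """Return (index, distance) of the catalog item closest to the query."""
    distances = [(i, euclidean(query, item)) for i, item in enumerate(catalog)]
    return min(distances, key=lambda pair: pair[1])

# Hypothetical customer feature vectors: [recency, frequency, spend]
customers = [
    [2.0, 10.0, 500.0],
    [30.0, 1.0, 40.0],
    [3.0, 9.0, 485.0],
]
query = [2.5, 9.5, 490.0]
idx, dist = nearest_neighbor(query, customers)  # closest existing customer
```

Because the method answers “which known examples is this most like, and how far away are they?”, its predictions come with a built-in explanation, which is what distinguishes it from opaque models.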
Market segmentation is one of the most fundamental elements of business strategy. Firms group customers to understand their preferences, manage relationships with them, improve product and service offerings, and assess risk.
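Grouping customers into segments is often done with simple clustering. The sketch below uses a tiny k-means loop over hypothetical [annual spend, purchase frequency] vectors; it is an illustrative assumption about how a firm might bundle customers, not a description of any particular vendor’s method.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Tiny k-means: partition feature vectors into k segments."""
    random.seed(seed)
    centers = random.sample(points, k)
    groups = [[] for _ in range(k)]
    for _ in range(iters):
        # Assign each point to its closest center.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[j].append(p)
        # Move each center to the mean of its assigned points.
        for j, g in enumerate(groups):
            if g:
                centers[j] = [sum(col) / len(g) for col in zip(*g)]
    return centers, groups

# Hypothetical customers: [annual spend, purchase frequency]
data = [[100, 2], [120, 3], [110, 2], [900, 30], [950, 28], [880, 31]]
centers, segments = kmeans(data, k=2)  # low-spend vs high-spend segments
```

Static segmentation stops here; the approaches discussed later in this piece re-score customers continuously as new behavior arrives.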
It’s well known that the “Amazon effect” is wreaking havoc on retailers’ ability to attract and retain shoppers, largely because they cannot predict what motivates customers to buy or stay relevant in the moment of interaction.
Marketers who rely on static segmentation are missing opportunities: it is not predictive of customer behavior and fails to take customer context into account.
Most marketers are only beginning to explore machine learning applications, yet machine learning is already delivering tremendous gains in analytic efficiency and precision.
Welcome to the rebirth of AI. Computational experts have taken inspiration from cognitive neuroscience for decades, but technical advances — the proliferation of data, efficient storage methods, faster processing, and accessible scripting languages — have brought classic algorithms to the forefront of cutting-edge technology.
Applied effectively, machine learning helps level the playing field for all marketers because it can be deployed quickly, at speed and scale, within critical areas of the business – such as preventing high-value customers from leaving, or attracting and delighting them through more relevant dialogs.
Demand for explainable AI has ramped up over the last year from a variety of perspectives. DARPA, the Defense Advanced Research Projects Agency, for example, has been calling for more explainable machine learning models that human users can understand and trust.