Explainable AI is Responsible AI

Cognilytica Research's January 2018 briefing note on simMachines highlights the black-box challenge of today's AI and machine learning technologies and the fundamental problems that a lack of explainability causes.

Increase Campaign Precision and Relevancy with Dynamic Predictive Audiences

simMachines, Inc., the leader in Explainable AI / Machine Learning applications, announces the launch of its latest product, Dynamic Predictive Audiences, designed for data companies, publishers, and media platforms.

Dynamic Predictive Segmentation is the Future of Marketing

Today’s digitally empowered customers have high expectations for relevant, highly personalized customer experiences. Companies must keep pace with these growing demands or risk being left behind by their more perceptive competition.

Similarity Based Machine Learning Provides AI Transparency and Trust

Similarity is a machine learning method that uses a nearest-neighbor approach to measure how similar two or more objects are to each other, based on algorithmic distance functions.
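The nearest-neighbor idea can be sketched in a few lines. This is a minimal illustration, not simMachines' implementation (their distance functions are not described here); the customer features (age, tenure in years) and labels are hypothetical. The neighbors returned for a query are themselves the explanation: a prediction can be justified by pointing at the most similar known examples.

```python
import math

def euclidean(a, b):
    # A simple distance function: smaller distance = more similar.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def k_nearest(query, examples, k=3):
    # Rank labeled examples by distance to the query; the nearest
    # neighbors double as the "why" behind any prediction made from them.
    ranked = sorted(examples, key=lambda ex: euclidean(query, ex["features"]))
    return [(ex["label"], round(euclidean(query, ex["features"]), 2))
            for ex in ranked[:k]]

# Hypothetical customers: features are (age, tenure_years).
customers = [
    {"features": (25, 1.0), "label": "churned"},
    {"features": (54, 8.5), "label": "retained"},
    {"features": (30, 1.5), "label": "churned"},
    {"features": (48, 7.0), "label": "retained"},
]

# A 28-year-old customer with 1.2 years of tenure most resembles
# other young, short-tenure customers who churned.
neighbors = k_nearest((28, 1.2), customers, k=2)
```

Any distance function can be swapped in for `euclidean` (cosine, Manhattan, a learned metric) without changing the transparency property: the model's output is always traceable to concrete, inspectable neighbors.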

5 Market Segmentation Challenges That Are About to Be Solved

Market segmentation is one of the most fundamental elements of business strategy. Firms group customers to understand their preferences, manage relationships with them, improve product and service offerings, and assess risk.

AI Enabled Customer Segmentation Will Transform Marketing

Marketers are missing opportunities when they rely on static segmentation, which does not predict customer behavior and fails to take customer context into account.

Machine Learning with Transparency Changes the Art of the Possible in Marketing

Most marketers are just beginning to explore machine learning applications, yet machine learning is already delivering tremendous gains in analytic efficiency and precision.

Marketers Need Granular Transparency Behind Machine Learning Predictions

Demand for explainable AI has ramped up over the last year from a variety of perspectives. DARPA, the Defense Advanced Research Projects Agency, for example, has been calling for more explainable machine learning models that human users can understand and trust.