
Simulating customer cognition with or without neuroscience

Big Data Evangelist, IBM

The term “artificial” simply means “created by people.” It doesn't necessarily mean “thoroughly resembling people.”

With that as context, I prefer to think of “artificial intelligence” (AI) as referring to “intelligence created by people,” not necessarily as “intelligence thoroughly resembling people’s.” It’s an important distinction to keep in mind when speculation turns to the question of whether machines will ever be fully intelligent. If we define “intelligence” as the ability to perform various types of cognitive processing, without raising the “thoroughly resembling” bar, one might argue that the question is already settled, judging by the progress in AI and cognitive computing technologies.

But raise the bar that high and one might argue that 100 percent full-fidelity AI is very far in the future. To resemble human intelligence down to the finest detail, we would need to build from scratch perfect technological replicas of the human organism, paying special attention to our nervous systems (central and peripheral) plus our organs of sensation, communication and grasping. It would be naïve to think that we owe our advanced intelligence solely to the gray matter that resides in our crania, rather than to that gray matter working in conjunction with our binocular vision, vocal tracts and prehensile hands.

Functional simulation, not literal cloning, is the heart of AI. That’s why I’m glad that the latest popular retelling of the Alan Turing saga includes the words “imitation” and “game” in its title. The point of AI is to imitate human intelligence well enough that you can fool actual humans into thinking they’re dealing with one of their own species, rather than simply a sophisticated artifact created by their species. So don’t confuse AI pioneer Turing (a mathematician crafting a theoretical framework for automated information processing) with the fictional rogue scientist Victor Frankenstein, who spawned a grotesque humanoid in his secret laboratory.

So I have to laugh when people say things like “clone your customers’ brains in a mathematics laboratory,” per this recent article. Not only does this phrase sound like an episode of “Outer Limits,” but it also implies that, on some level, you need to model the actual physical components of the brain, rather than simply the higher cognitive, affective and sensory faculties of the mind. The cited article takes the unfortunate step of showing an illustration of a brain that mixes physical structures (such as the basal ganglia) with functional capacities (such as behavioral biasing). All of this seems to imply that you can’t truly predict whether your customer will take a specific action (for example, renew) unless you have some sort of ghoulish map of what’s literally inside their head.

Most cognitive computing professionals see no earthly need to model intelligence down to the basal-ganglia level in order to make high-quality predictions of human behavior. Let’s take a breath and remind ourselves what Turing’s “imitation game” was all about: simulation of specific human behaviors, not fine-grained engineering of machines to mimic every possible human behavior. It was about engineering an information processing system whose performance on cognitive tasks is indistinguishable, to an average human judge, from how a bona fide human would perform. Hence, it would, in theory, enable a machine to impersonate a human reasonably well under specific constrained scenarios (with the impersonation contingent on the other party not seeing what’s “behind the curtain” conversing with them).

As to the matter of “clone your customers’ brains,” only a fraudster would want to build an algorithm to impersonate customers. Instead, what most organizations prefer to do is simulate customers’ decisions and behaviors under various specific scenarios. To his credit, the article’s author, Shivanku Misra, brings the discussion down to this level, stating that “based on past consumer behavior and current research (including neuromarketing research) we can put together a series of consumer behavior algorithms and create a statistical model that could mimic the way our consumers make a choice.”
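To make that concrete, here is a minimal sketch of what such a choice-mimicking statistical model might look like in practice: a multinomial logistic regression fit on past-behavior features. The feature names, the synthetic data and the three-offer setup are all hypothetical illustrations of the general technique, not something drawn from the article.

```python
# A minimal sketch of a "consumer behavior algorithm": a statistical model
# fit on past behavior that mimics how customers choose among options.
# All features and data below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

# Hypothetical past-behavior features per customer:
# [purchases_last_year, avg_basket_size, days_since_last_visit] (standardized)
X = rng.normal(size=(500, 3))

# Hypothetical observed choices among three offers (0, 1, 2), generated
# with a planted preference structure so the model has signal to learn.
true_weights = np.array([[1.0, 0.0, -0.5],
                         [-0.5, 1.0, 0.0],
                         [0.0, -0.5, 1.0]])
logits = X @ true_weights.T
y = np.array([rng.choice(3, p=np.exp(l) / np.exp(l).sum()) for l in logits])

# Multinomial logistic regression: a classic discrete-choice workhorse.
model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# "Mimicking" a choice: the probability a new customer picks each offer.
new_customer = np.array([[0.8, -0.2, 1.5]])
print(model.predict_proba(new_customer))
```

Note what the model learns: a mapping from observable behavior to choice probabilities. Nothing about neurons or basal ganglia enters the picture.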

That’s not cloning, even in a metaphorical sense. Rather, it’s more akin to the “720-degree customer view” that I discussed in this post. What that refers to is a composite portrait that includes someone’s external behavioral journey (buying, consuming, influencing, churning and so on) plus an ever-deepening picture of the internal “journey” of experiences, propensities, sentiments and attitudes that drives that behavior. In amassing an ever more intimate 720-degree view, your objective is usually to improve predictions of customer decisions and behaviors. You don’t need an X-ray of their physical innards to accomplish that.
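In code, that composite portrait is less exotic than it sounds: it is essentially a join of behavioral and attitudinal records keyed by customer. The sketch below assumes hypothetical table and column names.

```python
# A minimal sketch of assembling a "720-degree view": joining the external
# behavioral journey with internal attitudinal signals into one composite
# customer record. All names and values are hypothetical.
import pandas as pd

# External journey: what customers demonstrably did.
behavior = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "purchases_12m": [14, 2, 7],
    "support_tickets": [0, 5, 1],
    "referrals_sent": [3, 0, 1],
})

# Internal journey: sentiments and attitudes inferred from surveys,
# reviews and interaction text (scores here are invented).
attitudes = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "satisfaction_score": [0.9, 0.3, 0.7],
    "price_sensitivity": [0.2, 0.8, 0.5],
    "brand_affinity": [0.85, 0.15, 0.6],
})

# The composite portrait: one row per customer, outward behavior plus
# inner drivers, ready to feed a propensity model.
view_720 = behavior.merge(attitudes, on="customer_id", how="left")
print(view_720)
```

The “inner journey” columns come from surveys, reviews and interaction text, not from brain scans.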

What that comes down to is a discipline known as “decision science,” which draws on AI, economics, cognitive studies and other behavioral sciences. However, the discipline of neuromarketing research seems to be taking decision science to a creepy new level of invasiveness. It’s founded on the notion that mixing neurology with psychology is the key to getting inside the customers’ heads, by way of their nerve endings. As described here, it refers to “marketing research that studies consumers’ sensorimotor, cognitive and affective response to marketing stimuli.”

Neuromarketing seems to bring AI down to the level of Pavlovian or Skinnerian conditioned responses, leveraging functional magnetic resonance imaging, electroencephalography, steady state topography and other technologies to measure brainwaves and other physiological states associated with customer responses, sentiments and propensities. I assume that practitioners in this field are also using wearable devices to pull even more intimate data from inside customers’ mind-body matrices.

Would marketers really gather reams of low-level biometric data from customers, even if privacy sensitivities didn’t stop them cold? And would they really leverage it to build scientifically valid portraits of their customers’ neurological and psychological plumbing? Obviously not. You can predict customer propensities quite well in many scenarios with external behavioral data alone. And you can build high-quality cognitive models of their decision processes without knowing whether your advertisements cause them to salivate uncontrollably.
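As a minimal sketch of that claim, consider a renewal-propensity model trained only on external behavioral features and scored on a holdout set. The data here is synthetic and the features are hypothetical stand-ins for what a CRM system would actually supply.

```python
# A minimal sketch: a renewal-propensity model built from behavioral
# data alone, evaluated on a holdout set. Synthetic, hypothetical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)

# Hypothetical behavioral features: usage frequency, tenure, recent activity.
X = rng.normal(size=(2000, 3))

# Synthetic "renewed" labels driven mostly by observable behavior.
prob_renew = 1 / (1 + np.exp(-(1.2 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2])))
y = rng.binomial(1, prob_renew)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Holdout discrimination from behavior alone, no biometrics required.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC from behavior alone: {auc:.2f}")
```

When behavior carries most of the signal, as it does in many real scenarios, the holdout score is already strong before a single brainwave is measured.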

Insight into people’s minds doesn’t mean you’re literally looking into their heads. The most important pieces of the customer cognitive puzzle are on the surface, if you know how to fit them together.