
Cognitive Computing: The New Frontier in Machine Intelligence

As cognitive systems such as Watson emulate modes of thought, will they extend innate human cognition?

Big Data Evangelist, IBM

Cognitive computing is practical magic. If done well, it can simulate actual human thought, conversation, and efficacy so well that the man/machine distinction becomes irrelevant.

For this reason, cognitive computing should be regarded as the practical realization of the Turing test in the era of big data and cloud computing. The test that Alan Turing described in his seminal 1950 paper, “Computing Machinery and Intelligence,”1 was whether a machine could act indistinguishably from a human thinker. Turing didn’t predicate the test on the specifics of what sort of machine accomplishes this feat, how it does so, what types of cognition it emulates, or how specifically its performance of cognitive tasks is evaluated. The machine—however constituted—passes the test if it can fool a real person into believing it thinks like a real person.

What this result amounts to, in Turing’s initial conception of the test, is a type of impersonation. As laid out in his paper, the cognitive impersonation was orchestrated through natural-language conversations between the machine in question and at least one human capable of evaluating its performance. The core evaluation criterion was whether the machine’s part of the conversation feels sufficiently natural—neither programmatic nor robotic—that it appears to emanate from the mind of an actual human.

Beyond impersonation

In the modern era of big data–powered cognitive computing, merely impersonating a thinking human is beside the point. As implemented on the IBM Watson™ system, cognitive computing emulates a wide range of conscious, critical, logical, attentive, reasoning, and evaluative modes of thought. The true test is whether it can imitate these processes well enough to assist, supplement, extend, and accelerate people’s own innate cognitive powers.

Watson passes this test with flying colors. It can interact with people to adaptively learn from them, using their responses—natural language, facial expressions, voices, gestures, and so on—to validate its own understanding of various subject domains. Furthermore, Watson can learn from its environment, ingesting fresh feeds of data and analyzing them to identify correlations, trends, anomalies, and other patterns that may be too complex or obscure for humans to discern unaided. And Watson can adapt to and learn from all these inputs—human interactions and data ingests—without explicit programming, thereby mimicking how sentient organisms autonomously make sense of the world around them.
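
To make that pattern-detection idea concrete, here is a minimal, hypothetical Python sketch of one narrow ingredient: flagging anomalies in a streaming data feed by maintaining running statistics. It illustrates the general notion of learning from data without task-specific rules; it is not a description of how Watson actually works, and the class name, threshold, and sample feed are invented for illustration.

```python
# Minimal sketch: flag anomalies in a streaming data feed using a running
# mean and variance (Welford's algorithm). This is a toy illustration of
# learning from incoming data without explicit, task-specific programming;
# it is not Watson's actual method.

class StreamingAnomalyDetector:
    def __init__(self, threshold=3.0):
        self.threshold = threshold  # standard deviations that count as anomalous
        self.count = 0
        self.mean = 0.0
        self.m2 = 0.0  # running sum of squared deviations from the mean

    def update(self, value):
        """Ingest one observation; return True if it looks anomalous."""
        anomalous = False
        if self.count > 1:
            std = (self.m2 / (self.count - 1)) ** 0.5
            if std > 0 and abs(value - self.mean) > self.threshold * std:
                anomalous = True
        # Update the running statistics with the new observation.
        self.count += 1
        delta = value - self.mean
        self.mean += delta / self.count
        self.m2 += delta * (value - self.mean)
        return anomalous


if __name__ == "__main__":
    detector = StreamingAnomalyDetector(threshold=3.0)
    feed = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0, 10.1]  # hypothetical sensor feed
    for reading in feed:
        if detector.update(reading):
            print(f"Anomaly detected: {reading}")
```

In this toy setup, the detector adapts its notion of “normal” as fresh readings arrive, so the reading of 42.0 stands out without anyone having coded a rule about sensor values.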

Speculation that cognitive systems will start to offload core tasks that once required highly trained human experts is not far-fetched. As Paul Roma of Deloitte Innovation noted in a recent CIO Journal article,2 IBM is working with the Cleveland Clinic to train Watson to become board-certified in medicine. And one can easily imagine scenarios in which Watson and similar cognitive systems might also be certified as lawyers, accountants, financial planners, and other skilled professionals.

Above-average wisdom

But rest assured that no one will ever mistake Watson for the family doctor. No matter how smart they get, there’s little chance that Watson or any other cognitive system will ever have the final say on life-and-death matters. Nevertheless, cognitive systems are quite likely to be used eventually for prescreening patients and for rendering second opinions. In those scenarios, the performance of cognitive systems can be evaluated by whether they pronounce judgments that, all things considered, are as good as or better than those the average skilled human would have made.
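
To illustrate that evaluation criterion in the simplest possible terms, here is a hypothetical Python sketch that scores a system’s second opinions against an average-human baseline on cases with known outcomes. The case labels, opinions, and accuracy metric are invented purely for illustration; real clinical evaluation would be far more rigorous.

```python
# Toy sketch of the evaluation criterion described above: compare a cognitive
# system's second opinions with a human baseline on cases whose outcomes are
# already known. All data below are hypothetical.

known_outcomes  = ["benign", "malignant", "benign",    "malignant", "benign"]
system_opinions = ["benign", "malignant", "benign",    "benign",    "benign"]
human_opinions  = ["benign", "malignant", "malignant", "malignant", "benign"]

def accuracy(opinions, truth):
    """Fraction of cases where the opinion matches the known outcome."""
    return sum(o == t for o, t in zip(opinions, truth)) / len(truth)

system_score = accuracy(system_opinions, known_outcomes)
human_score = accuracy(human_opinions, known_outcomes)
print(f"System accuracy: {system_score:.2f}, average-human accuracy: {human_score:.2f}")
print("System meets the bar" if system_score >= human_score else "System falls short")
```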

Please share any thoughts or questions in the comments.

1 “Computing Machinery and Intelligence,” by A. M. Turing, Mind, 1950.
2 “Cognitive Computing Roundtable Interview, Part I,” Deloitte Insights for CIO Journal sponsored content, CIO Journal, September 2014.