Cognitive computing: Programming the artificial mind

Big Data Evangelist, IBM

A mind is a terrible thing on which to waste CPU cycles. The performance of cognitive computing applications depends on the extent to which you have the most efficient algorithms, have parallelized them to the max, and have a high-performance big data and analytics platform on which to run them.

Artificial intelligence is all about invoking computational models that emulate what Homo sapiens do naturally. Entrepreneur Elon Musk darkly refers to this as "summoning the demon." I chuckle at that phrase and note its eerie parallel to "invoking the daemon": the former is what APIs enable, while the latter is, per Wikipedia, "a computer program that runs as a background process, rather than being under the direct control of an interactive user."

I also note its parallel with the "calculated picture of mind" invoked in this poem, which I composed about 10 years ago (and in which I hold copyright):


One ought to thank Planck for the thought
the infinitesimal's not
a fathomless bottomless well
but a plot of versatile dots.

One ought to toast Hearst for the screen
that lays down the points in a clean
mist of crisp pixie light and strips
by the millions milled by machine.

And nod to Turing for blurring
the point where the strip takes sense and
base elements assemble the
cells and scenes and trick behind
the calculated picture of mind.

—James Kobielus

When you want to take artificial intelligence out of the realm of imagination and poetry and bring it squarely into practical reality, you need computational tools. And if you want the workload scalability of "millions milled by machine," the tools need to help your cognitive application developers write the leanest models possible. They need development frameworks, languages and libraries for building and tuning neural networks and other cognitive constructs for the most efficient parallel processing of the "plot of versatile dots," which could be individual data inputs or the outputs of the nodes within a vast artificial neural network.
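To make that concrete, here's a hedged, illustrative sketch (not any particular framework's API) of why neural networks lend themselves to that kind of parallelism: a whole layer's worth of those "versatile dots" collapses into one vectorized matrix operation, the kind of workload parallel hardware mills by the millions. The shapes and values below are purely hypothetical.

```python
import numpy as np

# Hypothetical toy example: one dense neural-network layer processing a
# large batch of inputs at once. Shapes and values are illustrative only.
rng = np.random.default_rng(0)

batch = rng.random((100_000, 64))  # a hundred thousand input "dots", 64 features each
weights = rng.random((64, 16))     # the layer's learned parameters
bias = np.zeros(16)

# One vectorized matrix multiply stands in for a hundred thousand
# independent per-input loops; this is exactly the shape of work that
# parallel big data and analytics platforms accelerate.
activations = np.maximum(0, batch @ weights + bias)  # ReLU nonlinearity

print(activations.shape)
```

In a real cognitive application, layers like this are stacked and tuned by a framework, but the underlying arithmetic stays this lean and this parallelizable.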

You can acquire these tools from various sources, though some may be better for "blurring the point" where the cognitive algorithms mathemagically begin to ace the fabled Turing test. I call your attention to IBM Watson Developer Cloud, which provides developers with access to field-proven languages, algorithms and data, as well as a cloud platform on which to execute the models developed.

Another resource worth exploring is this recent article, which describes a dozen other cognitive tools and open-source projects. Among them is the Blue Brain Project, in which IBM is partnering with a Swiss research university, École Polytechnique Fédérale de Lausanne (EPFL). Under the project, EPFL has built a neurologically inspired massively parallel "virtual brain" in a supercomputer, which, in this case, is the IBM-supplied BlueGene/L.

What you notice in all this diversity is that there are as many approaches for building computational applications that learn from data and automate cognitive processes as there are for building traditional application logic. If you're interested in a related trend, the move toward probabilistic programming languages, I also urge you to check out this recent post I wrote.
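To give a flavor of the probabilistic-programming idea without tying it to any particular language's API, here's a hedged toy sketch in plain Python: you write the generative model as ordinary code, and a generic inference routine (here, a deliberately crude rejection sampler) runs it backward from the observed data. Everything in it is illustrative, not a production technique.

```python
import random

def model():
    """Generative model written as ordinary code:
    draw a coin's unknown bias, then flip it ten times."""
    bias = random.random()  # prior: bias ~ Uniform(0, 1)
    heads = sum(random.random() < bias for _ in range(10))
    return bias, heads

# Suppose we observed 8 heads in 10 flips and want the bias back.
observed_heads = 8

random.seed(42)
accepted = []
while len(accepted) < 2000:
    bias, heads = model()
    if heads == observed_heads:  # condition on the observed data
        accepted.append(bias)

# The accepted draws approximate the posterior over the coin's bias;
# analytically, the posterior mean is (8 + 1) / (10 + 2) = 0.75.
posterior_mean = sum(accepted) / len(accepted)
print(round(posterior_mean, 2))
```

Real probabilistic programming languages replace the rejection loop with far smarter inference engines, but the division of labor is the same: the model is a program, and inference is someone else's problem.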

What these resources show is that there are many ways to liberate the artificial mind that's pent up inside your cognitive computing infrastructure. It's a vessel with contents as big as all the data, as sophisticated as all the analytics, and as agile as all the programming wizardry you pour into it.

Like any potent pixie, it will grant you an unlimited number of wishes, as long as you know how to rub its lamp the right way.