
Is “embodied cognition” the future of AI?

Senior Story Strategist, IBM

As happens so often, IBM is quietly laying the groundwork for the future. 

A recent step toward that future is TJBot, an unassuming, do-it-yourself cardboard robot that opens a window into what AI researchers are calling “embodied cognition.” Historically, the notion of embodied cognition referred exclusively to living creatures, and in particular to the idea that a creature’s physical presence shapes its cognition. Scientific American describes it as “the idea that the mind is not only connected to the body but that the body influences the mind.” Those influences on our minds come from our systems of perception, movement and interaction, which, taken together, add up to a set of deep-seated assumptions about how the world works.

What does that have to do with a cardboard robot? For starters, it’s no ordinary cardboard robot. TJBot combines some of IBM’s most innovative technology, including Watson and IBM Cloud. It runs on a Raspberry Pi, is trained through machine learning, and can include a servo-powered arm, a camera, and a microphone that can tie into Watson’s speech-to-text capabilities. (IBM is making the bots available to anyone interested in putting them through their paces.)
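
To make the speech-to-text piece concrete, here is a minimal sketch of the kind of call a TJBot project might make to transcribe a short microphone recording. It assumes the ibm-watson Python SDK, placeholder credentials, and a pre-recorded WAV file standing in for the microphone capture; the official TJBot recipes use IBM’s Node.js tooling, so treat this only as an illustration of the service involved.

    # A minimal sketch: send a short audio clip to Watson Speech to Text.
    # Assumes the ibm-watson Python SDK (pip install ibm-watson); the API key,
    # service URL, and clip.wav below are placeholders, not real values.
    from ibm_watson import SpeechToTextV1
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    speech_to_text = SpeechToTextV1(authenticator=IAMAuthenticator("YOUR_API_KEY"))
    speech_to_text.set_service_url("YOUR_SERVICE_URL")

    # clip.wav stands in for audio captured by the bot's microphone.
    with open("clip.wav", "rb") as audio_file:
        result = speech_to_text.recognize(
            audio=audio_file,
            content_type="audio/wav",
        ).get_result()

    # Print the top transcript for each recognized utterance.
    for chunk in result.get("results", []):
        print(chunk["alternatives"][0]["transcript"])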

The robot isn’t just an effort to combine and democratize access to the various technologies. It’s also an attempt to understand how that technology functions — and how it changes — in the real world. As Maryam Ashoori of IBM has noted, TJBot moves us closer to a world where sophisticated cognition becomes independently mobile and able to influence the physical world. TJBot and the bots of the near future are beginning to master independent physical action and exploration — moving from cognitive drones to caregiver robots for the elderly. 

Grady Booch and Chris Codella of IBM are demonstrating just how important that distinction is. Their recent keynote at SpaceCom pointed out that with embodied cognition, we "take the ability of [a] system to understand and reason, and draw it closer to the natural ways in which humans live and work." In other words, if a robot is capable of acquiring and integrating new information as it explores and alters its environment, then its cognition will inevitably be colored — as ours is — by what it can and can’t perceive and do.

As more physical capabilities and more cognition come online within bots, we’ll begin to ask a new set of questions, ranging from the concrete to the abstract:

  • How do we integrate GPS and motion data to make better guesses about the kind of cognition a user needs at a given moment? For example, in a given context, does “right” mean correct or the opposite of left?
  • How do we create AI backbones that can quickly identify contexts and switch between them? Inevitably, bots out in the real world will face a wider array of contexts than an AI that’s stuck in a home or office. In those real-world environments, how does a given command get routed correctly to navigation versus threat assessment versus shopping versus French translation versus the cognitive life coach? (A toy sketch of this routing problem follows the list.)
  • How can we deploy and manage so-called swarms of bots in ways that balance coordination with dynamic learning? Can a swarm of drones searching for earthquake survivors adapt in real time to an aftershock that disables half the fleet?
  • For bots relying on black-box approaches such as deep learning, how will we assess their formation of concepts and categories based on interactions with the environment? How would a bot with a different form factor and different sensors create concepts and categories differently? 
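
As a toy illustration of the routing question in the second bullet, here is a hypothetical sketch of the simplest possible dispatcher: keyword matching that sends an utterance to one context or another. Nothing here is part of TJBot or Watson; a real system would rely on trained intent classifiers rather than hand-written keyword lists, but the sketch shows where the “which context am I in?” decision has to live.

    # A purely hypothetical sketch of routing a command to a context handler.
    # Real systems would use trained intent classifiers, not keyword lists.
    CONTEXTS = {
        "navigation": {"go", "turn", "left", "right", "forward"},
        "translation": {"translate", "french", "say"},
        "shopping": {"buy", "order", "price"},
    }

    def route(utterance: str) -> str:
        """Return the context whose keywords best match the utterance."""
        words = set(utterance.lower().split())
        scores = {name: len(words & keys) for name, keys in CONTEXTS.items()}
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

    print(route("turn right at the next corridor"))              # -> navigation
    print(route("please translate this sentence into french"))   # -> translation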

The principles of embodied cognition tell us that the answers to those questions will depend to a great degree on the embodiment of the particular agent: its form, its mobility, its instrumentation, and its ability to manage its surroundings, whether those surroundings are a post office, the inside of a nuclear reactor or the surface of Mars. In other words, cognitive robots will begin to learn as we do, by acting on the world itself within their own particular sensory and physical constraints. Those constraints shape not just what we know, but who we are.

From the cardboard to the cosmic, IBMers are eagerly exploring this open terrain in AI research. These explorations may well lead us to the most intriguing question of them all: as our robots start to learn about the world, what will we start to learn about ourselves?

Visit us to learn more about IBM's latest research into AI and cognitive computing.