A hand to hold: Giving Watson a body (and a soul)

If the first things that come to mind when you think of AI assistants are the likes of Amazon Alexa or Google Home, it's time to learn about embodied cognition: AI that can physically interact with its environment. A year ago, IBM researchers did just that and brought Watson services into the physical world with TJBot, a DIY robot pal anybody can build and program to do things such as order your Uber or play Loteria with you.

After quietly making the rounds of maker faires and hackathons around the world, TJBot building kits recently became available to the general public for $125, possibly making them the coolest holiday gift for the geek in your life, your product or dev team, or maybe just yourself when the tinkering mood strikes.

Harnessing open source, Watson, and the cloud, TJBot gives you full control over the hardware and software, letting you put your own stamp on your creation. You have direct access to text transcripts and conversation intents, and from there you can connect them to third-party services ranging from Spotify to Uber to The Weather Company and bring them to life in about 15 minutes.
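To give a sense of how little glue code that takes, here is a minimal sketch using the open source tjbot Node.js library (written as TypeScript here; the package ships without type definitions, and every credential value is a placeholder you would replace with your own IBM Cloud keys):

```typescript
// Minimal sketch: listen for speech, inspect the raw transcript, and hand it
// off to whatever third-party service you like. The `tjbot` package ships
// untyped, so we load it with require().
const TJBot = require('tjbot');

const hardware = ['microphone', 'speaker'];
const credentials = {
  // Placeholder IBM Cloud credentials -- substitute your own.
  speech_to_text: { username: 'STT_USERNAME', password: 'STT_PASSWORD' },
  text_to_speech: { username: 'TTS_USERNAME', password: 'TTS_PASSWORD' },
};

const tj = new TJBot(hardware, { robot: { gender: 'male' } }, credentials);

// Every utterance arrives as a plain-text transcript you fully control.
tj.listen((transcript: string) => {
  console.log('TJBot heard:', transcript);
  if (transcript.toLowerCase().includes('weather')) {
    // This is where you'd call out to a third-party API such as
    // The Weather Company before answering out loud.
    tj.speak('Let me check the forecast for you.');
  }
});
```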

Of course, the real fun is in the customization. TJBot programmers use Watson services such as Speech to Text, Text to Speech, Conversation, Visual Recognition and Tone Analyzer to build open source recipes to share with the GitHub community. Make TJBot recognize your face using the Watson Visual Recognition service and grant or deny you access, as is the case with the TJ-1000 Security Bot. Let TJBot run your dishwasher using the Watson Internet of Things (IoT) Platform, or connect several TJBots and let them chat with each other.
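For a taste of the vision recipes, here is a hedged sketch of a TJ-1000-style gatekeeper. It assumes the tjbot library's camera support via tj.see(), which photographs the scene and resolves with Watson's classifications; the credentials and the 0.5 confidence threshold are placeholders of our own choosing, not the recipe's exact code:

```typescript
// Hypothetical TJ-1000-style gatekeeper: snap a photo, ask Watson Visual
// Recognition what it sees, and grant or deny access accordingly.
const TJBot = require('tjbot');

const tj = new TJBot(['camera', 'led', 'speaker'], {}, {
  // Placeholder credentials -- substitute your own IBM Cloud keys.
  visual_recognition: { api_key: 'YOUR_VISUAL_RECOGNITION_KEY' },
  text_to_speech: { username: 'TTS_USERNAME', password: 'TTS_PASSWORD' },
});

// tj.see() takes a picture and resolves with Watson's classifications,
// each a { class, score } pair.
tj.see().then((objects: Array<{ class: string; score: number }>) => {
  const person = objects.find((o) => o.class === 'person' && o.score > 0.5);
  if (person) {
    tj.shine('green'); // LED green: access granted
    tj.speak('Access granted. Welcome back.');
  } else {
    tj.shine('red');   // LED red: access denied
    tj.speak('Access denied.');
  }
});
```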

"I'm an open source project designed to help you access Watson services in a fun way, I can listen, speak, see, wave my arm, dance to music and shine.” - TJBot

Want to see social sentiment in action? This recipe fetches tweets for a given keyword through the Twitter API, then uses the Watson Tone Analyzer service to shine TJBot's LED in different colors based on the emotions present in them.
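Stripped to its essentials, that flow looks roughly like this. The condensed sketch below assumes the tjbot and twitter npm packages; the search keyword, tweet count, and emotion-to-color mapping follow the spirit of the published recipe, and the credential values are placeholders:

```typescript
// Condensed sketch of the sentiment recipe: fetch tweets for a keyword,
// analyze their combined emotional tone, and shine the LED to match.
const TJBot = require('tjbot');
const Twitter = require('twitter');

const tj = new TJBot(['led'], {}, {
  // Placeholder Watson credentials -- substitute your own.
  tone_analyzer: { username: 'TONE_USERNAME', password: 'TONE_PASSWORD' },
});

const twitter = new Twitter({
  // Placeholder Twitter API credentials.
  consumer_key: 'YOUR_CONSUMER_KEY',
  consumer_secret: 'YOUR_CONSUMER_SECRET',
  access_token_key: 'YOUR_ACCESS_TOKEN_KEY',
  access_token_secret: 'YOUR_ACCESS_TOKEN_SECRET',
});

// Map each emotional tone to an LED color.
const toneColors: { [toneId: string]: string } = {
  anger: 'red',
  joy: 'yellow',
  fear: 'magenta',
  disgust: 'green',
  sadness: 'blue',
};

twitter.get('search/tweets', { q: 'christmas', count: 50 }, (error: any, tweets: any) => {
  if (error) throw error;
  // Concatenate the tweet texts and hand them to Watson Tone Analyzer.
  const text = tweets.statuses.map((s: any) => s.text).join(' ');
  tj.analyzeTone(text).then((response: any) => {
    // Shine the color of the strongest emotion found, or white if none.
    const strongest = response.document_tone.tones
      .slice()
      .sort((a: any, b: any) => b.score - a.score)[0];
    tj.shine(strongest ? toneColors[strongest.tone_id] || 'white' : 'white');
  });
});
```

Swap in any keyword you like and TJBot becomes a live mood ring for Twitter.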

TJBot technical lead Victor Dibia from the IBM Cognitive Environments Lab notes that the team has been working on improving visual recognition capabilities using the camera sensor, but the most important efforts are geared toward improving the learning experience and, of course, dreaming up new recipes for both work and play.

See how, in only 15 minutes, you can download the design files and 3D print or laser-cut your own TJBot.