December 17, 2008 weblog
MIT's Huggable Robot Teddy Enhances Human Relationships
(PhysOrg.com) -- It's probably the most sophisticated teddy bear ever designed, but that doesn't stop MIT's companion robot called "the Huggable" from being pretty adorable, as well. The Huggable is the latest project to come from the MIT Media Lab, and could one day be used for healthcare, education, and social communication applications.
As the lab explains, the Huggable is designed to be more than a fun robotic companion. Its main purpose is to enhance human relationships by functioning as a visual tool for long-distance communication. Grandparents who want to talk to young grandchildren, teachers instructing students, or healthcare providers communicating with patients could all enrich their interactions using the robot.
The Huggable features more than 1500 sensors on its skin, along with quiet actuators, video cameras in its eyes, microphones in its ears, a speaker in its mouth, and an embedded PC with 802.11g wireless networking.
"The movements, gestures and expressions of the bear convey a personality-rich character, not a robotic artifact," the MIT Media Lab's Web site explains. "A soft silicone-based skin covers the entire bear to give it a more lifelike feel and heft, so you do not feel the technology underneath. Holding the Huggable feels more like holding a puppy, rather than a pillow-like plush doll."
The Huggable connects to a Web interface that lets a remote person not only see the person on the other end through the bear's camera eyes, but also monitor the robot's behavior through streaming audio and video. The remote person can also control the robot in several ways. A grandparent, for instance, can type text for the robot to speak via speech synthesis, or command it to make various sounds, such as giggling. The grandparent can then watch the child's facial reaction on screen and listen to the response, as well as watch a 3D virtual model of the robot and an animated cartoon that indicates gestures, such as when the robot is being bounced or rocked. Overall, the robot lets the grandparent see and hear the child through the eyes and ears of the Huggable.
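The lab has not published the details of this Web interface, so the following is only a minimal sketch of how such a remote-puppeteering channel might be structured; the command names and fields are hypothetical, not the Huggable's actual protocol.

```python
import json
from dataclasses import dataclass

# Hypothetical operator commands; the real Huggable interface is not
# publicly documented, so these names and fields are illustrative only.
@dataclass
class SpeakCommand:
    text: str   # text the bear should render via speech synthesis

@dataclass
class SoundCommand:
    name: str   # canned sound effect, e.g. "giggle"

def encode(command) -> str:
    """Serialize an operator command for sending over the wireless link."""
    payload = {"type": type(command).__name__, **command.__dict__}
    return json.dumps(payload)

if __name__ == "__main__":
    # A grandparent types a greeting, then triggers a giggle sound.
    print(encode(SpeakCommand(text="Hello! I missed you.")))
    print(encode(SoundCommand(name="giggle")))
```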
The robot can operate in either a fully autonomous or a semi-autonomous mode. The Huggable can be programmed to remember the faces of specific people, and can then track those faces as they move without external control. In semi-autonomous mode, a user can steer the robot's head vertically and horizontally with a joystick.
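One way to picture the hand-off between the two modes is a simple blending rule: the joystick takes priority when the operator moves it, and the face tracker drives the head otherwise. The sketch below is purely illustrative and is not the Huggable's actual controller.

```python
def head_command(face_offset, joystick, gain=0.5, deadband=0.1):
    """
    Blend autonomous face tracking with joystick control of the head.

    face_offset: (dx, dy) of the tracked face from image center, in [-1, 1]
    joystick:    (pan, tilt) operator input in [-1, 1]; near zero means
                 "let the robot track on its own"
    Returns (pan_rate, tilt_rate) commands for the neck actuators.
    """
    jx, jy = joystick
    if abs(jx) > deadband or abs(jy) > deadband:
        # Semi-autonomous mode: the operator's joystick takes priority.
        return jx, jy
    # Fully autonomous mode: a simple proportional controller turns the
    # head toward the detected face to keep it centered in view.
    dx, dy = face_offset
    return gain * dx, gain * dy

if __name__ == "__main__":
    print(head_command(face_offset=(0.4, -0.2), joystick=(0.0, 0.0)))  # tracks the face
    print(head_command(face_offset=(0.4, -0.2), joystick=(0.8, 0.0)))  # operator override
```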
The Huggable was originally based on the concept of therapeutic companion animals, and touch is central to its design. The robot's neural network can recognize nine different classes of touch, such as tickling, poking, and scratching, and each class is further divided into six response types, such as "teasing pleasant" and "punishment light." Based on the response type, the robot interprets the intent of the touch and decides how to respond.
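Conceptually, the touch system classifies a contact, interprets its intent, and then selects a behavior. The sketch below illustrates that mapping; apart from the few examples named above, the class names, response labels, and behaviors are illustrative placeholders, not the lab's actual taxonomy.

```python
# Placeholder taxonomy: the article says there are nine touch classes and
# six response types but names only a few, so most entries are made up here.
TOUCH_CLASSES = ["tickle", "poke", "scratch", "pat", "pet", "rub",
                 "squeeze", "slap", "contact"]
RESPONSE_TYPES = ["teasing_pleasant", "teasing_painful", "touch_pleasant",
                  "touch_painful", "punishment_light", "punishment_hard"]

def respond(touch_class: str, response_type: str) -> str:
    """Pick a behavior given a classified touch and its interpreted intent."""
    if touch_class not in TOUCH_CLASSES or response_type not in RESPONSE_TYPES:
        return "idle"
    if response_type.endswith("pleasant"):
        return "nuzzle_and_giggle"       # lean into pleasant touch
    if response_type.startswith("punishment"):
        return "whimper_and_withdraw"    # pull away from rough handling
    return "orient_toward_touch"         # neutral: turn toward the contact

if __name__ == "__main__":
    print(respond("tickle", "teasing_pleasant"))   # -> nuzzle_and_giggle
    print(respond("slap", "punishment_light"))     # -> whimper_and_withdraw
```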
Currently, the MIT Media Lab is working to create a series of Huggables for real-world trials. The Huggable was created using Microsoft Robotics Studio, and the project is supported in part by a Microsoft iCampus grant.
More information: MIT Media Lab
© 2008 PhysOrg.com