No More Human than C-3PO

George the Robot is playing hide-and-seek with scientist Alan Schultz.

For a robot to actually find a place to hide, and then to hunt for its human playmate, is a new level of human interaction. The machine must take cues from people and behave accordingly.

This is the beginning of a real robot revolution: giving robots some humanity.

"Robots in the human environment, to me that's the final frontier," said Cynthia Breazeal, robotic life group director at the Massachusetts Institute of Technology. "The human environment is as complex as it gets; it pushes the envelope."

"Robots have to understand people as people," Breazeal said. "Right now, the average robot understands people like a chair: It's something to go around."

Robots that can connect with humans in a more "thoughtful" way will first appear in the most human-oriented fields: those that require special care in dealing with the elderly, the young and the disabled.

As a machine, George is not a breakthrough. He's an off-the-shelf robot reprogrammed at the Navy Center for Applied Research in Artificial Intelligence, which Schultz directs.

When they play hide-and-seek, George doesn't hide very well, and it takes him longer to find Schultz than vice versa, but the fact that he can do either at all is what makes him special.

"We have only scratched the surface," said Sebastian Thrun, the Stanford Artificial Intelligence Lab director whose self-driving robot car won the Defense Department's Grand Challenge desert race last year. He predicted that 10 years from now robots will roam the health care system and that multi-armed robots will be doing the cleaning in our homes. "There will be a lot of personalized devices," he said.

That's a big switch. The latest commercial home robots - the vacuuming iRobot Roomba, and its floor-cleaning cousins - are designed to work best when people leave the room. But the promise of robots for scientists is represented by Rosie, the vacuuming robot of "The Jetsons" cartoon series.

"If Rosie is going to be around and in your face, it would be good if the interaction is natural and easy," said Rod Brooks, director of MIT's artificial intelligence lab.

So after spending decades tinkering with wiring, some roboticists started studying humans, and the new field of human-robot interaction was born. Unlike the rest of robotics, many of its leaders are women. It has social scientists, language specialists, medical doctors and even ethicists who wonder if putting robots into places like nursing homes is the right thing to do.

That's a big change from 50 years ago, when the field of artificial intelligence was founded at a conference at Dartmouth College. The experts focused on puzzles and chess and skipped over perception: a sense of where you are, what's around you and how to interact.

"They all thought perception was easy - a two-year-old could do that - but smart people play chess," said Brooks, co-founder of iRobot Corp. "They all missed it and Hollywood missed it. The stuff a two-year-old could do, that's the hard stuff."

One preschooler-type skill, the ability to take someone else's perspective, "turned out to be a very important capability that we needed on our robots so that they could really work comfortably with humans," said Schultz.

Thus, Schultz hopes in the next year or so to have a robot that could, like an old-time movie detective working a case, tail a person unseen through the naval research lab campus.

Similarly, researchers are teaching robots language reasoning - not just dumping a dictionary into a database - along with gestures and eye contact, so machines can understand the many ways people communicate. At NASA, astronauts are working with Schultz and a spacewalking prototype called Robonaut to make machines understand when an astronaut points to something and says "there."

We as humans understand that, but getting robots to put those clues together is proving to be a big leap, he said. And then there are subtle clues that humans pick up without even knowing it, such as nods and eye contact.

Research scientist Candy Sidner at the Mitsubishi Electric Research Lab in Cambridge, Mass., found that people respond better to more animated robots - those that nod, move and point. So she developed Mel, a pointing, nodding penguin robot. You nod at Mel, Mel nods back.

"It's absolutely very compelling. People tell me, 'I like Mel because he's really kind of cute,' " Sidner said.

How should a robot look? There's debate on that. On one extreme are the stroke-therapy robots of MIT scientists Neville Hogan and Hermano Igo Krebs. Those look like exercise machines with video game screens. They guide the arms and legs of paralyzed stroke patients through physical therapy, and the patients don't even realize they are robots.

On the other end of the spectrum are David Hanson of Dallas and Osaka University professor Hiroshi Ishiguro, whose robots look creepily human. Ishiguro's robot Geminoid looks just like Ishiguro.

Such resemblances run up against what roboticists call the "uncanny valley." The idea is that people respond better to robots the more closely they resemble humans - up to a point. If the resemblance is too good, people "are weirded out," Sidner said. At that point, acceptance plummets. That's why Sidner prefers her penguin robot.

Sherry Turkle at MIT worries about robots that seem too human.

"We're cheap dates," she said. "If an entity makes eye contact with you, if an entity reaches toward you in friendship, we believe there is somebody there . . . But that doesn't mean that there is. That just means that our Darwinian buttons are being pushed."

Turkle, who directs the MIT Initiative on Technology and Self, fears people will be subconsciously tricked into giving robots more credit than they deserve. Her point is that when you are sick, hurt, or elderly, "you really do want a person," not a robot.

Unfortunately, there's a shortage of people working in nursing homes and caring for old people and the disabled, said Maja Mataric, director of the University of Southern California's Center for Robotics and Embedded Systems. The average stroke victim gets 39 minutes of active exercise a day when six hours a day is needed, she said, so robots can free up the few nurses for more nurturing activities.

Mataric adjusts her robots' personalities to fit the needs of stroke patients - nurturing buddy or goal-pushing coach.

And low-functioning autistic children actually seem to relate better to robots than to humans, Mataric said. "You'll see a child smile that has never smiled before. No one knows why it happens."

The scientists trying to engineer robots to work with humans are learning more than they expected. They have a new appreciation for our own unique abilities.

Said Deb Roy, director of MIT's Cognitive Machines Group: "It's not until you try to build a machine that does the same task (that people do) . . . that you realize how incredibly hard it is."
