The i-1 in its full glory. It can readily balance by itself, so we have it freestanding for these pictures; normally we do keep a tether on it, just to be safe. We can't just go down to Den-den town and buy a new one if it breaks, after all.
The robot is human proportioned: it's human sized, with weight, power, joint movement ranges and speeds all similar to a human's. On the sensory side we have two eyes, with two cameras each (one wide-angle and one telephoto, to mimic our own visual capabilities to some degree), and we have stereo microphones for audio. The body is of course riddled with force and angle sensors. The aim is a system with capabilities comparable to a human's.
Projects include human-like balancing and walking (current bipedal robots do not really walk or balance the way humans do, and it shows), reaching, manipulation and gesticulation. I'm involved on the sensory side, doing early visual and auditory attention. Other people are working on object segmentation and recognition. And most everybody is interested in how to get the robot to learn new capabilities by itself or by example.
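To give a flavor of what "early visual attention" means in practice: a classic bottom-up model (in the spirit of Itti and Koch's saliency maps) scores each image location by how much it differs from its surroundings, and the most salient spot becomes a candidate gaze target. This is just an illustrative Python sketch of that center-surround idea, not the project's actual code; it uses a single intensity channel where real models add color, orientation, motion and sound:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def saliency_map(image, center_sigma=2.0, surround_sigma=8.0):
        """Toy bottom-up attention: center-surround contrast of intensity."""
        # Collapse color to intensity if needed
        gray = image.mean(axis=2) if image.ndim == 3 else image.astype(float)
        center = gaussian_filter(gray, center_sigma)      # fine-scale blur
        surround = gaussian_filter(gray, surround_sigma)  # coarse-scale blur
        contrast = np.abs(center - surround)              # "pop-out" strength
        return contrast / (contrast.max() + 1e-9)         # normalize to [0, 1]

    # The most salient location is a candidate for where to look next:
    # y, x = np.unravel_index(np.argmax(saliency_map(frame)), frame.shape[:2])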
So why a humanoid robot? Aren't we pretty good at producing new humans already? That's perfectly true, of course. And as a "robot", or worker (the original meaning of the word), this creation would make little sense. Anything i-1 does can be done faster, more safely and more reliably by your average 8-year-old, and he wouldn't need half a dozen support people standing by every moment.
i-1 striking an explicatory pose. Body expressiveness doesn't just look cool; it's a communication channel. And as anyone learning a foreign language knows, when that channel disappears - talking over the phone rather than in person, for instance - communication becomes much more difficult, sometimes impossible.
Replacing a human is not the goal, of course. What we aim for is a system that can interact with us in a natural way and help us understand our own, human capabilities. To do so, we need the robot to have human-like capabilities, and that means having human weaknesses and deficiencies as well as our strengths. We could give the robot wheels instead of legs, for instance, since that is much easier and more stable, but it would not teach us anything about how human walking is done, or about the perspective we get from being walking, rather than rolling, creatures.
We're also not looking at perception in general - we're interested in human-like capabilities specifically. If we had a whole array of microphones we could pinpoint sound source locations quite accurately, and with laser rangefinders we'd have no problem getting the distance of things around us. But we humans have only two ears, not dozens, and not a single laser anywhere, so we have just the two microphones and two eyes, and we embrace the resulting loss of precision - the same imprecision humans have.
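To make the two-microphone case concrete, here is a minimal Python sketch of the standard binaural trick: cross-correlate the left and right channels to find the interaural time difference, then convert that delay into an angle of arrival. This is an illustration, not our actual system, and the microphone spacing and sample rate are assumed values:

    import numpy as np

    SPEED_OF_SOUND = 343.0  # m/s at room temperature
    MIC_SPACING = 0.18      # m; assumed "ear-to-ear" distance
    SAMPLE_RATE = 44100     # Hz; assumed sampling rate

    def estimate_azimuth(left, right):
        """Estimate sound direction from the interaural time difference."""
        max_lag = int(MIC_SPACING / SPEED_OF_SOUND * SAMPLE_RATE)
        corr = np.correlate(left, right, mode="full")
        mid = len(right) - 1  # index of zero lag for equal-length inputs
        # Only lags that are physically possible for this microphone spacing
        window = corr[mid - max_lag : mid + max_lag + 1]
        lag = int(np.argmax(window)) - max_lag   # delay in samples
        itd = lag / SAMPLE_RATE                  # positive: left channel delayed
        # Far-field approximation: sin(angle) = path difference / spacing
        sin_theta = np.clip(itd * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
        return float(np.degrees(np.arcsin(sin_theta)))  # degrees off center

Notice how the imprecision falls out of the setup: the delay is quantized to whole samples - only a few dozen possible lags at this spacing - and two microphones alone cannot tell front from back, an ambiguity we humans resolve by turning our heads.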
Now, for natural human-machine interfacing it's certainly not critical that the robot has exactly the same abilities we do. We humans vary quite a bit from each other, after all, and we can interact well enough even with someone as different from ourselves as a dog. We have some design leeway, in other words, and do not need to make the robot completely identical to us to get useful results.
But dogs and humans interact well in part because we've coevolved over time; dogs are by now innately very good at reading human intentions and expressions on the one hand, and at signaling their own intentions to us on the other. Wolves and other wild relatives of dogs are not at all good at this, which strongly suggests that dogs acquired these abilities through breeding, along with the other specifics of dogdom. So we may not need the exact same perceptual and motor capabilities on an interactive robot, but the more our abilities diverge, the more we need to compensate with sensory and motor behavior that can help close the gap.
To understand human abilities, on the other hand, we do want to come as close as we possibly can. Understanding humans and human abilities is of course an important goal in itself, but we also need that understanding to create the kind of reciprocal empathy and communication abilities that can close the ability gap between humans and machines.
In a way, if you want to make simple robots - machines that are very different from humans - you first need to understand how to make a very complex human-like system like i-1 in order to find out how to cross the communication gap. As long as we don't know how, we'll be stuck in our current situation where human-robot communication is not natural and fluid at all, but stilted, formal and slow. We communicate only by tightly restricted explicit commands and receive only a limited set of canned responses, with little of the immediacy and effortless understanding of body language, attention display and emotional expression.
It's a really fun project, with lots of fascinating areas to work on. Doing this as my day-job almost feels like cheating.
Hi Jan
What an imagination-stirring project... if I understand you correctly, you are not so much developing robotics as simulating the human being and the way she functions.
As an old journalist, it occurs to me that you could surely make a bit of money if you wanted to write about the project in Sweden and Finland... Ny Teknik and Teknik och Ekonomi, for example.
Sami
"om jag förstår dig rätt så ni utvecklar inte så mycket robotik utan snarare simulerar människan och hennes sätt att fungera. "
Well, yes - though at the bottom of it all lies the fascination of building a "real" robot, of course. The one doesn't rule out the other, and I doubt most people on this project would have been as interested if the robot-building itself hadn't been a big part of it. This blog post is written from my perspective, and I'm probably one of those who most takes the human-biology perspective.
Hello, Jan. That's a beautiful robot. Your photos remind us that we express ourselves with our entire bodies, and not just the face.