Robots That Teach Each Other
What if robots could figure out more things on their own and share that knowledge among themselves?
Availability: 3-5 years
Many of the jobs humans would like robots to perform, such as packing items in warehouses, assisting bedridden patients, or aiding soldiers on the front lines, aren’t yet possible because robots still can’t reliably recognize and handle common objects. People generally have no trouble folding socks or picking up water glasses, because we’ve gone through “a big data collection process” called childhood, says Stefanie Tellex, a computer science professor at Brown University. For robots to do the same types of routine tasks, they also need access to reams of data on how to grasp and manipulate objects. Where does that data come from? Typically it has come from painstaking programming. But ideally, robots could get some information from each other.
Robots Teaching Robots
- Breakthrough: Robots that learn tasks and send that knowledge to the cloud for other robots to pick up later.
- Why It Matters: Progress in robotics could accelerate dramatically if each type of machine didn’t have to be programmed separately.
- Key Players in Advanced Robotics:
– Ashutosh Saxena, Brain of Things
– Stefanie Tellex, Brown University
– Pieter Abbeel, Ken Goldberg, and Sergey Levine, University of California, Berkeley
– Jan Peters, Technical University of Darmstadt, Germany
That’s the theory behind Tellex’s “Million Object Challenge.” The goal is for research robots around the world to learn how to spot and handle simple items from bowls to bananas, upload their data to the cloud, and allow other robots to analyze and use the information.
Tellex’s lab in Providence, Rhode Island, has the air of a playful preschool. On the day I visit, a Baxter robot, an industrial machine produced by Rethink Robotics, stands among oversized blocks, scanning a small hairbrush. It moves its right arm noisily back and forth above the object, taking multiple pictures with its camera and measuring depth with an infrared sensor. Then, with its two-pronged gripper, it tries different grasps that might allow it to lift the brush. Once it has the object in the air, it shakes it to make sure the grip is secure. If so, the robot has learned how to pick up one more thing.
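To make that routine concrete, here is a minimal Python sketch of the scan-and-try loop described above. Every robot-facing call in it (scan_poses_above_object, capture_rgbd, plan_candidate_grasps, and so on) is a hypothetical placeholder rather than Baxter's actual interface; the point is only the shape of the procedure: scan the object, attempt a grasp, shake-test it, and keep whatever worked.

```python
# Illustrative sketch only: the robot interface used here is a hypothetical
# stand-in, not Rethink Robotics' or Brown's real API.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Grasp:
    x: float       # gripper position over the object (metres)
    y: float
    angle: float   # wrist rotation (radians)

@dataclass
class ObjectScan:
    name: str
    rgb_images: List[object] = field(default_factory=list)  # pictures from several arm poses
    depth_maps: List[object] = field(default_factory=list)  # infrared depth readings of the same views

def learn_to_pick_up(robot, item_name: str, max_attempts: int = 50) -> Optional[Grasp]:
    """Scan an object, try candidate grasps, and return the first one that survives a shake test."""
    scan = ObjectScan(item_name)

    # 1. Sweep the arm back and forth above the object, collecting images and depth data.
    for pose in robot.scan_poses_above_object():
        robot.move_arm(pose)
        rgb, depth = robot.capture_rgbd()
        scan.rgb_images.append(rgb)
        scan.depth_maps.append(depth)

    # 2. Propose candidate grasps from the scan and try them one by one.
    for grasp in robot.plan_candidate_grasps(scan)[:max_attempts]:
        robot.move_gripper_to(grasp)
        robot.close_gripper()
        if not robot.lift():
            continue
        # 3. Shake the object to make sure the grip is secure.
        robot.shake()
        if robot.still_holding():
            return grasp      # the robot has learned how to pick up one more thing
        robot.release()

    return None               # no reliable grasp found this session
```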
The robot can work around the clock, frequently with a different object in each of its grippers. Tellex and her graduate student John Oberlin have gathered—and are now sharing—data on roughly 200 items, starting with such things as a child’s shoe, a plastic boat, a rubber duck, a garlic press and other cookware, and a sippy cup that originally belonged to her three-year-old son. Other scientists can contribute their robots’ own data, and Tellex hopes that together they will build up a library of information on how robots should handle a million different items. Eventually, robots confronting a crowded shelf will be able to “identify the pen in front of them and pick it up,” Tellex says.
Projects like this are possible because many research robots use the same standard framework for programming, known as ROS. Once one machine learns a given task, it can pass the data on to others—and those machines can upload feedback that will in turn refine the instructions given to subsequent machines. Tellex says the data about how to recognize and grasp any given object can be compressed to just five to 10 megabytes, about the size of a song in your music library.
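As a rough illustration of what "uploading to the cloud" might look like, here is a hedged Python sketch: one robot compresses its learned record for an object, roughly the few-megabyte size Tellex describes, and posts it to a shared library that any other robot can query later. The endpoint URL, record fields, and use of the requests library are assumptions made for this sketch, not the Million Object Challenge's actual interface; the sketch also assumes the server stores and returns the posted bytes verbatim.

```python
# Sketch of the sharing step; the URL and record format are invented for illustration.
import gzip
import json
import requests

LIBRARY_URL = "https://example.org/grasp-library"   # hypothetical shared library endpoint

def upload_grasp_record(item_name: str, scan_features: dict, best_grasp: dict) -> None:
    """Compress one object's learned data and post it to the shared library."""
    record = {
        "object": item_name,
        "features": scan_features,   # e.g. visual and depth descriptors of the object
        "grasp": best_grasp,         # e.g. the gripper pose that passed the shake test
    }
    payload = gzip.compress(json.dumps(record).encode("utf-8"))
    requests.post(f"{LIBRARY_URL}/objects/{item_name}", data=payload)

def download_grasp_record(item_name: str) -> dict:
    """Another robot fetches the same record before handling the object itself."""
    response = requests.get(f"{LIBRARY_URL}/objects/{item_name}")
    return json.loads(gzip.decompress(response.content))
```

The feedback loop Tellex describes fits the same pattern: a robot that tries a downloaded grasp and fails could post its result back to the library, refining the record for the next machine.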
Tellex was an early partner in a project called RoboBrain, which demonstrated how one robot could learn from another’s experience. Her collaborator Ashutosh Saxena, then at Cornell, taught his PR2 robot to lift small cups and position them on a table. Then, at Brown, Tellex downloaded that information from the cloud and used it to train her Baxter, which is physically different, to perform the same task in a different environment.
Such progress might seem incremental now, but in the next five to 10 years, we can expect to see “an explosion in the ability of robots,” says Saxena, now CEO of a startup called Brain of Things. As more researchers contribute to and refine cloud-based knowledge, he says, “robots should have access to all the information they need, at their fingertips.”