As researchers at Worcester Polytechnic Institute (WPI) make progress giving their humanoid robot more autonomy, they're getting ready to get their hands on an upgraded robot this fall.
The teams are working to put their robots through a series of tasks -- climbing ladders, walking over rubble, opening doors and driving cars -- designed to ultimately lead to robots capable of working with humans after natural or man-made disasters.
The 11 finalists, which are competing for a $2 million prize, include teams from WPI, MIT, Virginia Tech and NASA's Jet Propulsion Laboratory.
WPI, like the rest of the teams using one of Boston Dynamics' Atlas robots, will have to do without its six-foot, 330-pound humanoid robot, dubbed Warner, for the entire month of October.
Matt DeDonato, the WPI team's technical project manager, told Computerworld that Boston Dynamics, which built the robot that many of the teams in the DARPA challenge are using, will take the robots back at the beginning of October and spend the month upgrading them from the knees up.
"It won't look much different, but a lot of the systems will change," said DeDonato. "We'll be getting two to three computers on board the robot. Now, they're mostly off board the robot...."
As of now, there's just one processor inside the robot. Most of the computing is done by off-board computers, with data sent back and forth over a fiber optic cable that tethers the robot to its operators.
In the final challenge, there will be no fiber optic tether. That means the roboticists will go from having a 10 Gbps cabled link to talk to the robot to a wireless link that only gives them about 300 Mbps.
With added onboard processors, the robot will be doing more of the calculations and decision-making itself, relieving the need to send as much information back and forth to the machine's handlers.
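The arithmetic behind that shift is straightforward: a rough, hypothetical sketch (the frame and summary sizes below are illustrative assumptions, not figures from WPI) shows why raw sensor streams that fit comfortably on a 10 Gbps tether can overwhelm a 300 Mbps wireless link, while on-board processing that sends back only compact results stays well under it.

```python
# Hypothetical link-budget sketch: with on-board processing, the robot can send
# operators a compact state summary instead of streaming raw sensor data.
# All names and sizes are illustrative assumptions, not WPI's actual numbers.

RAW_FRAME_BYTES = 4 * 1024 * 1024   # e.g. one uncompressed sensor frame
SUMMARY_BYTES = 256                 # e.g. a pose estimate plus detected objects

def link_budget(frames_per_sec: int, bytes_per_frame: int) -> float:
    """Bandwidth required, in megabits per second."""
    return frames_per_sec * bytes_per_frame * 8 / 1e6

raw_mbps = link_budget(30, RAW_FRAME_BYTES)    # streaming raw frames off-board
summary_mbps = link_budget(30, SUMMARY_BYTES)  # computing on-board, sending summaries

print(f"raw stream: {raw_mbps:.0f} Mbps, summaries: {summary_mbps:.3f} Mbps")
```

Under these assumed numbers, the raw stream alone would exceed the roughly 300 Mbps wireless budget, while summaries use a tiny fraction of it.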
"Now, we have to migrate our software from the off-board computers to the on-board computers," said DeDonato. "The software was designed for human control. Now we'll restructure how the data flows through the system and what talks to what. The way we talk to the robot will have to be rethought."
He added that the team is considering putting all of the robot's balancing and standing algorithms on one on-board processor. "That way, the balancing can run at real time and not be bogged down with other systems taking up resources," said DeDonato.
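The motivation for isolating balancing can be sketched in miniature: a balance controller must hit a fixed update rate, and any other workload sharing its processor risks making it miss deadlines. The loop below is a generic illustration under assumed names and rates, not the team's controller.

```python
# Minimal sketch of a fixed-rate control loop, illustrating why balancing gets
# its own processor: deadline misses (other tasks stealing CPU time) are what
# the dedicated core is meant to prevent. Rates and names are assumptions.
import time

def balance_step(state: dict) -> dict:
    # Placeholder for the actual balance controller update.
    return state

def run_balance_loop(steps: int, rate_hz: float = 500.0) -> int:
    """Run the controller at a fixed rate; return the number of missed deadlines."""
    period = 1.0 / rate_hz
    state, missed = {}, 0
    next_deadline = time.monotonic() + period
    for _ in range(steps):
        state = balance_step(state)
        now = time.monotonic()
        if now > next_deadline:
            missed += 1   # on a real biped, a missed deadline can mean a fall
        else:
            time.sleep(next_deadline - now)
        next_deadline += period
    return missed
```

On a shared processor, heavy vision or planning work would inflate the missed-deadline count; pinning the loop to its own core keeps it near zero.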
They may dedicate another onboard processor to the robot's vision, because it's so computationally intensive.
Many of those types of decisions will be made this fall. However, the WPI team has already been making big strides in autonomy.
DeDonato explained that the robot's handlers previously had to give it explicit instructions if they needed the machine to walk forward and open a door or grasp a tool, for instance. Getting the robot to pick up a drill meant the handlers would have to tell it exactly how far to extend its arm, turn its wrist and then close its fingers.
Now they've reached an autonomy milestone where the handler simply tells the robot to pick up the drill.
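The shift DeDonato describes can be sketched as a layer of command expansion: one high-level instruction unpacked on-board into the sequence of motions handlers used to issue by hand. The primitives and parameters below are illustrative assumptions, not WPI's actual command set.

```python
# Hedged sketch of the autonomy milestone: a single high-level command is
# expanded into the low-level motion steps that previously required explicit
# handler input. Primitive names and values are hypothetical.

def pick_up(target: str, target_pose: tuple) -> list:
    """Expand one high-level command into a sequence of motion primitives."""
    x, y, z = target_pose
    return [
        ("walk_to", (x, y)),        # approach the object
        ("extend_arm", (x, y, z)),  # reach toward it
        ("orient_wrist", target),   # align the hand for the grasp
        ("close_fingers", target),  # grasp
        ("lift", 0.1),              # raise it 10 cm
    ]

# Before: handlers issued each of these steps manually.
# Now: one command, and the robot plans the rest.
plan = pick_up("drill", (1.2, 0.4, 0.9))
```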
"We're starting to see it come together," said DeDonato. "We're trying to minimize user input. I think it's a big step forward. This is really along with DARPA's vision. We're monitoring, but we're not making those low-level decisions for the robot. We're trying to condense the commands that go to it."
WPI's robot, Warner, has also achieved a higher level of autonomy when walking.
The robot Warner grasps a wood board on its own as WPI team leader Matt DeDonato observes. In the past, the task would have required multiple keyboard commands and mouse clicks. (Photo: Andrew Baron/Worcester Polytechnic Institute)
Instead of telling the robot to take two steps to the right and then eight steps forward, for example, handlers can now tell it to walk across a room, and the machine will navigate around or over obstacles to make its own way.
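That kind of navigation autonomy amounts to on-board path planning. The article doesn't say which planner WPI uses, so the sketch below stands in with a simple breadth-first search over an occupancy grid, where blocked cells play the role of rubble.

```python
# Illustrative sketch only: a breadth-first search on an occupancy grid, as a
# stand-in for whatever planner the WPI team actually uses. Given a goal, the
# robot routes itself around obstacles rather than being stepped through them.
from collections import deque

def plan_path(grid, start, goal):
    """Return a list of cells from start to goal, avoiding cells marked 1."""
    rows, cols = len(grid), len(grid[0])
    queue, came_from = deque([start]), {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # no route exists

room = [[0, 0, 0],
        [1, 1, 0],   # 1 = rubble the robot must route around
        [0, 0, 0]]
route = plan_path(room, (0, 0), (2, 0))
```

The handler supplies only the goal cell; the detour through the open column falls out of the search.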
DeDonato said the robot itself is now using much more of the data from its own vision. Instead of simply sending what it "sees" to its handlers, the robot can decipher the information and calculate how to use it.
"We're moving the autonomy up one more level," he noted. "Eventually, the goal is to move to the point where the user doesn't have to be there."
That day, though, won't come anytime soon, according to DeDonato.
"We're probably nowhere near it," he said. "We're probably 10 to 20 years, at least, from full autonomy. The problem is the environment is unknown. To go into a room and assess the damage and decide what actions to take is the Holy Grail. It's not out of the realm of possibility, but that's the movie robot. That's still a long ways off. But we're on the path."
Sharon Gaudin covers the Internet and Web 2.0, emerging technologies, and desktop and laptop chips for Computerworld.