There isn't a radio-control handset in sight as a nimble robot briskly weaves in and out of the confined tunnels of an underground mine. Powered by intelligent sensors, the robot intuitively moves and reacts to the changing conditions of the terrain, entering areas unfit for humans. As it does so, it transmits a detailed 3D map of the entire location to the other side of the world. While this might read like a scenario from a George Orwell novel, it is actually a glimpse of the next generation of robots.
Although earlier research prototypes have shown that robotics can, in principle, tackle the challenges posed by remote locations and harsh environments, we are only just beginning to see the final pieces of the technology puzzle coming together.
According to a recent report by the McKinsey Global Institute, disruptive technologies such as advanced robotics, the mobile internet and 3D printing could have an economic impact of between $14 trillion and $33 trillion globally per year by 2025. Many companies already incorporate autonomous technologies to offer better and safer customer experiences.
For service robots, this started decades ago with simple stationary devices like garage door openers, and has extended to autonomous vacuum cleaners and self-driving lawnmowers that are now able to map our gardens and cut the lawn in beautiful transects.
The automotive industry is discovering a market for driver assistance systems that now include parking assistance, autonomous driving in ‘stop-and-go’ traffic and emergency braking. In a recent demonstration of their ‘self-driving S-Class’, Mercedes-Benz drove the same 60-mile route from Mannheim to Pforzheim that Bertha Benz had driven 125 years earlier in the first ever automobile.
The car they used for the experiment looked entirely like a production car and relied largely on the standard sensors on board, using vision and radar to complete the task. However, like other autonomous cars, it also needed a crucial extra piece of information to make the task feasible: access to a detailed 3D digital map with which to accurately localise itself in the environment.
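To make the idea of localising against a prior map concrete, here is a minimal sketch in Python. The map, the scan format and all function names are hypothetical illustrations, not the actual Mercedes-Benz system: candidate poses are scored by how well the current scan, transformed into the map frame, lines up with a prior map of landmark points, and the best-matching pose wins.

```python
import math

# Hypothetical prior map: 2D landmark positions, a stand-in for a 3D digital map.
MAP_POINTS = [(0.0, 2.0), (1.0, 2.0), (2.0, 2.0), (2.0, 1.0), (2.0, 0.0)]

def transform(points, x, y, theta):
    """Transform sensor-frame points into the map frame for a candidate pose."""
    c, s = math.cos(theta), math.sin(theta)
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in points]

def score(points, map_points):
    """Sum of nearest-neighbour distances from transformed scan points to the map."""
    return sum(
        min(math.hypot(px - mx, py - my) for mx, my in map_points)
        for px, py in points
    )

def localise(scan, map_points, candidate_poses):
    """Pick the candidate pose whose transformed scan best matches the map."""
    return min(candidate_poses,
               key=lambda pose: score(transform(scan, *pose), map_points))
```

A real system would refine the pose iteratively (for example with ICP or a particle filter) rather than scoring a fixed list of candidates, but the principle is the same: the prior map turns localisation into a matching problem.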
Navigating complex and dynamic environments
In these examples, the task (localisation, navigation, obstacle avoidance) is either constrained enough to be solvable or can be solved with the provision of extra information.
There is a third category, in which humans and autonomous systems augment each other to solve tasks. This can be highly effective but requires a human remote operator or, depending on real-time constraints, a human on stand-by.
The question arises of how to realise a robot that can navigate complex and dynamic environments without 3D maps as prior information, while keeping the cost and complexity of the device to a minimum. Using as few sensors as possible, it needs to be able to build a consistent picture of its surroundings so that it can respond to changing and unknown conditions.
This is, of course, the same question that confronted us at the dawn of robotics research, and it was addressed in the 1980s and 1990s as the problem of dealing with spatial uncertainty. However, the decreasing cost of sensors, the increasing computing power of embedded systems and the ready availability of 3D maps have reduced the urgency of answering this key research question.
Combining 3D laser mapping with autonomous robotics systems
In an attempt to refocus on this central question, we tried to stretch the limits of what's possible with a single sensor, in our case a laser scanner. In 2007, we took a vehicle equipped with laser scanners facing to the left and to the right.
We asked whether it was possible to create a 2D map of the surroundings and to localise the vehicle within that same map without using GPS, inertial systems or digital maps. We achieved the goal, including correcting the map through loop closures: after driving about 100 miles, the system re-identified parts of the environment it had already “seen”.
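One very simplified way to picture the loop-closure correction mentioned above: when the vehicle recognises a place it has already “seen”, the drift accumulated at that point becomes known, and the map can be corrected by spreading that error back along the trajectory. The sketch below (hypothetical names, not our actual system, which optimises over all constraints at once) just distributes the error linearly.

```python
def close_loop(poses, loop_error):
    """Naive loop closure: subtract a linearly growing share of the detected
    end-of-loop error (dx, dy) from each pose along the trajectory."""
    n = len(poses) - 1
    dx, dy = loop_error
    return [(x - (i / n) * dx, y - (i / n) * dy)
            for i, (x, y) in enumerate(poses)]
```

Production SLAM systems instead formulate this as pose-graph optimisation, where every odometry step and every loop closure is a constraint and the whole trajectory is adjusted to satisfy them jointly; linear error distribution is only the intuition behind it.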
With this encouraging result, we went a step further and developed “Zebedee”. This is a handheld 3D mapping system incorporating a laser scanner that sways on a spring to capture millions of detailed measurements of a site as fast as an operator can walk through it.
While the system does add a simple inertial measurement unit, it still maximises the information extracted from a very simple, low-cost setup. It achieves this by moving the smarts away from the sensor and into the software, which computes a continuous trajectory of the sensor, specifying its position and orientation at any point in time and taking the actual acquisition speed into account to precisely compute a 3D point cloud.
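The idea of a continuous trajectory can be sketched as follows. This is an illustrative 2D toy in Python, not Zebedee's actual algorithm: poses are linearly interpolated between timestamped trajectory samples, and each measurement is placed in the map using the pose at the instant it was acquired, rather than one pose per whole scan.

```python
import math

def interpolate_pose(trajectory, t):
    """Linearly interpolate an (x, y, heading) pose between timestamped samples."""
    for (t0, p0), (t1, p1) in zip(trajectory, trajectory[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple(p0[i] + a * (p1[i] - p0[i]) for i in range(3))
    raise ValueError("timestamp outside trajectory")

def build_point_cloud(measurements, trajectory):
    """Transform each (timestamp, local_x, local_y) measurement into the map
    frame using the interpolated pose at its own acquisition time."""
    cloud = []
    for t, px, py in measurements:
        x, y, heading = interpolate_pose(trajectory, t)
        c, s = math.cos(heading), math.sin(heading)
        cloud.append((x + c * px - s * py, y + s * px + c * py))
    return cloud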