SLAM: Simultaneous Localization and Mapping
SLAM is an acronym for simultaneous localization and mapping, a technology whereby a robot or device can build a map of its surroundings and, at the same time, orient itself within that map in real time. This is no easy task, and it currently sits at the frontier of technology research and design. A major roadblock to implementing SLAM successfully is the chicken-and-egg problem created by its two required tasks: to map an environment accurately, you must know your position and orientation within it, yet that information can only come from a pre-existing map of the environment.
How SLAM Works
SLAM systems typically work around this chicken-and-egg problem by starting from an approximate map of the environment, often built from GPS data, and refining that map as the robot or device moves through it. The true challenge of the technology is accuracy. Measurements must be taken continuously as the device moves through space, and the system must account for the “noise” introduced both by the device’s own movement and by the imprecision of the measurement method. This makes SLAM largely a matter of measurement and mathematics.
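To make that predict-and-correct cycle concrete, here is a minimal one-dimensional Kalman-filter sketch in Python. The landmark position, step size, and noise variances are invented for the example; they stand in for a real system’s motion model and sensor specifications, not any particular SLAM implementation.

```python
# Minimal 1-D sketch of the predict/update cycle described above.
# The landmark position, step size, and noise variances are illustrative
# assumptions, not values from any real SLAM system.
import random

LANDMARK = 10.0          # assumed known landmark position (metres)
MOTION_NOISE = 0.5       # variance added by each movement step
MEASUREMENT_NOISE = 1.0  # variance of each range reading

def predict(mean, var, control):
    """Motion step: moving shifts the estimate and adds uncertainty."""
    return mean + control, var + MOTION_NOISE

def update(mean, var, measured_range):
    """Measurement step: a noisy range to the landmark pulls the estimate
    back toward reality and shrinks the uncertainty."""
    position_from_range = LANDMARK - measured_range
    kalman_gain = var / (var + MEASUREMENT_NOISE)
    mean = mean + kalman_gain * (position_from_range - mean)
    var = (1 - kalman_gain) * var
    return mean, var

# Simulate a robot driving toward the landmark in 1 m steps.
true_position, mean, var = 0.0, 0.0, 0.0
for step in range(5):
    true_position += 1.0 + random.gauss(0, MOTION_NOISE ** 0.5)  # real motion is noisy
    mean, var = predict(mean, var, 1.0)                          # belief after moving
    reading = (LANDMARK - true_position) + random.gauss(0, MEASUREMENT_NOISE ** 0.5)
    mean, var = update(mean, var, reading)                       # belief after sensing
    print(f"step {step}: true={true_position:.2f}  estimate={mean:.2f}  variance={var:.2f}")
```

Even in one dimension the pattern is the one described above: every motion adds uncertainty, and every measurement removes some of it, provided the filter has a rough model of how noisy each source is.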
Measurement and Mathematics
Google’s self-driving car is an example of measurement and mathematics in action. The car takes its primary measurements with a roof-mounted LIDAR (light detection and ranging) unit, which can build a 3D map of its surroundings up to 10 times per second. This update rate is critical when the car is moving at speed. The measurements are used to augment the pre-existing GPS maps that Google is well known for maintaining as part of its Google Maps service. The readings produce a massive amount of data, and extracting meaning from that data in order to make driving decisions is the work of statistics. The car’s software uses advanced statistical techniques, including Monte Carlo methods and Bayesian filters, to map the environment accurately.
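As a rough illustration of what a Monte Carlo approach looks like, the toy particle filter below localizes a vehicle on a one-dimensional road against a known landmark map. The road, the landmark positions, the sensor model, and the noise values are all assumptions made for this sketch; they are not details of Google’s software.

```python
# Toy Monte Carlo (particle filter) localizer, in the spirit of the Bayesian
# filtering described above. The road, landmark map, and noise values are
# illustrative assumptions, not details of any production system.
import math
import random

LANDMARKS = [20.0, 55.0, 80.0]  # assumed map: landmark positions along a 100 m road
MOTION_NOISE = 0.3
SENSOR_NOISE = 1.0
N_PARTICLES = 500

def likelihood(expected, measured):
    """Unnormalised Gaussian likelihood of a range reading."""
    return math.exp(-((measured - expected) ** 2) / (2 * SENSOR_NOISE ** 2))

def nearest_landmark_range(position):
    """Range from a position to the closest landmark on the map."""
    return min(abs(l - position) for l in LANDMARKS)

# 1. Start with particles spread uniformly: we have no idea where we are.
particles = [random.uniform(0.0, 100.0) for _ in range(N_PARTICLES)]

true_position = 10.0
for step in range(10):
    # 2. Predict: move the vehicle and every particle, adding motion noise.
    true_position += 5.0
    particles = [p + 5.0 + random.gauss(0, MOTION_NOISE) for p in particles]

    # 3. Update: weight each particle by how well it explains the sensor reading.
    reading = nearest_landmark_range(true_position) + random.gauss(0, SENSOR_NOISE)
    weights = [likelihood(nearest_landmark_range(p), reading) for p in particles]

    # 4. Resample: particles that explain the data well survive and multiply.
    particles = random.choices(particles, weights=weights, k=N_PARTICLES)

    estimate = sum(particles) / N_PARTICLES
    print(f"step {step}: true={true_position:.1f}  estimate={estimate:.1f}")
```

A production system runs the same predict, weight, and resample loop in three dimensions over vastly more data, which is why the statistics, and the computation behind them, are the heart of the problem.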
Implications for Augmented Reality
Autonomous vehicles are the most obvious application of SLAM technology. A less obvious use, however, may be in wearable technology and augmented reality. While Google Glass can use GPS data to provide a rough estimate of the user’s position, a similar future device could use SLAM to build a far more detailed map of the user’s environment, including an understanding of precisely what the user is looking at through the device. It could recognize when the user is looking at a landmark, storefront, or advertisement and use that information to provide an augmented reality overlay. While these features may sound a long way off, an MIT project has already developed one of the first examples of a wearable SLAM device.
Tech That Understands Space
It was not long ago that technology meant a fixed, stationary terminal that we used in our homes and offices. Now technology is ever-present and mobile, and that trend is sure to continue as devices keep shrinking and become further entwined in our daily activities. It is because of these trends that SLAM technology is becoming increasingly important. It won’t be long before we expect our technology not only to understand our surroundings as we move but also to pilot us through our day-to-day lives.