Simultaneous Localization and Mapping

Simultaneous Localization and Mapping (SLAM) describes the task of creating a map while simultaneously localizing within it. SLAM algorithms are used in many areas, including autonomous service robots in indoor scenarios and surveying equipment in outdoor applications with limited GNSS availability. Currently, there are two projects at ifp on the topic of SLAM:


LiDAR Scan Matching

In LiDAR-based SLAM algorithms, scan matching is required to align point clouds captured from different locations. This is a prerequisite for determining the respective sensor positions and, finally, for combining multiple scans into a consistent scene. Scan matching is applied to consecutive scans, but it also provides so-called loop closures when the scanner platform revisits known places. Figure 1 shows the SLAM result for a short sequence.

Figure 1 Composite scene as a result of a SLAM algorithm. Red dots indicate keyframe positions and yellow lines indicate loop closures.

This process is also known as point cloud registration. Various scan matching algorithms have been developed over the past decades, but their accuracy and robustness vary greatly with the nature of the environment and the sensor configuration. Some methods rely on structured environments to find suitable landmarks, while others require good initial approximations to converge.
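A classical baseline for such registration is point-to-point ICP (Iterative Closest Point), which alternates between nearest-neighbour matching and solving for the optimal rigid transform via SVD. The sketch below is a minimal illustration on synthetic 2D data, not one of the novel methods investigated in this project:

```python
import numpy as np

def icp_step(source, target):
    """One point-to-point ICP iteration: match nearest neighbours,
    then solve for the rigid transform (Kabsch/SVD)."""
    # Brute-force nearest-neighbour correspondences (for clarity only)
    d = np.linalg.norm(source[:, None, :] - target[None, :, :], axis=2)
    matched = target[d.argmin(axis=1)]
    # Centre both point sets
    mu_s, mu_t = source.mean(axis=0), matched.mean(axis=0)
    # SVD of the cross-covariance yields the optimal rotation
    U, _, Vt = np.linalg.svd((source - mu_s).T @ (matched - mu_t))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy example: the source is a slightly rotated/shifted copy of the target
rng = np.random.default_rng(0)
target = rng.random((30, 2))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
source = (target - 0.01) @ R_true.T   # mild initial misalignment

for _ in range(20):                   # iterate until convergence
    R, t = icp_step(source, target)
    source = source @ R.T + t
```

Real implementations add subsampling, robust outlier rejection, and point-to-plane error metrics; this toy version only demonstrates why a good initialization matters, since the nearest-neighbour step fails under large misalignment.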

In this project, novel scan matching algorithms for challenging environments are investigated and compared to known methods. Figure 2 shows an example of the merging of two scans with very low overlap using a novel method.

Figure 2 Scan matching example. Left: Initialization. Right: Result after scan matching.

In addition to the LiDAR sensor, the sensor system is also equipped with GNSS receivers. The goal of the project is to develop a system whose trajectory can be reconstructed and georeferenced even in very difficult environments where GNSS is temporarily unavailable.


Robust Low-cost Indoor SLAM for Mobile Robots

Figure 1: Densely reconstructed IFP office overlaid with the estimated trajectory of the employed robot (green lines).

SLAM is one of the most fundamental capabilities for robots: constructing an environment map while keeping track of their position within it. A wide range of sensors is available for this purpose, and an increasing number of methods are emerging that push the boundaries of sensor performance. This project therefore compares different low-cost sensors and the advanced algorithms available for each. Fig. 1 shows the result of an RGBD-based SLAM algorithm.

Figure 2: The low-cost robotic platform and the module diagram.

For the experiment, a low-cost robotic platform (Fig. 2) is assembled, consisting of a 2D Lidar and a stereo camera. For 2D Lidar SLAM, the Matlab Lidar SLAM and ICP graph SLAM methods are selected. For visual stereo SLAM, three representative methods are evaluated: ORB-SLAM, Stereo-DSO, and DROID-SLAM. Additionally, to provide a reference for comparison, an ArUco marker is attached to the top of the platform, and a wide-view GoPro camera mounted on the room's ceiling tracks the position and orientation of the robot. The experimental results show that among the visual stereo methods, the deep learning-based DROID-SLAM performs best with an ATE of 2.9 cm, while the 2D Lidar SLAM results in a 10 cm ATE. Nevertheless, thanks to the high precision of direct distance measurements, the 2D Lidar-based SLAM provides a more consistent 2D occupancy map and covers more space because of the greater measurement range (see Fig. 3a). By contrast, the 3D map (Fig. 3b & 3c) of the visual stereo system contains more clutter due to the limited accuracy of the stereo depth estimates.
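The ATE (absolute trajectory error) metric used for this comparison measures the RMSE between an estimated trajectory and the reference after rigid alignment. The sketch below uses synthetic trajectories for illustration; the circular path, noise level, and offsets are assumptions, not the experiment's actual data:

```python
import numpy as np

def ate_rmse(estimated, reference):
    """Absolute trajectory error (RMSE) after aligning the estimated
    trajectory to the reference with a rigid 2D transform
    (Umeyama-style, without scale estimation)."""
    mu_e, mu_r = estimated.mean(axis=0), reference.mean(axis=0)
    # Optimal rotation from the SVD of the cross-covariance
    U, _, Vt = np.linalg.svd((estimated - mu_e).T @ (reference - mu_r))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # enforce a proper rotation
        Vt[-1] *= -1
        R = Vt.T @ U.T
    aligned = (estimated - mu_e) @ R.T + mu_r
    return np.sqrt(np.mean(np.sum((aligned - reference) ** 2, axis=1)))

# Hypothetical data: a circular reference path and a rotated, shifted,
# noisy estimate of it
t = np.linspace(0, 2 * np.pi, 100)
reference = np.column_stack([np.cos(t), np.sin(t)])
th = 0.045
Rm = np.array([[np.cos(th), -np.sin(th)],
               [np.sin(th),  np.cos(th)]])
rng = np.random.default_rng(1)
noisy = reference @ Rm.T + rng.normal(0, 0.03, reference.shape) + [0.5, -0.2]

ate = ate_rmse(noisy, reference)
print(f"ATE (RMSE): {ate:.3f} m")
```

Because the global rotation and translation are removed before computing the residuals, the reported error reflects only the trajectory's shape deviation, which is what makes ATE comparable across SLAM systems with different reference frames.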

Figure 3: Different map representations, which are 2D Lidar map, 3D point cloud map, and 3D dense volumetric map respectively.
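The 2D occupancy map in Fig. 3a is built by marking cells along each Lidar beam as free and each beam endpoint as occupied. The following is a simplified sketch of that idea, not the Matlab Lidar SLAM pipeline used in the project; the grid size, resolution, and constant-range scan are illustrative assumptions:

```python
import numpy as np

def update_grid(grid, pose, ranges, angles, res=0.05):
    """Mark cells along each Lidar beam as free (0) and the beam
    endpoint as occupied (1); unknown cells stay at 0.5.
    A simplified update without log-odds bookkeeping."""
    x0, y0, theta = pose
    for r, a in zip(ranges, angles):
        # Beam endpoint in world coordinates
        x1 = x0 + r * np.cos(theta + a)
        y1 = y0 + r * np.sin(theta + a)
        # Sample points along the beam and mark them free
        n = max(int(r / res), 2)
        for s in np.linspace(0, 1, n, endpoint=False):
            i = int((x0 + s * (x1 - x0)) / res)
            j = int((y0 + s * (y1 - y0)) / res)
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] = 0.0          # free space
        i, j = int(x1 / res), int(y1 / res)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1.0              # occupied (hit)
    return grid

# Toy scan: robot at the centre of a 5 m x 5 m area, walls 2 m away
grid = np.full((100, 100), 0.5)           # 0.5 = unknown
angles = np.linspace(-np.pi, np.pi, 180, endpoint=False)
ranges = np.full(180, 2.0)                # hypothetical constant range
grid = update_grid(grid, (2.5, 2.5, 0.0), ranges, angles)
```

In a full SLAM system, this update runs with the pose estimated by scan matching, and probabilistic (log-odds) accumulation replaces the hard 0/1 assignment so that repeated observations can correct noisy hits.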

Norbert Haala

apl. Prof. Dr.-Ing.

Deputy Director


David Skuddis


Ph.D. Student


Wei Zhang


Ph.D. Student
