Simultaneous Localization and Mapping (SLAM) describes the task of building a map of an unknown environment while simultaneously localizing the sensor within that map. SLAM algorithms are used in many areas, including autonomous service robots in indoor scenarios and surveying equipment for outdoor applications with limited GNSS availability. Currently, there are two projects at ifp on the topic of SLAM:
LiDAR Scan Matching
In LiDAR-based SLAM algorithms, scan matching is required to align point clouds captured from different locations. This is a prerequisite both for determining the respective sensor positions and for finally combining multiple scans into a consistent scene. For this purpose, scan matching is applied to consecutive scans, but it is also used to provide so-called loop closures when the scanner platform revisits known places. Figure 1 shows the SLAM result of a small sequence.
This process is also known as point cloud registration. Various scan matching algorithms have been developed over the past decades, but their accuracy and robustness can vary greatly depending on the nature of the environment and the sensor configuration. Some methods rely on structured environments to find suitable landmarks, while others require good initial approximations in order to converge.
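To make the basic principle concrete, the following is a minimal sketch of point-to-point ICP (Iterative Closest Point), one of the classical registration methods mentioned above. It is purely illustrative and not the project's own algorithm; the function names are ours, and it assumes the two scans are already roughly pre-aligned, which is exactly the initialization dependence noted above.

```python
# Illustrative point-to-point ICP sketch (not the project's method).
# Uses a KD-tree for nearest-neighbour correspondences and the
# Kabsch/SVD solution for each rigid transform update.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~ R @ src + t."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                     # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(source, target, iters=50, tol=1e-6):
    """Align source (N,3) to target (M,3); returns a 4x4 transform."""
    T = np.eye(4)
    tree = cKDTree(target)
    src = source.copy()
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)              # closest-point matches
        R, t = best_rigid_transform(src, target[idx])
        src = src @ R.T + t                      # apply the update
        T_step = np.eye(4)
        T_step[:3, :3], T_step[:3, 3] = R, t
        T = T_step @ T                           # accumulate total transform
        err = dist.mean()
        if abs(prev_err - err) < tol:            # converged
            break
        prev_err = err
    return T
```

Because each iteration only refines the current alignment via closest points, the result depends strongly on the initial pose, which motivates the search for more robust methods in challenging or low-overlap scenes.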
In this project, novel scan matching algorithms for challenging environments are investigated and compared to established methods. Figure 2 shows an example of merging two scans with very low overlap using a novel method.
Robust Low-cost Indoor SLAM for Mobile Robots
SLAM is one of the most fundamental capabilities for robots: it allows them to construct a map of the environment and keep track of their position within it. For this purpose, a wide range of sensors is available, and an increasing number of methods are emerging that push the boundaries of sensor performance. This project therefore aims to compare different low-cost sensors and the advanced algorithms available for each of them. Fig. 1 shows the result of an RGBD-based SLAM algorithm.
For the experiment, a low-cost robotic platform (Fig. 2) is assembled, consisting of a 2D LiDAR and a stereo camera. For 2D LiDAR SLAM, the Matlab Lidar SLAM and ICP-based graph SLAM methods are selected. For visual stereo SLAM, three representative methods are evaluated: ORB-SLAM, Stereo-DSO, and DROID-SLAM. Additionally, to provide a reference trajectory for comparison, an ArUco marker is mounted on top of the platform, and a wide-angle GoPro camera on the room's ceiling continuously tracks the position and orientation of the robot. The experimental results show that among the visual stereo SLAM methods, the deep learning-based DROID-SLAM performs best with an absolute trajectory error (ATE) of 2.9 cm, while the 2D LiDAR SLAM yields an ATE of 10 cm. Nevertheless, thanks to the high precision of direct distance measurements, the 2D LiDAR-based SLAM provides a more consistent 2D occupancy map and covers more space because of its greater measurement range (see Fig. 3a). By contrast, the resulting 3D map of the visual stereo system (Fig. 3b & 3c) contains more clutter due to the limited accuracy of the stereo depth estimates.
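The ATE figures above compare an estimated trajectory against the marker-based reference after a rigid alignment. The following is a minimal sketch of such an evaluation, assuming two time-synchronized (N, 3) position arrays; the function names and the Kabsch/Umeyama-style alignment are illustrative and not the project's exact evaluation code.

```python
# Hypothetical ATE evaluation sketch: rigidly align the estimated
# trajectory to the reference (Kabsch/Umeyama without scale), then
# report the RMSE of the remaining position residuals. Assumes both
# trajectories are time-synchronized (N, 3) arrays.
import numpy as np

def align_rigid(est, ref):
    """Least-squares rotation R and translation t with ref ~ R @ est + t."""
    mu_e, mu_r = est.mean(axis=0), ref.mean(axis=0)
    H = (est - mu_e).T @ (ref - mu_r)        # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.eye(3)
    if np.linalg.det(Vt.T @ U.T) < 0:        # guard against reflections
        D[-1, -1] = -1.0
    R = Vt.T @ D @ U.T
    t = mu_r - R @ mu_e
    return est @ R.T + t                     # aligned estimate

def ate_rmse(est, ref):
    """Absolute trajectory error: RMSE of positions after alignment."""
    residuals = align_rigid(est, ref) - ref
    return np.sqrt((residuals ** 2).sum(axis=1).mean())
```

The returned error is in the units of the input coordinates, so trajectories given in metres yield, e.g., 0.029 for a 2.9 cm ATE.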

Norbert Haala
apl. Prof. Dr.-Ing.
Deputy Director

David Skuddis
M.Sc.
Ph.D. Student

Wei Zhang
M.Sc.
Ph.D. Student