LiDAR Road-Atlas: An Efficient Map Representation for General 3D Urban Environment
In this work, we propose the LiDAR Road-Atlas, a compact and efficient 3D map representation for autonomous robot and vehicle navigation in general urban environments. The LiDAR Road-Atlas is generated by an online mapping framework that incrementally merges local 2D occupancy grid maps (2D-OGMs). Specifically, the contributions of our LiDAR Road-Atlas representation are threefold. First, we solve the challenging problem of creating local 2D-OGMs in unstructured urban scenes through real-time delimitation of traversable and curb regions in the LiDAR point cloud. Second, we achieve accurate 3D mapping in multi-layer urban road scenarios by a probabilistic fusion scheme. Third, we achieve a highly efficient 3D map representation of general environments thanks to automatic traversable-region labeling induced by the local OGMs and a sparse probabilistic encoding of local point clouds. Given the LiDAR Road-Atlas, one can perform accurate vehicle localization, path planning, and other downstream tasks. Our map representation is insensitive to dynamic objects, which the probabilistic fusion filters out of the resulting map. Empirically, we compare our map representation with several popular map representations used in the robotics and autonomous driving communities, and ours is more favorable in terms of efficiency, scalability, and compactness. In addition, we extensively evaluate localization accuracy on the resulting LiDAR Road-Atlas representations across several public benchmark datasets. With a 16-channel LiDAR sensor, our method achieves average global localization errors of 0.26 m (translation) and 1.07 degrees (rotation) on the Apollo dataset, and 0.89 m (translation) and 1.29 degrees (rotation) on the MulRan dataset, at 10 Hz, which validates the promising performance of our map representation for autonomous driving.
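The probabilistic fusion of local occupancy grids that underpins both the map merging and the dynamic-object filtering is, in its standard form, a Bayesian log-odds update. The sketch below is a minimal illustration of that idea only; the grid size, sensor-model values, and function names are our own assumptions, and it does not reproduce the paper's actual fusion scheme, traversability labeling, or multi-layer handling.

```python
import numpy as np

# Minimal sketch of log-odds occupancy-grid fusion, the standard Bayesian
# update behind probabilistic 2D-OGM merging. All parameters and names
# below are illustrative assumptions, not the paper's implementation.

L_OCC = np.log(0.7 / 0.3)   # log-odds increment for an "occupied" observation
L_FREE = np.log(0.3 / 0.7)  # log-odds decrement for a "free" observation
L_MIN, L_MAX = -4.0, 4.0    # clamping keeps the map responsive to change

def fuse_local_ogm(global_logodds, hit_cells, miss_cells):
    """Fuse one local 2D-OGM (already aligned to the global frame) into
    the global log-odds grid. Cells repeatedly observed as free (e.g.
    where a dynamic object has moved away) decay back toward free space,
    which is how probabilistic fusion suppresses dynamic objects."""
    global_logodds[hit_cells] += L_OCC
    global_logodds[miss_cells] += L_FREE
    np.clip(global_logodds, L_MIN, L_MAX, out=global_logodds)
    return global_logodds

def occupancy_prob(logodds):
    """Recover occupancy probability from log-odds for thresholding."""
    return 1.0 - 1.0 / (1.0 + np.exp(logodds))

# Usage: start from an uninformative prior (log-odds 0, i.e. p = 0.5)
# and merge each incoming local grid as the vehicle drives.
grid = np.zeros((512, 512))
hits = (np.array([100, 101]), np.array([200, 200]))   # occupied cell indices
misses = (np.array([100]), np.array([150]))           # observed-free indices
grid = fuse_local_ogm(grid, hits, misses)
traversable = occupancy_prob(grid) < 0.2              # example threshold
```

Clamping the log-odds (rather than letting them grow without bound) is a common design choice in occupancy mapping: it bounds how much evidence any cell can accumulate, so a cell occupied by a transient vehicle can be reclaimed as free after only a few contradicting observations.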