LPD-Net: 3D Point Cloud Learning for Large-Scale Place Recognition and Environment Analysis
Zhe Liu, Shunbo Zhou, Chuanzhe Suo, Yingtian Liu, Peng Yin, Hesheng Wang, Yun-Hui Liu

Abstract
Point cloud based place recognition is still an open issue due to the difficulty of extracting local features from the raw 3D point cloud and generating a global descriptor, and the task becomes even harder in large-scale dynamic environments. In this paper, we develop a novel deep neural network, named LPD-Net (Large-scale Place Description Network), which can extract discriminative and generalizable global descriptors from the raw 3D point cloud. Two modules are proposed, the adaptive local feature extraction module and the graph-based neighborhood aggregation module, which together extract local structures and reveal the spatial distribution of local features in the large-scale point cloud in an end-to-end manner. We apply the proposed global descriptor to point cloud based retrieval tasks to achieve large-scale place recognition. Comparison results show that our LPD-Net clearly outperforms PointNetVLAD and reaches the state of the art. We also compare our LPD-Net with vision-based solutions to show the robustness of our approach to different weather and light conditions.
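To make the overall pipeline concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of the general idea described in the abstract: per-point features are aggregated over a kNN graph, in the spirit of a graph-based neighborhood aggregation module, and then pooled into a fixed-length, L2-normalized place descriptor that can be used for retrieval. All module and function names (`GraphNeighborhoodAggregation`, `PlaceDescriptor`, `knn_indices`), the layer sizes, and the simple max/avg pooling head are assumptions made for this sketch; LPD-Net itself additionally uses its adaptive local feature extraction and the aggregation design detailed in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def knn_indices(xyz, k):
    """Return indices of the k nearest neighbors for each point, shape (B, N, k)."""
    dists = torch.cdist(xyz, xyz)                              # pairwise distances (B, N, N)
    return dists.topk(k + 1, largest=False).indices[..., 1:]   # drop the point itself


class GraphNeighborhoodAggregation(nn.Module):
    """Aggregate per-point features over a kNN graph (max over neighbor edge features)."""

    def __init__(self, in_dim, out_dim, k=20):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(2 * in_dim, out_dim), nn.ReLU(inplace=True))

    def forward(self, xyz, feats):
        # xyz: (B, N, 3) coordinates, feats: (B, N, C) per-point features
        idx = knn_indices(xyz, self.k)                                  # (B, N, k)
        B = feats.shape[0]
        batch = torch.arange(B, device=feats.device).view(B, 1, 1)
        neigh = feats[batch, idx]                                       # (B, N, k, C)
        center = feats.unsqueeze(2).expand_as(neigh)
        edge = torch.cat([center, neigh - center], dim=-1)              # edge features
        return self.mlp(edge).max(dim=2).values                         # (B, N, out_dim)


class PlaceDescriptor(nn.Module):
    """Toy pipeline: point MLP -> graph aggregation -> global pooling -> descriptor."""

    def __init__(self, feat_dim=64, out_dim=256, k=20):
        super().__init__()
        self.point_mlp = nn.Sequential(nn.Linear(3, feat_dim), nn.ReLU(inplace=True))
        self.agg = GraphNeighborhoodAggregation(feat_dim, feat_dim, k)
        self.head = nn.Linear(2 * feat_dim, out_dim)

    def forward(self, xyz):
        f = self.point_mlp(xyz)
        f = self.agg(xyz, f)
        pooled = torch.cat([f.max(dim=1).values, f.mean(dim=1)], dim=-1)
        return F.normalize(self.head(pooled), dim=-1)   # unit-length global descriptor


if __name__ == "__main__":
    clouds = torch.randn(2, 4096, 3)                    # two point cloud submaps
    print(PlaceDescriptor()(clouds).shape)              # torch.Size([2, 256])
```

Place recognition then reduces to nearest-neighbor search over these descriptors: a query submap is matched to the database entry with the smallest descriptor distance.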
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| 3d-place-recognition-on-cs-campus3d | LPD-Net | AR@1: 45.94, AR@1 cross-source: 11.99, AR@1%: 59.49, AR@1% cross-source: 40.70 |
| 3d-place-recognition-on-oxford-robotcar | LPD-Net | AR@1: 86.3, AR@1%: 94.9 |
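The reported metrics AR@1 and AR@1% are average recall at the top retrieved candidate and within the top 1% of the database, respectively: a query counts as correct if any ground-truth match appears among the retrieved candidates. Below is a minimal sketch, under stated assumptions, of how such recall values can be computed from descriptors; the function name `average_recall` and the notion of "true positives" (e.g. database submaps within some distance of the query) are illustrative and not taken from this page.

```python
import numpy as np


def average_recall(query_desc, db_desc, gt_matches, top_percent=0.01):
    """Compute recall@1 and recall@top-1% for descriptor-based retrieval.

    query_desc: (Q, D) query descriptors; db_desc: (M, D) database descriptors;
    gt_matches: list of sets, one per query, of database indices that are true positives.
    """
    n_top = max(int(round(top_percent * len(db_desc))), 1)
    # Pairwise Euclidean distances between query and database descriptors.
    dists = np.linalg.norm(query_desc[:, None, :] - db_desc[None, :, :], axis=-1)
    ranked = np.argsort(dists, axis=1)                 # nearest database entries first

    hits_at_1, hits_at_1pct, evaluated = 0, 0, 0
    for q, positives in enumerate(gt_matches):
        if not positives:                              # skip queries with no true match
            continue
        evaluated += 1
        hits_at_1 += ranked[q, 0] in positives
        hits_at_1pct += bool(positives.intersection(ranked[q, :n_top].tolist()))
    return hits_at_1 / evaluated, hits_at_1pct / evaluated
```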