
koide3
June 28, 2024

[IROS2023] Exact Point Cloud Downsampling for Fast and Accurate Global Trajectory Optimization
Kenji Koide, Shuji Oishi, Masashi Yokozuka, and Atsuhiko Banno
National Institute of Advanced Industrial Science and Technology (AIST), JAPAN
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2023)


Transcript

  1. Exact Point Cloud Downsampling for Fast and Accurate Global Trajectory Optimization
     Kenji Koide, Shuji Oishi, Masashi Yokozuka, and Atsuhiko Banno
     National Institute of Advanced Industrial Science and Technology (AIST), JAPAN
     https://staff.aist.go.jp/k.koide/
     https://github.com/koide3/caratheodory2
  2. Global Trajectory Optimization
     Unlike online odometry estimation, global trajectory optimization corrects a trajectory by considering the global consistency of the map. A common approach is pose graph optimization (relative pose error minimization), but:
     • It is difficult to close a loop between frames with a small overlap
     • It is difficult to accurately model and propagate the uncertainty of scan matching results
  3. Global Registration Minimization
     Directly minimizes registration errors over the entire map (i.e., multi-scan registration) [Koide, RA-L 2022]
     • Can accurately close a loop between frames with a very small overlap
     • Accurately propagates per-point scan matching uncertainty to pose variables
     However, it:
     • Needs to remember point correspondences for each factor (large memory consumption)
     • Needs to re-evaluate registration errors at every optimization iteration (large computation cost)
  4. Point Cloud Downsampling
     A natural approach to reducing the registration error evaluation cost:
     • Random sampling
     • Geometry-aware sampling [Reijgwart, RA-L 2020] [Wang, RA-L 2022] [Yokozuka, ICRA2021]
     These sampling approaches introduce approximation errors, creating a trade-off between processing cost (number of points) and estimation accuracy.
     To reduce the number of points as much as possible while keeping the result as accurate as possible, we propose point cloud downsampling based on efficient and exact coreset extraction.
  5. Exact Point Cloud Downsampling
     Gauss-Newton optimization models an error function in the quadratic form. Around an evaluation point, the registration error function over corresponding source / target points becomes
         f_REG(Δx) ≈ c + 2 bᵀΔx + ΔxᵀH Δx,
     with information matrix H = JᵀJ, information vector b = Jᵀe, and constant c = eᵀe.
     We select a weighted subset of the input data (weights over selected residual indices) such that it yields the same quadratic function: the parameters (H̃, b̃, c̃) calculated from the extracted subset equal the parameters (H, b, c) calculated from the original set. There is no approximation error at the evaluation point!!
     During optimization, we re-linearize f_REG using the selected subset (coreset).
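The quadratic model above can be sketched numerically. This is a minimal illustration with a synthetic Jacobian and scalar residuals (not the actual registration error function from the paper): the quadratic form built from H, b, c reproduces the linearized squared error for any step.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 200, 6                        # number of residuals, pose parameters
J = rng.normal(size=(N, D))          # stacked per-residual Jacobian rows
e = rng.normal(size=N)               # residuals at the evaluation point

# Gauss-Newton quadratic model: f(dx) = c + 2 b^T dx + dx^T H dx
H = J.T @ J                          # information matrix
b = J.T @ e                          # information vector
c = e @ e                            # constant term

def f_quad(dx):
    return c + 2 * b @ dx + dx @ H @ dx

# The quadratic form equals the linearized squared error for any step dx
dx = 0.01 * rng.normal(size=D)
assert np.isclose(f_quad(dx), np.sum((e + J @ dx) ** 2))
```

Any weighted subset of residuals that reproduces the same (H, b, c) therefore yields an identical quadratic model at this evaluation point.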
  6. Efficient Coreset Extraction
     [Figure: Caratheodory selection — input rows 𝒉1 … 𝒉7 with mean 𝝁ℎ; selected rows 𝒉1 = ෩𝒉𝑎, 𝒉3 = ෩𝒉𝑏, 𝒉4 = ෩𝒉𝑐 with weights]
     By the Caratheodory theorem, the Hessian matrix 𝑯 can be reconstructed from at most 𝐷² + 1 weighted rows of the Jacobian matrix 𝑱 (𝒉𝑘 denotes the contribution of the k-th row of 𝑱).
     We implemented [Maalouf, NeurIPS 2019], which finds an exact coreset in 𝑂(𝑁𝐷) time, and extended it to:
     • reconstruct 𝒃 and 𝑐 in addition to 𝑯, to recover the original quadratic function
     • control accuracy vs. speed by changing the number of data points to be selected
     • speed up the algorithm by eliminating non-upper-triangular elements of 𝑯
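The Caratheodory step can be sketched as follows. This is a naive version that eliminates one point per iteration (not the fast O(ND) algorithm of [Maalouf, NeurIPS 2019] that the authors build on, and without their extensions for b, c, and symmetry). Applied to the flattened outer products of the Jacobian rows, it reconstructs H exactly from at most D² + 1 weighted rows:

```python
import numpy as np

def caratheodory(P, u):
    """Given points P (N, d) with positive weights u summing to 1, return
    indices and weights of at most d + 1 points with the same weighted mean
    (naive Caratheodory: eliminate one point per iteration)."""
    u = u.astype(float).copy()
    idx = np.arange(len(P))
    d = P.shape[1]
    while len(idx) > d + 1:
        pts = P[idx]
        # Find v with sum(v) = 0 and sum(v_k * p_k) = 0, via the null
        # space of the points expressed relative to the first point
        A = (pts[1:] - pts[0]).T                 # d x (n-1), with n-1 > d
        v_rest = np.linalg.svd(A)[2][-1]         # null-space direction
        v = np.concatenate(([-v_rest.sum()], v_rest))
        if not (v > 1e-12).any():
            v = -v
        pos = v > 1e-12
        # Largest step keeping all weights non-negative; at least one
        # weight reaches zero and that point is dropped
        alpha = np.min(u[idx][pos] / v[pos])
        w = u[idx] - alpha * v
        u[idx] = w
        idx = idx[w > 1e-12]
    return idx, u[idx]

# H = sum_k j_k j_k^T is N times the mean of the flattened outer products,
# so Caratheodory yields at most D^2 + 1 weighted rows reproducing H exactly.
rng = np.random.default_rng(0)
N, D = 50, 3
J = rng.normal(size=(N, D))
H = J.T @ J

outer = np.einsum('ki,kj->kij', J, J).reshape(N, D * D)  # vec(j_k j_k^T)
idx, w = caratheodory(outer, np.full(N, 1.0 / N))
H_sub = (N * w) @ outer[idx]                             # weighted subset

assert len(idx) <= D * D + 1
assert np.allclose(H_sub.reshape(D, D), H, atol=1e-6)
```

Because the outer products are symmetric, they effectively live in a D(D+1)/2-dimensional space, which is what the authors exploit by eliminating the non-upper-triangular elements of H.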
  7. Downsampling Results
     [Figure: target points 𝒫𝑖 (10,000 pts), source points 𝒫𝑗 (10,000 pts), and exactly sampled subsets ෨𝒫𝑗29 (29 residuals) and ෨𝒫𝑗512 (512 residuals)]
     𝒫𝑗, ෨𝒫𝑗29, and ෨𝒫𝑗512 all yield the same quadratic registration error function for 𝒫𝑖 at the evaluation point (exact sampling).
     Under displacements from the evaluation point, the exact sampling approximates the non-linearity of the error function better than random sampling.
  8. Optimization Time Reduction
     The proposed exact downsampling was evaluated on the KITTI dataset. An extremely dense registration error minimization factor graph was created:
     • 4,540 pose variables
     • 585,417 registration error factors
     • Registration errors of 10,000 points re-linearized for each factor at every optimization iteration
     Optimization time and memory consumption (128-thread CPU):
     • No downsampling: 13 hours and 22 GB
     • Exact downsampling: 1.7 hours and 0.25 GB (87% and 99% reduction!!)
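The reduction percentages follow directly from the reported measurements (a quick arithmetic check):

```python
# Reductions computed from the numbers reported on the slide
time_reduction = (13.0 - 1.7) / 13.0      # optimization time: 13 h -> 1.7 h
memory_reduction = (22.0 - 0.25) / 22.0   # memory: 22 GB -> 0.25 GB

assert round(time_reduction * 100) == 87
assert round(memory_reduction * 100) == 99
```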
  9. Trajectory Estimation Accuracy
     • Large accuracy gain compared to pose graph optimization
     • Very small accuracy drop from the result without downsampling