
Non-overlapping RGB-D Camera Network Calibration with Monocular Visual Odometry
Kenji Koide and Emanuele Menegatti
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS2020)

koide3

June 28, 2024

Transcript

  1. Non-overlapping RGB-D Camera Network Calibration with Monocular Visual Odometry. Kenji Koide* and Emanuele Menegatti**. * National Institute of Advanced Industrial Science and Technology, Japan. ** University of Padova, Italy. Code available at: https://github.com/koide3/sparse_dynamic_calibration

  2. Camera network-based HMI framework (http://openptrack.org/, UCLA Remap). Used for R&D and education projects: people position and skeleton tracking, face and gesture recognition, and arbitrary object tracking.

  3–9. Traditional Calibration Method [Munaro 2016] (M. Munaro et al., “OpenPTrack: Open source multi-camera calibration and people tracking for RGB-D camera networks”, RAS 2016):
     1. Print out a large chessboard
     2. Show the board to several cameras
     3. Move the board to a new position
     4. Repeat 2 and 3
     5. Optimize the pose graph
     (A minimal per-view sketch of this procedure follows below.)
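
The chessboard procedure above reduces, per view, to detecting the board and estimating its pose with PnP; cameras that observe the same board placement then share a relative-pose edge in the pose graph. The following is a minimal sketch of that per-view step with OpenCV; the board dimensions, square size, and intrinsics are illustrative assumptions, not values from the paper.

```python
import cv2
import numpy as np

# Illustrative assumptions: a 9x6 inner-corner chessboard with 0.10 m squares
# and known intrinsics K / dist from a prior intrinsic calibration.
PATTERN = (9, 6)
SQUARE = 0.10
K = np.array([[525.0, 0.0, 320.0],
              [0.0, 525.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# 3D corner coordinates in the board frame (z = 0 plane).
obj_pts = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def board_pose(gray):
    """Return the board pose (R, t) in the camera frame, or None if not detected."""
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

# When cameras A and B observe the same board placement, T_A_B =
# T_A_board * inv(T_B_board) becomes one relative-pose edge of the pose graph.
```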
  10–14. Problems:
     1. Limited camera arrangement and calibration accuracy: an A0 pattern becomes very small in a distant camera view (CAM1 and CAM2 a few meters apart), so the distance between cameras must be < 5 m and PnP pose estimation accuracy is deteriorated.
     2. Requires large overlap between cameras: there is an overlap (accuracy) vs. coverage tradeoff; a small overlap results in limited pattern pose variation and thus worse calibration accuracy.
     3. Impossible to calibrate non-overlapping cameras: all cameras must share a common view, so inter-room camera arrangements are difficult.

  15–22. VO-based Online Calibration. We use visual odometry on a dynamic support camera to “bridge” separated camera views:
     1. Place fiducial tags (e.g., AprilTag)
     2. Show the tags to the dynamic camera
     3. Estimate the static camera poses 𝐶𝑖, tag poses 𝑀𝑗, and visual odometry poses 𝑉𝑘 by landmark-based pose graph SLAM (nodes 𝐶0, 𝐶1, 𝐶2, 𝑀0, 𝑀1, 𝑀2, and 𝑉0, …, 𝑉𝑡; a structural sketch of the graph follows below)
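
The paper optimizes this landmark-based pose graph with Levenberg-Marquardt in g2o (see slide 23). The snippet below is a structural sketch only, using GTSAM's Python bindings as a stand-in back end; the keys, noise values, and placeholder measurements are illustrative assumptions, not the actual system.

```python
import numpy as np
import gtsam
from gtsam import Pose3, BetweenFactorPose3, PriorFactorPose3, symbol

# Node keys: static RGB-D cameras C_i, fiducial tags M_j, VO keyframes V_k.
C = lambda i: symbol('c', i)
M = lambda j: symbol('m', j)
V = lambda k: symbol('v', k)

noise = gtsam.noiseModel.Diagonal.Sigmas(np.full(6, 0.05))  # placeholder covariance
graph = gtsam.NonlinearFactorGraph()
graph.add(PriorFactorPose3(C(0), Pose3(), noise))           # fix the gauge at C_0

# Tag observations by the static cameras (relative poses from tag detection).
graph.add(BetweenFactorPose3(C(0), M(0), Pose3(), noise))
graph.add(BetweenFactorPose3(C(1), M(1), Pose3(), noise))

# Visual odometry edge between consecutive support-camera keyframes.
graph.add(BetweenFactorPose3(V(0), V(1), Pose3(gtsam.Rot3(), np.array([0.3, 0.0, 0.0])), noise))

# Tag observations by the support camera "bridge" the separated static cameras.
graph.add(BetweenFactorPose3(V(0), M(0), Pose3(), noise))
graph.add(BetweenFactorPose3(V(1), M(1), Pose3(), noise))

initial = gtsam.Values()
for key in [C(0), C(1), M(0), M(1), V(0), V(1)]:
    initial.insert(key, Pose3())

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result.atPose3(C(1)))   # estimated extrinsics of camera C_1
```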
  23. Inter-room Camera Network Calibration. Visual odometry: Direct Sparse Odometry [Engel, 2017]; fiducial tags: AprilTag 2 [Wang, 2016]; graph optimization: Levenberg-Marquardt in g2o [Kuemmerle, 2011].
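
The tag observation edges come from detecting each fiducial tag and estimating its pose relative to the observing camera. The system uses the AprilTag 2 detector; as a hedged sketch, the code below substitutes OpenCV's ArUco module with the AprilTag 36h11 family (OpenCV >= 4.7 API), and the intrinsics and tag size are illustrative assumptions.

```python
import cv2
import numpy as np

TAG_SIZE = 0.16  # tag edge length in meters (illustrative)
K = np.array([[615.0, 0.0, 320.0], [0.0, 615.0, 240.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Tag corner coordinates in the tag frame (center at the origin, z = 0),
# in the order expected by SOLVEPNP_IPPE_SQUARE.
s = TAG_SIZE / 2.0
tag_corners_3d = np.array([[-s,  s, 0], [ s,  s, 0],
                           [ s, -s, 0], [-s, -s, 0]], np.float32)

detector = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_APRILTAG_36h11),
    cv2.aruco.DetectorParameters())

def tag_edges(gray):
    """Yield (tag_id, R, t): the pose of each detected tag in the camera frame."""
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return
    for tag_id, c in zip(ids.flatten(), corners):
        ok, rvec, tvec = cv2.solvePnP(tag_corners_3d, c.reshape(-1, 1, 2), K, dist,
                                      flags=cv2.SOLVEPNP_IPPE_SQUARE)
        if ok:
            R, _ = cv2.Rodrigues(rvec)
            yield int(tag_id), R, tvec   # one camera-tag edge for the pose graph
```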
  24–26. Depth-based Calibration Refinement. Online calibration bridges separated cameras, but the calibration accuracy relies on VO and can deteriorate in feature-less environments. We therefore use depth information to refine the result:
     • Floor plane constraint
     • ICP constraint
  27–33. Depth-based Constraints (see the sketch below).
     Floor plane constraint:
     - Detect the floor plane using RANSAC
     - Align the camera poses such that the detected floor planes become identical (or parallel)
     ICP constraint:
     - Add constraints between closest points
     - Optimize the graph
     - Remove the ICP constraints
     - Repeat the above process
     Constraints created in the preceding steps are kept, and no large overlap is required.
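
A rough sketch of the two depth-based constraints follows, using Open3D as a convenient stand-in for the point cloud processing; the thresholds, helper names, and iteration scheme shown here are illustrative assumptions rather than the released implementation.

```python
import numpy as np
import open3d as o3d

def floor_plane(points_xyz, dist_thresh=0.02):
    """Detect the dominant (floor) plane with RANSAC; returns (a, b, c, d)."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points_xyz))
    plane, inliers = pcd.segment_plane(distance_threshold=dist_thresh,
                                       ransac_n=3, num_iterations=1000)
    return np.asarray(plane)   # plane normal (a, b, c) and offset d

def icp_edge(source_xyz, target_xyz, init=np.eye(4), max_corr=0.10):
    """One round of point-to-point ICP between two depth clouds; the resulting
    relative transform can be added to the graph as a (temporary) ICP constraint."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(source_xyz))
    tgt = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(target_xyz))
    reg = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return reg.transformation

# Refinement loop (sketch): detect the floor plane per camera and constrain all
# cameras so the detected planes coincide; then repeatedly (i) add ICP edges
# between closest points of overlapping clouds, (ii) optimize the graph,
# (iii) drop the ICP edges and re-associate points with the updated poses.
```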
  34. Simulation Experiment. Gazebo simulator; lens distortion noise is injected to simulate deteriorated visual odometry (a sketch of such a distortion model follows below). Two camera arrangements are evaluated (Arrangement 1 and Arrangement 2).
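
The slide does not detail how the lens distortion noise is injected; purely as an illustration of the idea, the sketch below perturbs normalized image points with a small Brown radial distortion, which shifts feature locations and thereby degrades visual odometry. The coefficient values are arbitrary examples.

```python
import numpy as np

def inject_radial_distortion(pts_norm, k1=0.05, k2=0.01):
    """Apply Brown radial distortion to normalized image points (x, y) = (X/Z, Y/Z).
    Small unmodelled k1/k2 terms like these perturb projected feature locations,
    which is the kind of degradation the simulation introduces for the VO."""
    x, y = pts_norm[:, 0], pts_norm[:, 1]
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2
    return np.stack([x * scale, y * scale], axis=1)
```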
  35. Simulation Experiment. Constructed pose graph: separated camera groups are bridged by VO vertices via tag vertices. Depth-based refinement results are shown before and after refinement.
  36–39. Experiment in a real environment (ground truth measured using FARO).
     Arrangement 1 (small overlap):
     • ~6 mm accuracy achieved
     • The traditional method deteriorated with the small overlap
     • The proposed method retained its accuracy
     (Point cloud comparison of Traditional, Proposed w/o refinement, and Proposed w/ refinement; green: table surface, orange: floor surface)
     Arrangement 2 (inter-room):
     • The traditional method is no longer able to calibrate
     • The proposed method showed high accuracy thanks to the VO-based bridging
     • The refinement step further improved the accuracy