


Point2color: 3D Point Cloud Colorization Using a Conditional Generative Network and Differentiable Rendering for Airborne LiDAR

CVPR workshop "EarthVision 2021"

Authors: Takayuki Shinohara, Haoyi Xiu, and Masashi Matsuoka (Tokyo Institute of Technology)

Abstract: Airborne LiDAR observations are very effective for providing accurate 3D point clouds, and archived data are becoming available to the public.
In many cases, the published 3D point clouds observed by airborne LiDAR (airborne 3D point clouds) contain only geometric information, and geometric information alone offers poor visual readability.
Thus, it is important to colorize airborne 3D point clouds to improve visual readability.
A scheme for 3D point cloud colorization using a conditional generative adversarial network (cGAN) has been proposed, but it is difficult to apply to airborne LiDAR because the method was designed for artificial CAD models.
Since airborne 3D point clouds are spread over a wider area than simple CAD models, it is important to evaluate them spatially in two-dimensional (2D) images.
Currently, a differentiable renderer is the most reliable way to bridge 3D point clouds and 2D images.
In this paper, we propose an airborne 3D point cloud colorization scheme called point2color using cGAN with points and rendered images.
To achieve airborne 3D point cloud colorization, we estimate the color of each point with PointNet++ and render the estimated colored airborne 3D point cloud into a 2D image with a differentiable renderer.
The network is then trained by minimizing the distance between the real colors and the colorized fake colors.
The experimental results on the IEEE GRSS 2018 Data Fusion Contest dataset demonstrate the effectiveness of point2color, which achieves lower error than previous studies.
Furthermore, an ablation study demonstrates the effectiveness of using a cGAN pipeline and 2D images via a differentiable renderer.
Our code will be available at https://github.com/shnhrtkyk/point2color.


Transcript

  1. Point2color: 3D Point Cloud Colorization Using a Conditional Generative Network and Differentiable Rendering for Airborne LiDAR
     Takayuki Shinohara, Haoyi Xiu, and Masashi Matsuoka (Tokyo Institute of Technology)
     19 June 2021, Online, EarthVision 2021
  2. Point2color: the 3D point cloud colorization task
     - Estimating the color of each point from geometric 3D point clouds observed by airborne LiDAR
     - Input: point cloud (x, y, z), with low visual readability
     - Output: colored point cloud (x, y, z, R, G, B), with high visual readability
  3. Publicly open 3D point clouds
     - Easy-to-access 3D point clouds: OpenTopography; Association for Promotion of Infrastructure Geospatial Information Distribution (AIGID)
     - Many open datasets have only geometric information, which makes the visual readability of the point clouds a problem.
     - To improve the visual readability of point clouds when only geometric data is available, we developed a colorization method.
  4. Related studies
     - Conditional GAN-based colorization
       - Image colorization methods: realistic colorization results from actual images
       - Point colorization method [Liu et al., 2019]: only for simple CAD data
     - Differentiable rendering: projecting a point cloud onto 2D images using differentiable rendering
     - We propose a cGAN-based colorization model (Point2color) using the raw point cloud and an image from differentiable rendering.
  5. Overall colorization strategy: cGAN-based pipeline
     - Generator (PointNet++): input data P -> colorized fake points C_fake
     - Point cloud discriminator (PointNet++): judges real points C_real vs. colorized fake points C_fake
     - Differentiable rendering: projects C_fake to the colorized fake image I_fake
     - Image discriminator (CNN): judges the real image I_real vs. the fake image I_fake
     A sketch of this data flow follows below.
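The slide's diagram compresses the whole data flow into four boxes. Below is a minimal PyTorch-style sketch of that flow; `generator`, `point_disc`, `image_disc`, and `render` are hypothetical stand-ins for those boxes, and all tensor shapes are assumptions, not the paper's exact interfaces.

```python
import torch

# Hypothetical stand-ins for the slide's components (shapes are assumptions):
#   generator(xyz):     (B, N, 3) xyz -> (B, N, 3) RGB    (PointNet++-based)
#   point_disc(points): (B, N, 6) xyz+RGB -> (B, 1) score (PointNet++-based)
#   image_disc(img):    (B, 3, H, W) -> (B, 1) score      (Pix2Pix-style CNN)
#   render(xyz, rgb):   differentiable renderer -> (B, 3, H, W) image

def forward_pass(xyz, rgb_real, img_real, generator, point_disc, image_disc, render):
    """Data flow of the cGAN pipeline with two discriminators."""
    c_fake = generator(xyz)            # per-point fake colors C_fake
    i_fake = render(xyz, c_fake)       # rendered fake image I_fake

    # The point discriminator sees colored points; the image discriminator sees renders.
    score_pt_fake = point_disc(torch.cat([xyz, c_fake], dim=-1))
    score_pt_real = point_disc(torch.cat([xyz, rgb_real], dim=-1))
    score_im_fake = image_disc(i_fake)
    score_im_real = image_disc(img_real)
    return c_fake, i_fake, (score_pt_fake, score_pt_real, score_im_fake, score_im_real)
```

Keeping the rendering step inside the forward pass is what lets the image discriminator's gradient reach the per-point colors.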
  6. Network: generator (PointNet++ [Qi et al., 2017]-based)
     - Input patch: N points with (x, y, z); output: fake color (R, G, B) for each point
     - Encoder-decoder with downsampling convolutions (8,192 -> 4,096 -> 2,048 points), upsampling convolutions, and skip connections (concatenation)
     - The generator estimates the color of each point using a PointNet++ encoder-decoder; a simplified sketch follows below.
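A full PointNet++ set-abstraction/feature-propagation network is too long to reproduce here; the class below is a deliberately simplified stand-in that keeps only the encoder-decoder idea from the slide (per-point features concatenated with a pooled context feature in place of the real downsampling/upsampling path). The class name and layer sizes are invented for illustration.

```python
import torch
from torch import nn

class TinyColorGenerator(nn.Module):
    """Simplified stand-in for the PointNet++ encoder-decoder generator.

    The real network downsamples 8,192 -> 4,096 -> 2,048 points and upsamples
    back with skip connections; here a shared per-point MLP plus a global
    max-pooled feature sketches the same idea.
    """

    def __init__(self, hidden=128):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv1d(3, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 1), nn.ReLU(),
        )
        # The decoder consumes local features plus the global context feature.
        self.decode = nn.Sequential(
            nn.Conv1d(2 * hidden, hidden, 1), nn.ReLU(),
            nn.Conv1d(hidden, 3, 1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, xyz):                            # xyz: (B, N, 3)
        f = self.encode(xyz.transpose(1, 2))           # (B, C, N) local features
        g = f.max(dim=2, keepdim=True).values          # (B, C, 1) global feature
        f = torch.cat([f, g.expand_as(f)], dim=1)      # crude skip/context concat
        return self.decode(f).transpose(1, 2)          # (B, N, 3) fake colors
```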
  7. Network: point discriminator (PointNet++ [Qi et al., 2017]-based)
     - Input patch: N points with (x, y, z, R, G, B), either fake colored points or real colored points
     - Downsampling by sampling and grouping (8,192 -> 4,096 -> 2,048 points), followed by a 1D CNN
     - Outputs the probability that the input is real (judges fake or real); a sketch follows below.
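As with the generator, a faithful PointNet++ critic would be lengthy; this stand-in keeps only the judge-real-or-fake role. Since slide 9 trains with a Wasserstein distance, the sketch returns an unbounded critic score rather than a sigmoid probability; the class name and sizes are invented.

```python
import torch
from torch import nn

class TinyPointCritic(nn.Module):
    """Simplified stand-in for the PointNet++ point discriminator.

    The slide's network downsamples colored points (8,192 -> 4,096 -> 2,048)
    by sampling and grouping before a 1D CNN; a shared MLP with max pooling
    sketches the same role.
    """

    def __init__(self, hidden=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(6, hidden, 1), nn.ReLU(),   # 6 channels: x, y, z, R, G, B
            nn.Conv1d(hidden, hidden, 1), nn.ReLU(),
        )
        self.score = nn.Linear(hidden, 1)         # Wasserstein critic score

    def forward(self, pts):                        # pts: (B, N, 6) colored points
        f = self.features(pts.transpose(1, 2))     # (B, C, N)
        return self.score(f.max(dim=2).values)     # (B, 1), no sigmoid (WGAN)
```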
  8. Network: image discriminator (Pix2Pix [Isola et al., 2017]-based)
     - Fake colored points and real colored points are projected to a fake image and a real image via differentiable rendering
     - Input: H x W image patch, processed by convolutions
     - Outputs the probability that the input image is real (judges fake or real); a rendering sketch follows below.
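The slide does not name a concrete renderer; PyTorch3D's point renderer is one differentiable option that matches the description. A minimal sketch, assuming the inputs already live on `device` and an orthographic camera (the exact camera setup for airborne data is an assumption):

```python
import torch
from pytorch3d.structures import Pointclouds
from pytorch3d.renderer import (
    FoVOrthographicCameras, PointsRasterizationSettings,
    PointsRasterizer, PointsRenderer, AlphaCompositor,
)

def render_colored_points(xyz, rgb, image_size=256, device="cpu"):
    """Project colored points to a 2D image; gradients flow back to rgb."""
    cameras = FoVOrthographicCameras(device=device)   # assumed camera model
    raster_settings = PointsRasterizationSettings(
        image_size=image_size, radius=0.01, points_per_pixel=8,
    )
    renderer = PointsRenderer(
        rasterizer=PointsRasterizer(cameras=cameras, raster_settings=raster_settings),
        compositor=AlphaCompositor(),
    )
    clouds = Pointclouds(points=[xyz], features=[rgb])  # one cloud per batch entry
    return renderer(clouds)                 # (1, H, W, 3) differentiable image
```

Because rasterization and compositing are differentiable, the image discriminator's loss can propagate through the rendered image back into the generator's per-point colors.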
  9. Optimization
     - Regression: L1 distance of RGB
       $L_{L1}^{point} = \mathbb{E}\left[ \lVert C_{fake} - C_{real} \rVert_1 \right]$, $L_{L1}^{image} = \mathbb{E}\left[ \lVert I_{fake} - I_{real} \rVert_1 \right]$
     - GAN: Wasserstein distance
       $L_{G}^{point} = -\mathbb{E}\left[ D_{point}(C_{fake}) \right]$, $L_{G}^{image} = -\mathbb{E}\left[ D_{image}(I_{fake}) \right]$
       $L_{D}^{point} = \mathbb{E}\left[ D_{point}(C_{fake}) \right] - \mathbb{E}\left[ D_{point}(C_{real}) \right]$
       $L_{D}^{image} = \mathbb{E}\left[ D_{image}(I_{fake}) \right] - \mathbb{E}\left[ D_{image}(I_{real}) \right]$
     - Total loss
       $L_{G} = L_{G}^{point} + L_{G}^{image} + \lambda L_{L1}^{point} + \lambda L_{L1}^{image}$
       $L_{D} = L_{D}^{point} + L_{D}^{image}$
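Translated directly into code, the losses above could look like the sketch below. Here `d_point` and `d_image` are the two critics (fed colored points and rendered images respectively), and `lam=100.0` is an assumed weight borrowed from the usual Pix2Pix default, since the slide does not give the value of λ.

```python
import torch

def l1_losses(c_fake, c_real, i_fake, i_real):
    """L1 regression terms: L_L1^point and L_L1^image."""
    return (c_fake - c_real).abs().mean(), (i_fake - i_real).abs().mean()

def generator_loss(d_point, d_image, xyz, c_fake, c_real, i_fake, i_real, lam=100.0):
    """L_G = L_G^point + L_G^image + lambda * (L_L1^point + L_L1^image)."""
    pts_fake = torch.cat([xyz, c_fake], dim=-1)       # critic sees xyz + colors
    l1_pt, l1_im = l1_losses(c_fake, c_real, i_fake, i_real)
    return (-d_point(pts_fake).mean() - d_image(i_fake).mean()
            + lam * l1_pt + lam * l1_im)

def discriminator_loss(d_point, d_image, xyz, c_fake, c_real, i_fake, i_real):
    """L_D = L_D^point + L_D^image (Wasserstein critic losses)."""
    pts_fake = torch.cat([xyz, c_fake.detach()], dim=-1)  # stop generator grads
    pts_real = torch.cat([xyz, c_real], dim=-1)
    return (d_point(pts_fake).mean() - d_point(pts_real).mean()
            + d_image(i_fake.detach()).mean() - d_image(i_real).mean())
```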
  10. Experimental data: GRSS Data Fusion Contest 2018
      - Airborne LiDAR and aerial photo data over an urban target area
      - GT color: taken from the aerial photo
      - Preprocessing: removal of isolated points
      - Training patches: 25 m x 25 m with a 5 m buffer (30 m x 30 m total), 4,096 points per patch, 1,000 patches (see the sketch below)
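A hedged NumPy sketch of the patching recipe described on this slide (25 m tiles, 5 m buffer, 4,096 points per patch); the paper's exact tiling and sampling procedure may differ, and the function name and defaults are invented.

```python
import numpy as np

def extract_patches(points, patch=25.0, buffer=5.0, n_points=4096, seed=0):
    """Cut a large airborne point cloud into fixed-size training patches.

    Tiles the XY extent into patch x patch meter cells, keeps the points in
    each cell plus a buffer-meter margin (30 m x 30 m total), and randomly
    samples n_points of them, mirroring the slide's 25 m / 5 m / 4,096 setup.
    """
    rng = np.random.default_rng(seed)
    xy_min = points[:, :2].min(axis=0)
    patches = []
    for x0 in np.arange(xy_min[0], points[:, 0].max(), patch):
        for y0 in np.arange(xy_min[1], points[:, 1].max(), patch):
            lo = np.array([x0 - buffer, y0 - buffer])
            hi = np.array([x0 + patch + buffer, y0 + patch + buffer])
            mask = np.all((points[:, :2] >= lo) & (points[:, :2] < hi), axis=1)
            idx = np.flatnonzero(mask)
            if len(idx) >= n_points:          # skip sparse cells
                patches.append(points[rng.choice(idx, n_points, replace=False)])
    return patches
```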
  11. Colorized point cloud results
      - The proposed colorization method generated better results than previous models, although the colors of small objects were ignored.
      - [Figure: rendered images of the input, GT, the previous method (non-vivid colors), and Point2color (vivid colors); MAE values of 0.25, 0.22, and 0.1 are shown for the compared results]
  12. Conclusion and future work
      - Conclusion
        - We proposed a colorization model (point2color) for point clouds observed by airborne LiDAR using a cGAN.
        - We combined two discriminators, one for the point cloud and one for the 2D image obtained via differentiable rendering.
        - The generated colors are more realistic and achieve a lower MAE than the previous model.
      - Future work
        - Limited test data: evaluate generalization performance on various test data.
        - Only MAE evaluation: evaluate segmentation performance.