Transientangelo:
Few-Viewpoint Surface Reconstruction Using Single-Photon Lidar

Weihan Luo · Anagh Malik · David B. Lindell
University of Toronto · Vector Institute

Abstract

We consider the problem of few-viewpoint 3D surface reconstruction using raw measurements from a lidar system. Lidar captures 3D scene geometry by emitting pulses of light to a target and recording the speed-of-light time delay of the reflected light. However, conventional lidar systems do not output the raw, captured waveforms of backscattered light; instead, they pre-process these data into a 3D point cloud. Since this procedure typically does not accurately model the noise statistics of the system, exploit spatial priors, or incorporate information about downstream tasks, it ultimately discards useful information that is encoded in raw measurements of backscattered light. Here, we propose to leverage raw measurements captured with a single-photon lidar system from multiple viewpoints to optimize a neural surface representation of a scene. The measurements consist of time-resolved photon count histograms, or transients, which capture information about backscattered light at picosecond time scales. Additionally, we develop new regularization strategies that improve robustness to photon noise, enabling accurate surface reconstruction with as few as 10 photons per pixel. Our method outperforms other techniques for few-viewpoint 3D reconstruction based on depth maps, point clouds, or conventional lidar, as demonstrated in simulation and with captured data.
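For intuition, the sketch below (not the paper's code) simulates a single-pixel transient: a time-resolved photon-count histogram whose peak position encodes depth through the round-trip time of flight and whose counts follow Poisson statistics in the low-flux regime. All parameter names and values (bin width, pulse width, photon levels) are illustrative assumptions.

# Minimal sketch, assuming a Gaussian pulse model and Poisson photon statistics.
import numpy as np

C = 3e8  # speed of light (m/s)

def simulate_transient(depth_m, reflectivity, num_bins=1024, bin_width_ps=8.0,
                       pulse_sigma_ps=30.0, mean_signal_photons=10.0,
                       mean_background_photons=1.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Round-trip time of flight (picoseconds) places the return pulse in the histogram.
    tof_ps = 2.0 * depth_m / C * 1e12
    t = (np.arange(num_bins) + 0.5) * bin_width_ps
    # Gaussian approximation of the laser pulse / timing jitter, scaled by reflectivity.
    pulse = np.exp(-0.5 * ((t - tof_ps) / pulse_sigma_ps) ** 2)
    pulse *= reflectivity / max(pulse.sum(), 1e-12)
    # Expected photons per bin: surface return plus uniform ambient/dark-count background.
    rate = mean_signal_photons * pulse + mean_background_photons / num_bins
    # Photon detections follow Poisson statistics (ignoring pile-up effects).
    return rng.poisson(rate)

# Example: a surface 1 m away with reflectivity 0.8, roughly 10 photons per pixel.
hist = simulate_transient(depth_m=1.0, reflectivity=0.8)
print(hist.shape, hist.sum())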


Method

[Figure: method overview / architecture]

Given a few lidar scans of a scene (fewer than five), we optimize a surface-based representation that learns to reconstruct the 3D scene. We propose a weight variance regularization technique over unseen viewpoints (bottom right, blue box) that improves the overall quality of the reconstruction, and a reflectivity loss (top right, orange box; second term) that prevents divergence and improves results in low-photon settings. Overall, our method achieves state-of-the-art performance on both the simulated and captured datasets and is robust to as few as 10 photons per pixel (ppp). A rough sketch of the weight variance regularizer follows below.
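As one plausible reading of the weight variance regularization (not the released implementation), the sketch below penalizes the spread of the volume rendering weights along rays cast from unseen viewpoints, encouraging them to concentrate around a single surface crossing. Tensor shapes and the loss weight lambda_var are illustrative assumptions.

# Minimal sketch, assuming NeRF/SDF-style volume rendering weights per ray.
import torch

def weight_variance_loss(weights, t_vals):
    # weights: (num_rays, num_samples) volume rendering weights along each ray
    # t_vals:  (num_rays, num_samples) sample distances along each ray
    w = weights / (weights.sum(dim=-1, keepdim=True) + 1e-8)  # normalize per ray
    mean_t = (w * t_vals).sum(dim=-1, keepdim=True)           # expected ray depth
    var_t = (w * (t_vals - mean_t) ** 2).sum(dim=-1)          # spread around it
    return var_t.mean()

# Example usage inside a training step (shapes are illustrative):
weights = torch.rand(512, 128).softmax(dim=-1)
t_vals = torch.linspace(2.0, 6.0, 128).expand(512, -1)
lambda_var = 0.01
loss_reg = lambda_var * weight_variance_loss(weights, t_vals)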


Surface Reconstruction

[Interactive viewer: surface reconstructions from MonoSDF-M, TransientNeRF, and Ours]


Simulated Results

[Interactive viewer: MonoSDF-M, TransientNeRF, and Ours on simulated scenes with two, three, and five training views]


Captured Results

[Interactive viewer: MonoSDF-M, TransientNeRF, and Ours on captured scenes with two, three, and five training views]


Low Photon Results

[Interactive viewer: TransientNeRF vs. Ours at 10, 50, 150, and 300 photons per pixel (ppp)]

In the low-photon setting, our method also shows superior performance in terms of image metrics. TransientNeRF exhibits color inconsistency as the photon count decreases, while our method remains fairly robust.


Citation

@article{luo2024transientangelo,
  author = {Luo, Weihan and Malik, Anagh and Lindell, David B.},
  title = {Transientangelo: Few-Viewpoint Surface Reconstruction Using Single-Photon Lidar},
  journal = {arXiv},
  year = {2024}
}