Deep Surface Reconstruction from Point Clouds with Visibility Information

ICPR 2022

Abstract
Most current neural networks for reconstructing surfaces from point clouds ignore sensor poses and operate only on raw point locations. Sensor visibility, however, carries meaningful information about space occupancy and surface orientation. In this paper, we present two simple ways to augment raw point clouds with visibility information so that it can be directly leveraged by surface reconstruction networks with minimal adaptation. Our proposed modifications consistently improve both the accuracy of the generated surfaces and the ability of the networks to generalize to unseen shape domains. Our code and data are available at https://github.com/raphaelsulzer/dsrv-data.
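The abstract does not spell out how the augmentation is encoded, but a natural way to attach visibility information to a raw point cloud is to append, per point, the unit vector pointing from the point toward the sensor that observed it. The sketch below illustrates this idea; the function name `augment_with_visibility` and the concrete encoding are assumptions for illustration, not the paper's exact method.

```python
import numpy as np

def augment_with_visibility(points, sensor_positions):
    """Append a per-point unit direction toward the observing sensor.

    points: (N, 3) array of raw point locations.
    sensor_positions: (N, 3) array, sensor pose for each point.
    Returns an (N, 6) array: [x, y, z, dx, dy, dz].

    Hypothetical encoding: one of several ways a network could
    consume visibility information alongside raw coordinates.
    """
    dirs = sensor_positions - points
    # Normalize to unit length; clip guards against zero-length vectors.
    norms = np.linalg.norm(dirs, axis=1, keepdims=True)
    dirs = dirs / np.clip(norms, 1e-12, None)
    return np.concatenate([points, dirs], axis=1)
```

A network that previously consumed (N, 3) inputs then only needs its first layer widened to accept (N, 6) inputs, which matches the abstract's claim of "minimal adaptation".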
Key words
current neural networks, deep surface reconstruction, generated surfaces, meaningful information regarding space occupancy, point clouds, point locations, sensor poses, sensor visibility, surface orientation, surface reconstruction networks, visibility information