Generation of hyperspectral point clouds: Mapping, compression and rendering

Computers & Graphics (2022)

Abstract
Hyperspectral data are increasingly used for the characterization and understanding of real-world scenarios. In this field, UAV-based sensors offer the opportunity to collect multiple samples from different viewpoints, so that light-material interactions of real objects can be observed in outdoor scenarios at a high spatial resolution (5 cm/pixel). Nevertheless, the generation of hyperspectral 3D data still poses post-processing challenges due to the strong geometric deformation of the images. Most current solutions integrate both LiDAR (Light Detection and Ranging) and hyperspectral sensors into the same acquisition system. However, these systems are limited by errors derived from the inertial measurements used for data fusion and by a spatial resolution constrained by the LiDAR capabilities. In this work, a method is proposed for the generation of hyperspectral point clouds. The input data consist of push-broom hyperspectral images and 3D point clouds. The point clouds may be obtained either from a typical photogrammetric workflow or from LiDAR. The hyperspectral images are then geometrically corrected and aligned with the RGB orthomosaic, so that the hyperspectral data are ready to be mapped onto the 3D point cloud. This mapping is implemented on the GPU by testing which points are visible from each pixel of the hyperspectral imagery. This work also provides a novel solution to generate, compress and render 3D hyperspectral point clouds, enabling the study of both the geometry and the hyperspectral response of natural and artificial materials in the real world.
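As a rough illustration of the mapping step described in the abstract, the sketch below projects a point cloud into a single geometrically corrected hyperspectral image and uses a depth buffer so that each pixel's spectrum is assigned only to points visible from that pixel. The paper performs this test on the GPU for push-broom imagery; the NumPy implementation, the pinhole projection model and all names (map_spectra_to_points, depth_tol, etc.) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def map_spectra_to_points(points, spectra, K, R, t, depth_tol=0.05):
    """Assign a spectrum to every point visible in one hyperspectral image.

    points:  (N, 3) world coordinates of the point cloud.
    spectra: (H, W, B) geometrically corrected hyperspectral image (B bands).
    K:       (3, 3) camera intrinsic matrix (pinhole assumption).
    R, t:    world-to-camera rotation (3, 3) and translation (3,).
    Returns an (N, B) array; rows stay zero for points occluded in this view.
    """
    H, W, B = spectra.shape
    cam = points @ R.T + t                      # points in the camera frame
    z = cam[:, 2]
    in_front = z > 0
    z_safe = np.where(in_front, z, 1.0)         # avoid dividing by zero behind the camera
    proj = cam @ K.T                            # perspective projection
    px = np.round(proj[:, 0] / z_safe).astype(int)
    py = np.round(proj[:, 1] / z_safe).astype(int)
    valid = in_front & (px >= 0) & (px < W) & (py >= 0) & (py < H)

    # Depth buffer: remember the nearest depth seen by each pixel, so that
    # occluded points do not inherit a spectrum from this viewpoint.
    zbuf = np.full((H, W), np.inf)
    for i in np.flatnonzero(valid):             # the GPU version runs this in parallel
        if z[i] < zbuf[py[i], px[i]]:
            zbuf[py[i], px[i]] = z[i]

    out = np.zeros((points.shape[0], B), dtype=spectra.dtype)
    for i in np.flatnonzero(valid):
        if z[i] <= zbuf[py[i], px[i]] + depth_tol:   # visible within a small tolerance
            out[i] = spectra[py[i], px[i]]
    return out
```

In practice each hyperspectral image (or push-broom scan line) would contribute its own pose, and the per-point spectra gathered from the different viewpoints can then be fused before compression and rendering.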
Key words
Hyperspectral, Massively parallel algorithms, Compression, Rendering