Distortion Correction of Depth Data from Consumer Depth Cameras

Proceedings of the 4th International Conference on 3D Body Scanning Technologies, Long Beach CA, USA, 19-20 November 2013

Abstract
Since the introduction of the Microsoft Kinect in November 2010, low-cost consumer depth cameras have rapidly increased in popularity. Their integral technology provides a means of low-cost 3D scanning, extending its accessibility to a far wider audience. Previous work has shown the 3D data from consumer depth cameras to exhibit fundamental measurement errors, likely due to their low cost and original intended application. A number of techniques to correct these errors are presented in the literature, but they are typically device specific or rely on particular open source drivers. Presented here is a simple method of calibrating consumer depth cameras that relies only on 3D scans of a plane filling the field of view, and is therefore compatible with any device capable of providing 3D point cloud data. Validation of the technique using a Microsoft Kinect sensor has shown non-planarity errors reduced to around ±3 mm, approaching the device's resolution. Further validation based on circumference measurements of a 113 mm diameter cylinder has shown a variable error of up to 45 mm reduced to a systematic overestimation of 10 mm. Further work is required to test the proposed method on objects of greater complexity and over greater distances. However, this initial work suggests great potential for a simple method of reducing the error apparent in the 3D data from consumer depth cameras, potentially increasing their suitability for a number of applications.
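As a rough illustration of the plane-based correction the abstract describes, the sketch below is a minimal example, not the authors' published implementation: it assumes a depth frame (H x W array, in mm) captured while a flat surface fills the field of view, fits a least-squares plane to it, and uses the per-pixel residuals as a correction map for later frames. All function names and the synthetic demo data are illustrative assumptions.

```python
import numpy as np

def fit_plane_residuals(depth_frame):
    """Fit a least-squares plane z = a*u + b*v + c to a depth frame of a flat
    surface and return per-pixel residuals (observed - fitted), used here as a
    simple distortion-correction map."""
    h, w = depth_frame.shape
    v, u = np.mgrid[0:h, 0:w]
    valid = depth_frame > 0                       # ignore missing depth readings
    A = np.column_stack([u[valid], v[valid], np.ones(valid.sum())])
    coeffs, *_ = np.linalg.lstsq(A, depth_frame[valid], rcond=None)
    fitted = coeffs[0] * u + coeffs[1] * v + coeffs[2]
    return np.where(valid, depth_frame - fitted, 0.0)

def correct_frame(raw_frame, correction_map):
    """Subtract the calibration residuals from a new depth frame."""
    return np.where(raw_frame > 0, raw_frame - correction_map, 0.0)

if __name__ == "__main__":
    # Synthetic example: a flat wall at 1000 mm with a smooth radial distortion.
    h, w = 480, 640
    v, u = np.mgrid[0:h, 0:w]
    r2 = ((u - w / 2) ** 2 + (v - h / 2) ** 2) / (w * h)
    calib_frame = 1000.0 + 15.0 * r2              # distorted scan of the plane
    correction = fit_plane_residuals(calib_frame)
    corrected = correct_frame(calib_frame, correction)
    print("deviation range before:", np.ptp(calib_frame[calib_frame > 0]))
    print("deviation range after: ", np.ptp(corrected[corrected > 0]))
```

A per-pixel residual map like this only captures distortion at a single calibration distance; the paper's validation over a cylinder suggests the real method also has to cope with varying depths, so treat this purely as a conceptual starting point.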
Key words
depth data