Depth Dependence Removal in RGB-D Semantic Segmentation Model

2023 IEEE International Conference on Mechatronics and Automation (ICMA)(2023)

Abstract
RGB-D semantic segmentation is gaining increasing attention because it provides greater accuracy than traditional RGB semantic segmentation. The key idea is to train a convolutional neural network (CNN) model on RGB-D images. In general, such a model can segment RGB images effectively only when depth information is available. In practice, however, most cameras capture only RGB channels, which makes it difficult to segment RGB images accurately without depth. To solve this problem, a depth dependence removal method is proposed for RGB-D semantic segmentation models. The method substitutes simulated depth for real depth during semantic segmentation, which not only frees the model from its dependence on real depth but also preserves the model's accuracy advantage. First, in the training phase, we exploit the depth relationships between pixels in a local area to build a depth similarity function, and use this function to boost the convolution and pooling operations of the CNN, thereby improving accuracy. Second, we construct an optimization function, based on the depth similarity function, that recovers simulated depth information from RGB images. Finally, we replace real depth with the simulated depth for semantic segmentation, removing the CNN's depth dependence. We evaluate the method on the NYUv2 and SUN RGB-D datasets. The results indicate that the proposed depth dependence removal method achieves favorable segmentation of RGB images.
Keywords
RGB-D semantic segmentation, depth dependence removal