RTHEN: Unsupervised deep homography estimation based on dynamic attention for repetitive texture image stitching

DISPLAYS(2024)

Abstract
Homography estimation is a key challenge in image alignment: the goal is to estimate the projective transformation between two images of the same plane. Unsupervised learning methods have become increasingly popular because of their strong performance and because they require no labeled data. However, in scenes with repetitive textures, correspondences between local features can be ambiguous, degrading homography estimation accuracy. This paper proposes a new unsupervised deep homography method, RTHEN, to address this problem. To capture repetitive texture features effectively, we design a multi-scale feature pyramid Siamese network (FPSN). Specifically, a dynamic attention module adaptively weights repeated texture features, and a channel attention module supplies rich contextual information for repetitive texture regions. We also propose a hard triplet loss based on overlap constraints to optimize the matching results. In addition, we collected a repetitive texture image dataset (RTID) for homography estimation training and evaluation. Experimental results show that our method outperforms existing learning-based methods in repetitive texture scenes and is competitive with state-of-the-art traditional methods.
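The abstract mentions a hard triplet loss for disambiguating matches in repetitive-texture regions. The paper's exact formulation (including the overlap constraint) is not given here, so the following is only a minimal sketch of the generic hard triplet loss idea it builds on: pull a positive feature toward its anchor, push the hardest (closest) negative beyond a margin. The function names and the margin value are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: d(a, p) should be smaller than d(a, n) by `margin`."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

def hardest_negative(anchor, negatives):
    """'Hard' mining: among candidate negatives, pick the one closest to the
    anchor -- the most confusable match, e.g. a repeated-texture look-alike."""
    dists = [np.linalg.norm(anchor - n) for n in negatives]
    return negatives[int(np.argmin(dists))]

# Toy 4-D descriptors: the 0.5-valued negative is closer, so it is mined as "hard".
anchor = np.zeros(4)
positive = np.full(4, 0.1)
negatives = [np.ones(4), np.full(4, 0.5)]
hard = hardest_negative(anchor, negatives)
loss = triplet_loss(anchor, positive, hard)  # max(0, 0.2 - 1.0 + 1.0) = 0.2
```

In RTHEN the overlap constraint would further restrict which features may serve as positives and negatives; that selection logic is omitted here since the abstract does not specify it.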
Key words
Homography estimation, Repetitive textures, Deep learning, Dynamic attention, Triplet loss