PMSA-DyTr: Prior-Modulated and Semantic-Aligned Dynamic Transformer for Strip Steel Defect Detection

IEEE Transactions on Industrial Informatics (2024)

Abstract
In-process hot-rolled strip steel suffers from complicated yet unavoidable surface defects owing to its harsh production environment. Automated visual inspection of these defects consistently faces interclass similarity, intraclass variation, low contrast, and overlapping defects, which tend to trigger false or missed detections. This article proposes a prior-modulated and semantic-aligned dynamic transformer, called PMSA-DyTr. In this framework, a long short-term self-attention embedded with local convolution is designed to help the encoder eliminate noise ambiguity between defects and backgrounds. A semantic aligner is then bridged between the encoder and the decoder to align semantics and speed up convergence, and a prior-modulated cross attention is proposed to alleviate the sample deficiency of a data-driven transformer. Furthermore, a gate controller is constructed to dynamically select the minimal number of encoder blocks while preserving detection accuracy. The proposed PMSA-DyTr outperforms 19 state-of-the-art models in mean average precision with an inference time of 54.67 ms and visually performs best in detecting low-contrast and multiple small defects.
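The gate-controller idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; the block functions, gate scores, and threshold below are hypothetical stand-ins, with gate scores given explicitly rather than predicted by a learned gating network.

```python
def make_block(scale):
    # Hypothetical encoder block: scales every feature value.
    def block(features):
        return [x * scale for x in features]
    return block

def gated_encode(features, blocks, gate_scores, threshold=0.5):
    """Run only the encoder blocks whose gate score reaches the threshold.

    In a learned system, gate_scores would come from a small gating
    network; here they are supplied directly for illustration.
    """
    executed = 0
    for block, score in zip(blocks, gate_scores):
        if score >= threshold:  # gate open: execute this block
            features = block(features)
            executed += 1
        # gate closed: block skipped entirely, saving compute
    return features, executed

blocks = [make_block(2.0), make_block(3.0), make_block(0.5)]
out, used = gated_encode([1.0, 2.0], blocks, gate_scores=[0.9, 0.2, 0.8])
# Blocks 1 and 3 run; block 2 is skipped, so only 2 of 3 blocks execute.
```

The point of such a controller is that skipped blocks cost nothing at inference time, which is how a dynamic transformer can shrink its encoder depth per input while keeping accuracy on the inputs that need the full stack.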
Key words
Automated visual inspection (AVI), encoder-decoder network, salient detection, steel strip, transformer