Retiformer: Retinex-Based Enhancement In Transformer For Low-Light Image

ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)(2023)

Abstract
Transformer-based methods have shown impressive potential in many low-level vision tasks but are rarely used for low-light image enhancement (LLIE). Applying Transformers directly to LLIE produces unnatural visual effects. This observation motivates us to draw on Retinex theory. After experimentation and analysis, we propose Retiformer. Retiformer decomposes images into reflectance and illumination attention maps via Retinex Window Self-Attention (R-WSA), which replaces the element-wise multiplication of classical Retinex with an attention mechanism. Using R-WSA, we apply a Decom-Retiformer block and an Enhance-Retiformer block at the head and tail, respectively, of a Transformer-based backbone. Together they decompose and align the reflectance and illumination components, much like RetinexNet. With this pipeline, Retiformer combines the advantages of the Transformer architecture and Retinex theory, achieving state-of-the-art performance among Retinex-based methods.
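For context, the classical Retinex model the abstract refers to factors an image element-wise as I = R ⊙ L (reflectance times illumination); it is this element-wise product that R-WSA replaces with attention. Below is a minimal NumPy sketch of the classical decomposition, where the box-blur illumination estimate is an illustrative assumption and not the paper's method:

```python
import numpy as np

def retinex_decompose(image, ksize=15, eps=1e-6):
    """Classical single-scale Retinex split: I = R * L.

    Illumination L is estimated as a local mean (box blur, an
    illustrative choice); reflectance is R = I / L. This is the
    element-wise formulation that, per the abstract, Retiformer's
    R-WSA replaces with an attention mechanism.
    """
    pad = ksize // 2
    padded = np.pad(image, pad, mode="edge")
    # Box blur via an integral image (2D cumulative sum).
    ii = np.cumsum(np.cumsum(padded, axis=0), axis=1)
    ii = np.pad(ii, ((1, 0), (1, 0)))
    h, w = image.shape
    L = (ii[ksize:ksize + h, ksize:ksize + w]
         - ii[:h, ksize:ksize + w]
         - ii[ksize:ksize + h, :w]
         + ii[:h, :w]) / (ksize * ksize)
    R = image / (L + eps)  # element-wise division recovers reflectance
    return R, L
```

On a constant image the local mean equals the image itself, so L recovers the input and R is approximately 1 everywhere, and R * L reconstructs the input.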
Key words
Transformer, Retinex, low-light image enhancement, self-attention, decomposition