TransXNet: Learning Both Global and Local Dynamics with a Dual Dynamic Token Mixer for Visual Recognition
CoRR (2023)
Abstract
Recent studies have integrated convolution into transformers to introduce
inductive bias and improve generalization performance. However, the static
nature of conventional convolution prevents it from dynamically adapting to
input variations, resulting in a representation discrepancy between convolution
and self-attention as self-attention calculates attention matrices dynamically.
Furthermore, when stacking token mixers that consist of convolution and
self-attention to form a deep network, the static nature of convolution hinders
the fusion of features previously generated by self-attention into convolution
kernels. These two limitations result in a sub-optimal representation capacity
of the constructed networks. To address these issues, we propose a lightweight Dual
Dynamic Token Mixer (D-Mixer) that aggregates global information and local
details in an input-dependent way. D-Mixer works by applying an efficient
global attention module and an input-dependent depthwise convolution separately
on evenly split feature segments, endowing the network with strong inductive
bias and an enlarged effective receptive field. We use D-Mixer as the basic
building block to design TransXNet, a novel hybrid CNN-Transformer vision
backbone network that delivers compelling performance. In the ImageNet-1K image
classification task, TransXNet-T surpasses Swin-T by 0.3% in top-1 accuracy
while requiring less than half the computational cost. Furthermore,
TransXNet-S and TransXNet-B exhibit excellent model scalability, achieving
top-1 accuracies of 83.8% and 84.6% respectively, with reasonable computational
costs. Additionally, our proposed network architecture demonstrates strong
generalization capabilities in various dense prediction tasks, outperforming
other state-of-the-art networks while having lower computational costs.
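To make the split-and-mix idea in the abstract concrete, the following is a minimal PyTorch sketch of a dual-branch token mixer. It is not the authors' implementation: a plain multi-head self-attention stands in for the paper's efficient global attention module, and a generic dynamic depthwise convolution (a softmax-weighted bank of kernels routed per input sample) stands in for its input-dependent depthwise convolution. All names here (DMixerSketch, router, num_kernels) are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DMixerSketch(nn.Module):
    """Sketch of a dual-branch token mixer: split channels evenly,
    run global attention on one half and an input-dependent depthwise
    convolution on the other, then concatenate. Simplified stand-ins,
    not the paper's exact modules."""
    def __init__(self, dim, num_heads=4, kernel_size=7, num_kernels=4):
        super().__init__()
        assert dim % 2 == 0
        half = dim // 2
        # Global branch: standard MHSA as a stand-in for the paper's
        # efficient global attention module.
        self.attn = nn.MultiheadAttention(half, num_heads, batch_first=True)
        # Local branch: a bank of depthwise kernels mixed by weights
        # predicted from the input (generic dynamic-convolution stand-in).
        self.kernel_size = kernel_size
        self.kernels = nn.Parameter(
            torch.randn(num_kernels, half, 1, kernel_size, kernel_size) * 0.02)
        self.router = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(half, num_kernels))

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        xg, xl = x.chunk(2, dim=1)                 # even channel split
        # Global branch: flatten spatial dims into a token sequence.
        t = xg.flatten(2).transpose(1, 2)          # (B, H*W, C/2)
        t, _ = self.attn(t, t, t)
        xg = t.transpose(1, 2).reshape(B, C // 2, H, W)
        # Local branch: per-sample softmax mixture of depthwise kernels,
        # applied via a grouped convolution over the batch.
        w = F.softmax(self.router(xl), dim=-1)     # (B, num_kernels)
        k = torch.einsum('bk,kcihw->bcihw', w, self.kernels)
        k = k.reshape(B * (C // 2), 1, self.kernel_size, self.kernel_size)
        xl = xl.reshape(1, B * (C // 2), H, W)
        xl = F.conv2d(xl, k, padding=self.kernel_size // 2,
                      groups=B * (C // 2))
        xl = xl.reshape(B, C // 2, H, W)
        return torch.cat([xg, xl], dim=1)
```

As a quick check, `DMixerSketch(dim=64)(torch.randn(2, 64, 14, 14))` returns a tensor of the same shape, since both branches preserve spatial resolution and the concatenation restores the full channel count.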
Keywords
token mixer, recognition, local dynamics, dual dynamics