A cross-modal crowd counting method combining CNN and cross-modal transformer

Image and Vision Computing (2023)

Abstract
Cross-modal crowd counting aims to exploit information shared between different modalities to generate crowd density maps, so as to estimate the number of pedestrians more accurately in unconstrained scenes. Because images from different modalities differ greatly, effectively fusing information across modalities remains a challenging problem. To address this problem, we propose a cross-modal crowd counting method based on a CNN and a novel cross-modal transformer, which effectively fuses information between different modalities and boosts the accuracy of crowd counting in unconstrained scenes. Concretely, we first design dual CNN branches to capture the modality-specific features of the images. After that, we design a novel cross-modal transformer to extract cross-modal global features from the modality-specific features. Furthermore, we propose a cross-layer connection structure that links the front-end and back-end information of the network by adding together features from different layers. At the end of the network, we develop a cross-modal attention module that strengthens the cross-modal feature representation by extracting the complementarities between the different modal features. The experimental results show that the proposed method combining a CNN and a novel cross-modal transformer achieves state-of-the-art performance: it not only improves the accuracy and robustness of cross-modal crowd counting but also generalizes well to multimodal crowd counting. (c) 2022 Elsevier B.V. All rights reserved.
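To make the pipeline described in the abstract concrete, the following is a minimal sketch of such an architecture, not the authors' exact model: the channel widths, transformer depth, input modalities (RGB and thermal), and the specific form of the cross-layer connection and cross-modal attention gate are all illustrative assumptions.

```python
# Minimal sketch (assumed design, not the paper's exact architecture):
# dual CNN branches -> cross-modal transformer over joint tokens ->
# cross-layer (additive skip) connection -> cross-modal attention -> density head.
import torch
import torch.nn as nn


class CNNBranch(nn.Module):
    """Modality-specific feature extractor (illustrative depth/width)."""
    def __init__(self, in_ch=3, ch=64):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(ch, ch * 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
        )

    def forward(self, x):
        return self.layers(x)                      # (B, 2*ch, H/4, W/4)


class CrossModalTransformer(nn.Module):
    """Transformer encoder over tokens from both modalities (assumed form)."""
    def __init__(self, dim=128, heads=4, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, feat_a, feat_b):
        b, c, h, w = feat_a.shape
        tokens = torch.cat([feat_a.flatten(2).transpose(1, 2),
                            feat_b.flatten(2).transpose(1, 2)], dim=1)
        fused = self.encoder(tokens)               # global cross-modal tokens
        fused_a, fused_b = fused.split(h * w, dim=1)
        # Merge the two modality streams back into one spatial feature map.
        return (fused_a + fused_b).transpose(1, 2).reshape(b, c, h, w)


class CrossModalAttention(nn.Module):
    """Channel-attention gate reweighting the fused features (assumed form)."""
    def __init__(self, dim=128):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.gate(x)


class CrossModalCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb_branch = CNNBranch(in_ch=3)
        self.thermal_branch = CNNBranch(in_ch=1)
        self.transformer = CrossModalTransformer(dim=128)
        self.attention = CrossModalAttention(dim=128)
        self.head = nn.Conv2d(128, 1, 1)           # density map regression

    def forward(self, rgb, thermal):
        f_rgb = self.rgb_branch(rgb)
        f_th = self.thermal_branch(thermal)
        fused = self.transformer(f_rgb, f_th)
        # Cross-layer connection: add front-end features to back-end features.
        fused = fused + f_rgb + f_th
        fused = self.attention(fused)
        return self.head(fused)


if __name__ == "__main__":
    model = CrossModalCounter()
    density = model(torch.randn(2, 3, 128, 128), torch.randn(2, 1, 128, 128))
    # Summing the predicted density map gives the estimated count per image.
    print(density.shape, density.sum(dim=(1, 2, 3)))
```

Integrating the density map over its spatial dimensions yields the crowd count estimate, which is the standard readout for density-map-based counting methods.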
Keywords
Cross-modal crowd counting, CNN, Transformer, Cross-layer connection structure, Cross-modal attention module