
Improved Multiscale Vision Transformers for Classification and Detection

IEEE Conference on Computer Vision and Pattern Recognition (2021)

Abstract
In this paper, we study Multiscale Vision Transformers (MViT) as a unified architecture for image and video classification, as well as object detection. We present an improved version of MViT that incorporates decomposed relative positional embeddings and residual pooling connections. We instantiate this architecture in five sizes and evaluate it for ImageNet classification, COCO detection and Kinetics video recognition where it outperforms prior work. We further compare MViTs' pooling attention to window attention mechanisms where it outperforms the latter in accuracy/compute. Without bells-and-whistles, MViT has state-of-the-art performance in 3 domains: 88.8% accuracy on ImageNet classification, 56.1 box AP on COCO object detection as well as 86.1% on Kinetics-400 video classification. Code and models will be made publicly available.
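To make the pooling attention mentioned in the abstract more concrete, below is a minimal PyTorch sketch of a single pooling-attention head with a residual pooling connection. The module name, pooling operators, strides, and tensor shapes are illustrative assumptions, not the authors' released implementation, and the decomposed relative positional embedding term is omitted (it would be added to the attention logits before the softmax).

```python
# Sketch of pooling attention with a residual pooling connection.
# Assumptions: single head, max-pooling for downsampling, square stride.
import torch
import torch.nn as nn


class PoolingAttention(nn.Module):
    """Single-head pooling attention over a 2D token grid (illustrative)."""

    def __init__(self, dim: int, q_stride: int = 2, kv_stride: int = 2):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Pooling operators that downsample the query and key/value grids.
        self.pool_q = nn.MaxPool2d(q_stride, q_stride)
        self.pool_kv = nn.MaxPool2d(kv_stride, kv_stride)
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor, hw: tuple) -> torch.Tensor:
        # x: (batch, H*W, dim); hw: spatial grid size (H, W)
        B, N, C = x.shape
        H, W = hw
        q, k, v = self.qkv(x).chunk(3, dim=-1)

        def pool(t, op):
            # (B, N, C) -> (B, C, H, W) -> pooled -> (B, N', C)
            t = t.transpose(1, 2).reshape(B, C, H, W)
            return op(t).flatten(2).transpose(1, 2)

        q = pool(q, self.pool_q)
        k = pool(k, self.pool_kv)
        v = pool(v, self.pool_kv)

        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = attn.softmax(dim=-1) @ v
        # Residual pooling connection: add the pooled query back to the
        # attention output before the final projection.
        out = out + q
        return self.proj(out)


# Usage on a 14x14 grid of 96-dim tokens: output is (2, 49, 96).
y = PoolingAttention(96)(torch.randn(2, 14 * 14, 96), (14, 14))
```

The residual connection here simply reuses the pooled query tensor; in the paper this addition is what distinguishes the improved pooling attention from the original MViT formulation.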
Keywords
Recognition: detection, categorization, retrieval; Deep learning architectures and techniques; Representation learning; Video analysis and understanding