Accurate and Resource-Efficient Lipreading with EfficientNetV2 and Transformers

IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2022

Cited by 15 | Views 20
Abstract
We present a novel resource-efficient end-to-end architecture for lipreading that achieves state-of-the-art results on a popular and challenging benchmark. In particular, we make the following contributions: First, inspired by the recent success of the EfficientNet architecture in image classification and by our earlier work on resource-efficient lipreading models (MobiLipNet), we introduce EfficientNets to the lipreading task. Second, we show that the 3D front-end most commonly used in the literature contains a max-pooling layer that prevents networks from reaching superior performance, and we propose its removal. Finally, we improve our system's back-end robustness by including a Transformer encoder. We evaluate the proposed system on the "Lipreading In-The-Wild" (LRW) corpus, a database containing short video segments from BBC TV broadcasts. The proposed network (T-variant) attains 88.53% word accuracy, a 0.17% absolute improvement over the current state of the art, while being five times less computationally intensive. Further, an up-scaled version of our model (L-variant) achieves 89.52%, a new state-of-the-art result on the LRW corpus.
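To make the described pipeline concrete, the following is a minimal PyTorch sketch of an architecture of this shape: a 3D convolutional front-end without the max-pooling layer the paper proposes removing, a per-frame EfficientNetV2 backbone, and a Transformer encoder back-end classifying over the 500 LRW word classes. This is an illustrative assumption, not the authors' code; the exact EfficientNetV2 variant, layer sizes, and the `LipreadingNet` class are hypothetical.

```python
# Hypothetical sketch (not the authors' implementation): 3D front-end without
# max-pooling, per-frame EfficientNetV2 features, Transformer encoder back-end.
import torch
import torch.nn as nn
from torchvision.models import efficientnet_v2_s


class LipreadingNet(nn.Module):
    def __init__(self, num_classes: int = 500, d_model: int = 256):
        super().__init__()
        # 3D front-end: note the absence of a max-pooling layer,
        # following the paper's proposal to remove it.
        self.frontend = nn.Sequential(
            nn.Conv3d(1, 64, kernel_size=(5, 7, 7), stride=(1, 2, 2), padding=(2, 3, 3)),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
        )
        # Per-frame EfficientNetV2 backbone; variant and input adaptation are assumptions.
        backbone = efficientnet_v2_s(weights=None)
        backbone.features[0][0] = nn.Conv2d(64, 24, kernel_size=3, stride=2, padding=1, bias=False)
        self.backbone = nn.Sequential(backbone.features, nn.AdaptiveAvgPool2d(1))
        self.proj = nn.Linear(1280, d_model)
        # Transformer encoder back-end for temporal modeling.
        encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time, height, width) grayscale mouth crops
        b = x.size(0)
        x = self.frontend(x)                     # (B, 64, T, H', W')
        t = x.size(2)
        x = x.transpose(1, 2).flatten(0, 1)      # (B*T, 64, H', W')
        x = self.backbone(x).flatten(1)          # (B*T, 1280)
        x = self.proj(x).view(b, t, -1)          # (B, T, d_model)
        x = self.encoder(x)                      # temporal modeling over frames
        return self.head(x.mean(dim=1))          # pool over time -> word logits


# Example: 2 clips of 29 frames, 88x88 grayscale mouth crops (LRW-style input).
logits = LipreadingNet()(torch.randn(2, 1, 29, 88, 88))
print(logits.shape)  # torch.Size([2, 500])
```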
Keywords
EfficientNet, Transformers, Lipreading