JSSE: Joint Sequential Semantic Encoder for Zero-Shot Event Recognition

IEEE Transactions on Artificial Intelligence (2023)

Abstract
Zero-shot learning (ZSL) is a paradigm in transfer learning that aims to recognize unknown categories from a mere description of them. The problem of ZSL has been thoroughly studied in the domain of static object recognition; however, ZSL for dynamic events (zero-shot event recognition, ZSER), such as activities and gestures, has hardly been investigated. In this context, this article addresses ZSER by relying on semantic attributes of events to transfer the learned knowledge from seen classes to unseen ones. First, we utilized the Amazon Mechanical Turk platform to create the first attribute-based gesture dataset, referred to as zero-shot gestural learning (ZSGL), comprising the categories present in the MSRC and Italian gesture datasets. Overall, our ZSGL dataset consists of 26 categories, 65 discriminative attributes, and 16 attribute annotations and 400 examples per category. We used trainable recurrent networks and 3-D convolutional neural networks (CNNs) to learn the spatiotemporal features. Next, we propose a simple yet effective end-to-end approach for ZSER, referred to as the joint sequential semantic encoder (JSSE), to explore temporal patterns, to efficiently represent events in the latent space, and to simultaneously optimize for both the semantic and classification tasks. We evaluated our model on ZSGL and two action datasets (UCF and HMDB), and compared the performance of JSSE against several existing baselines under four experimental conditions: 1) within-category, 2) across-category, 3) closed-set, and 4) open-set. Results show that JSSE considerably outperforms ($p < 0.05$) other approaches and performs favorably on all datasets under all experimental conditions.
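The core transfer mechanism the abstract describes — projecting learned spatiotemporal features into a semantic attribute space and matching them against attribute signatures of unseen classes — can be illustrated with a minimal sketch. This is not the authors' JSSE architecture; the projection matrix, dimensions, and class signatures below are all hypothetical stand-ins, with the attribute count borrowed from the 65 attributes mentioned for ZSGL.

```python
import numpy as np

# Hypothetical sketch of attribute-based zero-shot recognition, the general
# paradigm the paper builds on (NOT the authors' JSSE model).
rng = np.random.default_rng(0)

n_attrs = 65     # e.g., the 65 discriminative attributes in ZSGL
feat_dim = 128   # assumed dimensionality of learned spatiotemporal features

# A feature-to-attribute projection would normally be trained on seen
# classes; here it is random purely for illustration.
W = rng.normal(size=(feat_dim, n_attrs))

# Binary attribute signatures for three hypothetical unseen classes.
class_attrs = rng.integers(0, 2, size=(3, n_attrs)).astype(float)

def predict_unseen(features: np.ndarray) -> int:
    """Project features into attribute space and return the index of the
    unseen class whose attribute signature is most similar (cosine)."""
    a = features @ W
    a = a / (np.linalg.norm(a) + 1e-8)
    c = class_attrs / (np.linalg.norm(class_attrs, axis=1, keepdims=True) + 1e-8)
    return int(np.argmax(c @ a))

x = rng.normal(size=feat_dim)  # stand-in for one video's learned features
print(predict_unseen(x))       # prints an index in {0, 1, 2}
```

Because the class attribute vectors are authored descriptions rather than training labels, a classifier of this form can score categories for which no training examples exist — which is what allows knowledge learned on seen classes to transfer to unseen ones.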
Keywords
Action and gesture recognition, activity, semantic descriptors, transfer learning, zero-shot learning (ZSL)