Continual Egocentric Activity Recognition with Foreseeable-Generalized Visual-IMU Representations

IEEE Sensors Journal (2024)

Abstract
The rapid advancement of wearable sensors has significantly facilitated data collection in our daily lives. Human Activity Recognition (HAR), a prominent research area in wearable technology, has made substantial progress in recent years. However, existing efforts often overlook the functional scalability of models, making it difficult for deep models to adapt to application scenarios that require continuous evolution. Furthermore, when employing conventional continual learning techniques, we observe an imbalance between the visual and inertial-measurement-unit (IMU) sensing modalities during joint optimization, which hampers model generalization and poses a significant challenge. To obtain a generalized representation better suited to continual tasks, we propose a motivational optimization scheme that addresses the limited generalization caused by this modal imbalance, enabling foreseeable generalization in a visual-IMU multimodal network. To prevent forgetting of previously learned activities, we introduce a robust representation estimation technique and a pseudo representation generation strategy for continual learning. Experimental results on the egocentric activity dataset UESTC-MMEA-CL demonstrate the effectiveness of our proposed method. Furthermore, our method effectively leverages the generalization capabilities of IMU-based modal representations, outperforming state-of-the-art methods across various task settings.
Keywords
wearable sensors,multimodal network,human activity recognition,continual learning
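The paper's code is not reproduced here, but the "pseudo representation generation strategy" mentioned in the abstract is commonly realized in continual learning by storing per-class statistics of learned features and sampling synthetic features from them as replay, instead of storing raw sensor data. A minimal illustrative sketch of that general idea, assuming Gaussian class statistics and hypothetical function names (not the authors' actual implementation):

```python
import numpy as np

def estimate_class_statistics(features, labels):
    """Summarize old activities as per-class Gaussian statistics
    (mean and diagonal variance) over learned representations.
    features: (N, D) array, labels: (N,) integer class ids."""
    stats = {}
    for c in np.unique(labels):
        feats = features[labels == c]
        # Small floor on the variance keeps sampling well-defined
        # even for classes with few examples.
        stats[int(c)] = (feats.mean(axis=0), feats.var(axis=0) + 1e-6)
    return stats

def generate_pseudo_representations(stats, n_per_class, seed=None):
    """Sample pseudo representations from stored class statistics,
    so old activities can be replayed while learning new ones."""
    rng = np.random.default_rng(seed)
    xs, ys = [], []
    for c, (mu, var) in stats.items():
        xs.append(rng.normal(mu, np.sqrt(var), size=(n_per_class, mu.shape[0])))
        ys.append(np.full(n_per_class, c))
    return np.concatenate(xs), np.concatenate(ys)
```

In a multimodal visual-IMU setting, such statistics would be estimated on the fused (or per-modality) representations after each task, then mixed with new-task data during joint optimization.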