
Metric-Based Multimodal Meta-Learning for Human Movement Identification Via Footstep Recognition

2023 IEEE/SICE International Symposium on System Integration (SII)(2023)

Abstract
One of the major challenges of human movement identification in indoor environments is sensitivity to many uncommon indoor interactions, such as a falling object or a moving chair. This work investigates human footstep movements using multiple modalities and analyzes their representations from a small self-collected dataset of acoustic and vibration-based sensors. The core idea of this study is to learn apparent similarities between two sensory traits (not limited to microphone and geophone) and to combine representations from multiple sensors. For this purpose, we describe a novel metric-based learning approach that introduces a multimodal framework and uses deep audio and geophone encoders in a Siamese configuration to design an adaptable and lightweight self-supervised model for detecting human movements. This framework eliminates the need for expensive data-labeling procedures and learns general-purpose representations from limited multisensory data obtained from omnipresent sensing systems. We first learn temporal and spatial features extracted from audio and geophone signals using this expressive design, then project the representations into a shared space to maximize the learning of a compatibility function between acoustic and geophone features. The learned model detects human movement from multiple sensor modalities with 99.9% accuracy, improving the identification of human movements in sensitive indoor environments. We also propose further investigation to demonstrate generalization and effectiveness by conducting extensive experiments on datasets from various disciplines and in multiple settings.
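The shared-space compatibility function described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the deep audio and geophone encoders are replaced by placeholder linear projections, and the feature dimensions (`AUDIO_DIM`, `GEO_DIM`, `SHARED_DIM`) are assumed values chosen for the example. The compatibility score is taken here to be cosine similarity between the two modality embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed feature dimensions (placeholders, not from the paper).
AUDIO_DIM, GEO_DIM, SHARED_DIM = 128, 64, 32

# Stand-ins for the deep audio/geophone encoders in the Siamese
# configuration: plain linear projections into a shared space.
W_audio = rng.standard_normal((AUDIO_DIM, SHARED_DIM)) / np.sqrt(AUDIO_DIM)
W_geo = rng.standard_normal((GEO_DIM, SHARED_DIM)) / np.sqrt(GEO_DIM)

def embed(x, W):
    """Project a feature vector into the shared space and L2-normalize it."""
    z = x @ W
    return z / np.linalg.norm(z)

def compatibility(audio_feat, geo_feat):
    """Cosine similarity of the two modality embeddings, in [-1, 1]."""
    return float(embed(audio_feat, W_audio) @ embed(geo_feat, W_geo))

# Toy inputs standing in for extracted audio and geophone features
# of the same footstep event.
audio_feat = rng.standard_normal(AUDIO_DIM)
geo_feat = rng.standard_normal(GEO_DIM)
score = compatibility(audio_feat, geo_feat)
```

In a metric-based setup, such a score would be driven high for feature pairs from the same movement and low for mismatched pairs, which is what lets the model train without labels.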
Keywords
robot audition,multimodal,siamese neural network,multi-stream networks,human movement detection,audio representation learning