Chances of Interpretable Transfer Learning for Human Activity Recognition in Warehousing

Computational Logistics (ICCL 2021), 2021

Abstract
Human activity recognition revolves around classifying and analyzing workers' actions quantitatively using convolutional neural networks on the time-series data provided by inertial measurement units and motion capture systems. However, this requires expensive training datasets, since each warehouse scenario has slightly different settings and activities of interest. Here, transfer learning promises to shift the knowledge a deep learning method gained on existing reference data to new target data. We benchmark interpretable and non-interpretable transfer learning for human activity recognition on the LARa order-picking dataset, with AndyLab and RealDisp serving as domain-related and domain-foreign reference datasets. We find that interpretable transfer learning via the recently proposed probabilistic rule stacking learner, which does not require any labeled data on the target dataset, is possible if the labels are sufficiently semantically related; its success depends on the proximity of the reference and target domains and labels. Non-interpretable transfer learning via fine-tuning can be applied even if there is a major domain shift between the datasets and reduces the amount of labeled data required on the target dataset.
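To illustrate the fine-tuning route described above, the following is a minimal, hypothetical sketch (not the authors' code) of reusing a 1D-CNN pretrained on a reference IMU dataset for a new target label set: the convolutional feature extractor is frozen and only a freshly initialized classifier head is trained on the few labeled target windows. Model architecture, channel counts, and class counts are illustrative assumptions.

```python
# Hypothetical fine-tuning sketch for IMU-based HAR transfer (illustrative, not the paper's code).
import torch
import torch.nn as nn

class HarCnn(nn.Module):
    """Small 1D-CNN over multi-channel IMU time series."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

# Assume this model was pretrained on the reference dataset (e.g. loaded from a checkpoint).
model = HarCnn(in_channels=30, num_classes=8)

# Freeze the pretrained feature extractor; replace the head for the target label set.
for p in model.features.parameters():
    p.requires_grad = False
model.classifier = nn.Linear(64, 5)  # 5 hypothetical target activity classes

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One training step on a small labeled target batch: 16 windows, 30 channels, 100 time steps.
x = torch.randn(16, 30, 100)
y = torch.randint(0, 5, (16,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

Because only the classifier head is optimized, far fewer labeled target windows are needed than for training the network from scratch, which is the practical benefit the abstract attributes to fine-tuning.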
Keywords
Domain-shift, Few-shot learning, Interpretability, Logistics, Multi-label classification, Time-series, Zero-shot learning