Cluster Distillation: Semi-supervised Time Series Classification through Clustering-based Self-supervision

2022 41st International Conference of the Chilean Computer Science Society (SCCC)(2022)

Abstract
Time series have long attracted scientific interest due to their many applications to real-world problems. In particular, time series classification with deep learning methods has recently drawn much attention and demonstrated remarkable performance. Unfortunately, most techniques studied so far assume that a fully labeled dataset is available for training, a condition that limits their application in practice. In this paper, we present Cluster Distillation: a technique that leverages all the available data (labeled or unlabeled) for training a deep time series classifier. The method relies on a self-supervised mechanism that generates surrogate labels to guide learning when external supervisory signals are lacking. We create that mechanism by introducing clustering into a Knowledge Distillation framework in which a first neural net (the Teacher) transfers its beliefs about cluster memberships to a second neural net (the Student), which finally performs semi-supervised classification. Preliminary experiments on ten widely used datasets show that training a convolutional neural network (CNN) with the proposed technique leads to promising results, outperforming state-of-the-art methods in several relevant cases. The implementations are available at: ClusterDistillation
Keywords
Deep learning, CNN, self-supervision, time series classification
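The teacher–student mechanism described in the abstract can be sketched in miniature: a teacher clusters all series (labeled and unlabeled) to produce surrogate labels, and a student classifier is then trained on those surrogate labels plus the few true labels. This is only an illustrative NumPy sketch under assumed simplifications (k-means teacher, linear softmax student instead of the paper's CNNs; all names are hypothetical, not from the authors' code):

```python
# Illustrative sketch of the Cluster Distillation idea (assumptions:
# k-means teacher, linear softmax student; the paper uses CNNs).
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Teacher step: cluster all series to obtain surrogate labels."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = X[assign == j].mean(0)
    return assign

def softmax(z):
    z = z - z.max(1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(1, keepdims=True)

# Toy data: two noisy sine/cosine "time series" classes, mostly unlabeled.
n, t = 60, 32
grid = np.linspace(0, 2 * np.pi, t)
X = np.vstack([np.sin(grid) + 0.1 * rng.standard_normal((n // 2, t)),
               np.cos(grid) + 0.1 * rng.standard_normal((n // 2, t))])
y = np.array([0] * (n // 2) + [1] * (n // 2))
labeled = rng.choice(n, 10, replace=False)   # only a few labeled examples

surrogate = kmeans(X, k=2)                   # teacher's cluster beliefs

# Student: trained with cross-entropy on surrogate labels everywhere,
# and on the true labels where they are known.
W = np.zeros((t, 2))
for _ in range(200):
    p = softmax(X @ W)
    targets = np.eye(2)[surrogate]
    targets[labeled] = np.eye(2)[y[labeled]]
    W -= 0.1 * X.T @ (p - targets) / n       # gradient step on cross-entropy

pred = softmax(X @ W).argmax(1)
# Cluster ids may be permuted w.r.t. true classes; align via the labeled set.
if (pred[labeled] == y[labeled]).mean() < 0.5:
    pred = 1 - pred
acc = (pred == y).mean()
print(f"semi-supervised accuracy: {acc:.2f}")
```

With only 10 of 60 labels available, the clustering-derived surrogate labels let the student exploit the unlabeled series as well, which is the core intuition behind the distillation setup.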