Learning Probabilistic Graph Neural Networks for Multivariate Time Series Anomaly Detection

semanticscholar (2021)

Abstract
An anomaly is loosely defined as an observation which deviates so much from the remaining observations as to arouse suspicion that it was generated by a different mechanism [1]. Anomalies arise when the underlying data-generating process behaves unusually, so anomaly detection can yield useful application-specific insights, from detecting credit card fraud to flagging interesting sensor events. Consequently, detecting anomalies in high-dimensional multivariate time series (MTS) is an important problem that has been studied extensively in the literature [2]. However, most prior work on anomaly detection does not explicitly model the complex dependencies between different variables, which significantly limits its ability to detect anomalous events [3]. For instance, in a water treatment plant, if a motorized valve is maliciously turned on, it can cause a tank to overflow, resulting in anomalous readings in most sensors associated with it. Recently, Deng et al. [3] proposed the Graph Deviation Network (GDN), which automatically learns variable dependencies and uses them to identify anomalous behaviour. Like most neural networks (NNs), GDN achieves impressive accuracy but produces poor uncertainty estimates. Since overconfident yet incorrect predictions may be harmful, precise uncertainty quantification is integral to practical applications of such networks [5]. To this end, we propose GLUE (GDN with Local Uncertainty Estimation), which not only automatically learns complex dependencies between the variables of an MTS and uses them to detect anomalous behaviour, but also models the uncertainty of its predictions. Results on two real-world datasets reveal that GLUE performs on par with GDN, outperforms most popular baseline models, and learns meaningful dependencies between variables (Sec. 5). The rest of the paper is organized as follows. In Sec. 2 we briefly compare our work with prior work. In Sec. 3 we describe the GDN and GLUE models in detail, followed by an overview of our baselines. Secs. 4 and 5 discuss our datasets, experimental setup, and results. We conclude in Sec. 6.
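The abstract states that GLUE models the uncertainty of its predictions, but gives no details of how. Local uncertainty estimation in forecasting-based detectors is commonly realised by having the network predict a per-variable mean and variance, then scoring each observation by its Gaussian negative log-likelihood. The following numpy-only sketch illustrates that idea; the function name and formulation are illustrative assumptions, not GLUE's exact method:

```python
import numpy as np

def gaussian_nll_score(x, mu, log_var):
    """Per-variable anomaly score: negative log-likelihood of the
    observation x under the predicted Gaussian N(mu, exp(log_var)).
    Higher scores mark observations the model finds more surprising."""
    var = np.exp(log_var)
    return 0.5 * (np.log(2 * np.pi) + log_var + (x - mu) ** 2 / var)

# An accurate, confident prediction yields a low score.
low = gaussian_nll_score(x=1.0, mu=1.0, log_var=np.log(0.01))

# The same forecast error is penalised heavily when the model is
# confident (small predicted variance) but only mildly when the model
# itself reports high uncertainty -- the property that distinguishes
# an uncertainty-aware score from a plain squared-error score.
err_confident = gaussian_nll_score(x=2.0, mu=1.0, log_var=np.log(0.01))
err_uncertain = gaussian_nll_score(x=2.0, mu=1.0, log_var=np.log(1.0))
```

Under this kind of scoring, anomalous behaviour is flagged when observations are unlikely under the predicted distribution, rather than merely far from the predicted mean.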