Adversarial Danger Identification on Temporally Dynamic Graphs

IEEE Transactions on Neural Networks and Learning Systems (2024)

Abstract
Multivariate time series forecasting plays an increasingly critical role in applications such as power management, smart cities, finance, and healthcare. Recent advances in temporal graph neural networks (GNNs) have shown promising results in multivariate time series forecasting due to their ability to characterize high-dimensional nonlinear correlations and temporal patterns. However, the vulnerability of deep neural networks (DNNs) raises serious concerns about using these models to make decisions in real-world applications. How to defend multivariate forecasting models, especially temporal GNNs, is currently overlooked: existing adversarial defense studies mostly target static, single-instance classification and cannot be applied to forecasting due to the generalization challenge and the contradiction issue. To bridge this gap, we propose an adversarial danger identification method for temporally dynamic graphs to effectively protect GNN-based forecasting models. Our method consists of three steps: 1) a hybrid GNN-based classifier to identify dangerous times; 2) approximate linear error propagation to identify the dangerous variates based on the high-dimensional linearity of DNNs; and 3) a scatter filter controlled by the two identification processes to reform the time series with reduced feature erasure. Our experiments, covering four adversarial attack methods and four state-of-the-art forecasting models, demonstrate the effectiveness of the proposed method in defending forecasting models against adversarial attacks.
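The three-step pipeline in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the danger scores are assumed to come from the (unspecified) hybrid GNN classifier and linear error propagation, and the "scatter filter" is approximated here by a local median replacement applied only at points flagged by both identifiers, which is consistent with the stated goal of reduced feature erasure.

```python
import numpy as np

def defend_series(x, time_scores, variate_scores, tau_t=0.5, tau_v=0.5, k=3):
    """Hedged sketch of the three-step defense.

    x              -- (T, V) multivariate time series
    time_scores    -- (T,) danger scores, assumed output of the hybrid
                      GNN-based classifier (step 1)
    variate_scores -- (T, V) per-variate scores, assumed output of the
                      approximate linear error propagation (step 2)
    tau_t, tau_v   -- hypothetical decision thresholds
    k              -- window size of the local filter (step 3)
    """
    x = np.asarray(x, dtype=float)
    T, V = x.shape
    dangerous_t = time_scores > tau_t               # step 1: dangerous times
    dangerous_v = variate_scores > tau_v            # step 2: dangerous variates
    # The filter is controlled by BOTH identifications, so clean
    # (time, variate) points are left untouched.
    mask = dangerous_t[:, None] & dangerous_v
    y = x.copy()
    half = k // 2
    for t, v in zip(*np.nonzero(mask)):             # step 3: reform flagged points
        lo, hi = max(0, t - half), min(T, t + half + 1)
        y[t, v] = np.median(x[lo:hi, v])
    return y
```

For example, a single adversarially spiked point that both identifiers flag is replaced by the median of its temporal neighborhood, while every unflagged point passes through unchanged.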