Learning spatiotemporal dynamics with a pretrained generative model

crossref(2024)

Abstract
Reconstructing spatiotemporal dynamics from sparse sensor measurements is an outstanding problem, commonly encountered across a wide spectrum of scientific and engineering applications. The problem is particularly challenging when the number and/or types of sensors (e.g., randomly placed) are severely limited. Existing end-to-end learning models ordinarily suffer from generalization issues in full-field reconstruction of spatiotemporal dynamics, especially in the sparse-data regimes typically seen in real-world applications. To this end, we propose a sparse-sensor-assisted score-based generative model (S3GM) to reconstruct and predict full-field spatiotemporal dynamics from sparse measurements. Instead of directly learning the mapping between input and output pairs, an unconditional generative model is first pretrained in a self-supervised manner to capture the joint distribution of a large corpus of pretraining data, followed by a sampling process conditioned on unseen sparse measurements. The efficacy of S3GM has been verified on multiple dynamical systems with various synthetic, real-world, and lab-test datasets (ranging from turbulent-flow modeling to weather/climate forecasting). The results demonstrate the excellent performance of S3GM in zero-shot reconstruction and prediction of spatiotemporal dynamics, even with high levels of data sparsity and noise. We find that S3GM exhibits high accuracy, generalizability, and robustness across different reconstruction tasks.
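To illustrate the two-stage idea described in the abstract (an unconditional pretrained score model, then sampling conditioned on sparse measurements), the sketch below shows a generic Langevin-style conditional sampler. It is not the authors' S3GM implementation; the callable `score_model`, the observation `mask`, the toy noise schedule, and `guidance_weight` are all illustrative assumptions used only to make the conditioning step concrete.

```python
# Minimal sketch: steer a pretrained unconditional score model toward agreement
# with sparse observations during reverse sampling (assumed setup, not S3GM).
import torch

def conditional_sample(score_model, y_sparse, mask, shape,
                       n_steps=500, step_size=1e-4, guidance_weight=1.0,
                       device="cpu"):
    """Langevin-style sampling from a pretrained score model, with a
    data-consistency term that pulls observed entries toward y_sparse."""
    x = torch.randn(shape, device=device)            # start from pure noise
    for t in reversed(range(n_steps)):
        sigma = 0.01 + 0.99 * t / n_steps            # toy noise schedule (assumption)
        with torch.no_grad():
            score = score_model(x, torch.tensor([sigma], device=device))
        # Data-consistency gradient: nonzero only at measured locations.
        residual = mask * (y_sparse - x)
        x = x + step_size * (score + guidance_weight * residual / sigma**2)
        x = x + torch.sqrt(torch.tensor(2.0 * step_size)) * torch.randn_like(x)
    return x
```

Because the generative model is trained without any conditioning, the same pretrained network can be reused for different sensor layouts by changing only `mask` and `y_sparse` at sampling time, which is the zero-shot property the abstract emphasizes.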