RadarNet: A parallel spatiotemporal encoder network for radar extrapolation

Wei Tian, Lei Yi, Xianghua Niu, Rong Fang, Lixia Zhang, Huanhuan Liu, Zhuo Xu, Shengqin Jiang, Yonghong Zhang

Neurocomputing (2024)

Abstract
Radar extrapolation is one of the most important means of nowcasting. Most current models perform well on high-frequency sequences (e.g., video at more than 24 fps), whereas radar echo sequences have a much lower temporal resolution (one frame every 6 min) and undergo far more complex transformations. Spatiotemporal features change little between neighboring frames of a video sequence; radar echo sequences, in contrast, contain subtler changes (e.g., the generation and dissipation of echoes), which give rise to distinct spatial and temporal characteristics. When these distinctive characteristics are not modeled separately, they are gradually lost, causing a rapid decline in precision and sharpness during extrapolation. In general, temporal feature extraction captures variation in pixel locations, while spatial feature extraction captures the distribution variation of specific regions. In this work, we propose a feature decomposition network, termed RadarNet, to improve extrapolation precision. Two parallel, independent encoders enhance multi-scale spatial feature extraction and temporal motion feature capture of radar echoes, respectively. In addition, we design a specialized cross fusion mechanism that constructs the decoder inputs and can further improve extrapolation performance. Extrapolation experiments on real radar echo datasets from Shijiazhuang and Nanjing demonstrate the effectiveness of our model.
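The sketch below illustrates the general idea described in the abstract: a spatial encoder branch, a parallel temporal encoder branch, and a cross-fusion step that combines the two streams before a decoder produces future frames. The module names, layer sizes, fusion rule, and decoding strategy are illustrative assumptions, not the architecture actually published in the paper.

```python
# Hypothetical dual-encoder sketch inspired by the abstract, not the paper's
# published RadarNet. All design choices here (layer widths, gated fusion,
# single-shot decoding) are assumptions for illustration only.
import torch
import torch.nn as nn


class SpatialEncoder(nn.Module):
    """Per-frame 2D convolutions to capture multi-scale spatial structure."""
    def __init__(self, in_ch=1, hid=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, hid, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(hid, hid, kernel_size=5, padding=2), nn.ReLU(),
        )

    def forward(self, x):                        # x: (B, T, C, H, W)
        b, t, c, h, w = x.shape
        y = self.net(x.reshape(b * t, c, h, w))
        return y.reshape(b, t, -1, h, w)         # (B, T, hid, H, W)


class TemporalEncoder(nn.Module):
    """3D convolutions over the time axis to capture echo motion."""
    def __init__(self, in_ch=1, hid=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_ch, hid, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(hid, hid, kernel_size=3, padding=1), nn.ReLU(),
        )

    def forward(self, x):                        # x: (B, T, C, H, W)
        y = self.net(x.permute(0, 2, 1, 3, 4))   # (B, C, T, H, W)
        return y.permute(0, 2, 1, 3, 4)          # (B, T, hid, H, W)


class CrossFusion(nn.Module):
    """Gated mixing of the two feature streams (an assumed fusion rule)."""
    def __init__(self, hid=32):
        super().__init__()
        self.gate = nn.Conv2d(2 * hid, hid, kernel_size=1)

    def forward(self, fs, ft):                   # both (B, T, hid, H, W)
        b, t, c, h, w = fs.shape
        g = torch.sigmoid(
            self.gate(torch.cat([fs, ft], dim=2).reshape(b * t, 2 * c, h, w))
        ).reshape(b, t, c, h, w)
        return g * fs + (1 - g) * ft


class RadarNetSketch(nn.Module):
    """Parallel encoders + cross fusion + a simple convolutional decoder."""
    def __init__(self, in_ch=1, hid=32, out_steps=10):
        super().__init__()
        self.spatial = SpatialEncoder(in_ch, hid)
        self.temporal = TemporalEncoder(in_ch, hid)
        self.fuse = CrossFusion(hid)
        self.decode = nn.Conv2d(hid, out_steps, kernel_size=3, padding=1)

    def forward(self, x):                        # x: (B, T_in, 1, H, W)
        fused = self.fuse(self.spatial(x), self.temporal(x))
        # Collapse the input time axis (mean pooling, purely illustrative)
        # and predict all future frames in one shot.
        return self.decode(fused.mean(dim=1))    # (B, out_steps, H, W)


if __name__ == "__main__":
    frames = torch.randn(2, 10, 1, 64, 64)       # 10 past frames at 6-min steps
    print(RadarNetSketch()(frames).shape)        # torch.Size([2, 10, 64, 64])
```

A real implementation would likely replace the single-shot decoder with a recurrent or autoregressive one and use a richer fusion mechanism, but the point of the sketch is the separation of spatial and temporal encoding followed by an explicit fusion stage.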
Key words
RadarNet, Radar extrapolation, Spatiotemporal prediction