How Can Large Language Models Understand Spatial-Temporal Data?
CoRR (2024)
Abstract
While Large Language Models (LLMs) dominate tasks like natural language
processing and computer vision, harnessing their power for spatial-temporal
forecasting remains challenging. The disparity between sequential text and
complex spatial-temporal data hinders this application. To address this issue,
this paper introduces STG-LLM, an innovative approach empowering LLMs for
spatial-temporal forecasting. We tackle the data mismatch by proposing: 1)
STG-Tokenizer: This spatial-temporal graph tokenizer transforms intricate graph
data into concise tokens capturing both spatial and temporal relationships; 2)
STG-Adapter: This minimalistic adapter, consisting of linear encoding and
decoding layers, bridges the gap between tokenized data and LLM comprehension.
By fine-tuning only a small set of parameters, it can effectively grasp the
semantics of tokens generated by STG-Tokenizer, while preserving the original
natural language understanding capabilities of LLMs. Extensive experiments on
diverse spatial-temporal benchmark datasets show that STG-LLM successfully
unlocks LLM potential for spatial-temporal forecasting. Remarkably, our
approach achieves competitive performance on par with dedicated SOTA methods.
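The abstract gives only a high-level description of the STG-Adapter (a linear encoder and decoder wrapped around a mostly frozen LLM), not its implementation. The sketch below illustrates that encode-LLM-decode pattern under stated assumptions: GPT-2 stands in for the backbone LLM, each graph node's history window is encoded as one token, and all names and parameters (STGAdapterSketch, hist_len, horizon, the node count) are hypothetical illustrations, not the authors' code.

```python
import torch
import torch.nn as nn
from transformers import GPT2Model  # GPT-2 stands in for the backbone LLM


class STGAdapterSketch(nn.Module):
    """Hypothetical adapter: linear encoder -> frozen LLM -> linear decoder."""

    def __init__(self, num_nodes, hist_len, horizon, llm_dim=768):
        super().__init__()
        self.llm = GPT2Model.from_pretrained("gpt2")
        self.llm.requires_grad_(False)  # keep the pretrained LLM frozen
        # Encode each node's history window into one LLM-sized token
        # (a stand-in for the tokens produced by an STG-Tokenizer).
        self.encoder = nn.Linear(hist_len, llm_dim)
        # Decode each node's final hidden state into the forecast horizon.
        self.decoder = nn.Linear(llm_dim, horizon)

    def forward(self, x):
        # x: (batch, num_nodes, hist_len) -- one token per graph node
        tokens = self.encoder(x)                            # (batch, num_nodes, llm_dim)
        hidden = self.llm(inputs_embeds=tokens).last_hidden_state
        return self.decoder(hidden)                         # (batch, num_nodes, horizon)


# Example shapes loosely following common traffic benchmarks
# (207 sensors, 12 input steps, 12 forecast steps); purely illustrative.
model = STGAdapterSketch(num_nodes=207, hist_len=12, horizon=12)
out = model(torch.randn(2, 207, 12))
print(out.shape)  # torch.Size([2, 207, 12])
```

Only the encoder and decoder parameters are trainable here, which mirrors the abstract's claim that fine-tuning a small set of parameters suffices while the LLM's original language capabilities stay intact.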