GraphTranslator: Aligning Graph Model to Large Language Model for Open-ended Tasks
CoRR (2024)
Abstract
Large language models (LLMs) like ChatGPT exhibit powerful zero-shot and
instruction-following capabilities and have catalyzed a revolutionary
transformation across diverse research fields of artificial intelligence,
especially for open-ended tasks. This idea remains less explored in the graph
domain: despite the availability of numerous powerful graph models (GMs), they
are restricted to tasks in a pre-defined form. Although several methods
applying LLMs to graphs have been proposed, they fail to handle pre-defined and
open-ended tasks simultaneously, using the LLM either as a node-feature
enhancer or as a standalone predictor. To break this dilemma, we propose to
bridge the pretrained GM and the LLM with a Translator, named GraphTranslator,
which leverages the GM to handle pre-defined tasks effectively and uses the
extended interface of the LLM to offer various open-ended tasks for the GM. To
train such a Translator, we propose a Producer capable of constructing
graph-text alignment data along node information, neighbor information, and
model information. By treating the node representation as a type of language,
the proposed GraphTranslator empowers an LLM to make predictions based on node
representations and language instructions, providing a unified perspective for
both pre-defined and open-ended tasks. Extensive results show that
GraphTranslator effectively improves zero-shot node classification. Graph
question-answering experiments reveal GraphTranslator's potential across a
broad spectrum of open-ended applications through language instructions.
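The abstract describes an architecture in which a Translator module maps node representations from a pretrained graph model into the token-embedding space of a frozen LLM, so the LLM can condition on them alongside a language instruction. The sketch below illustrates that data flow only; all dimensions, names, and the linear-projection Translator are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical dimensions -- chosen for illustration, not from the paper.
GM_DIM = 64      # size of node representations from the pretrained graph model
LLM_DIM = 128    # hidden size of the (frozen) language model
NUM_QUERY = 8    # number of "soft tokens" the Translator emits per node

rng = np.random.default_rng(0)

def graph_model(node_id: int) -> np.ndarray:
    """Stand-in for a pretrained GM: returns a node representation."""
    return rng.standard_normal(GM_DIM)

class Translator:
    """Projects a GM node representation into the LLM's token-embedding space.

    A single random linear map stands in for the trained alignment module.
    """
    def __init__(self) -> None:
        self.W = rng.standard_normal((GM_DIM, NUM_QUERY * LLM_DIM)) / np.sqrt(GM_DIM)

    def __call__(self, node_repr: np.ndarray) -> np.ndarray:
        # One node representation becomes NUM_QUERY soft tokens for the LLM.
        return (node_repr @ self.W).reshape(NUM_QUERY, LLM_DIM)

def build_llm_input(soft_tokens: np.ndarray,
                    instruction_embeds: np.ndarray) -> np.ndarray:
    """Concatenate translated node tokens with embedded instruction tokens."""
    return np.concatenate([soft_tokens, instruction_embeds], axis=0)

translator = Translator()
node_repr = graph_model(node_id=42)
soft_tokens = translator(node_repr)              # shape (NUM_QUERY, LLM_DIM)
instruction = rng.standard_normal((5, LLM_DIM))  # 5 embedded instruction tokens
llm_input = build_llm_input(soft_tokens, instruction)
print(llm_input.shape)  # (13, 128)
```

In the paper's setting, `llm_input` would be fed to the frozen LLM as a prefix of soft-prompt tokens followed by the instruction, letting one interface serve both pre-defined and open-ended tasks.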