Graph Neural Networks Automated Design and Deployment on Device-Edge Co-Inference Systems
arXiv (2024)

Abstract
The key to the device-edge co-inference paradigm is to partition models into
computation-friendly and computation-intensive parts, assigned to the device and
the edge, respectively. However, for Graph Neural Networks (GNNs), we find that
simply partitioning without altering their structures can hardly realize the
full potential of the co-inference paradigm, due to the varying computation and
communication overheads of GNN operations across heterogeneous devices. We
present GCoDE, the first automatic framework for GNNs that Co-designs the
architecture search and the mapping of each operation onto Device-Edge
hierarchies. GCoDE abstracts the device communication process as an explicit
operation and fuses the architecture search and the operation mapping into a
unified space for joint optimization. In addition, the performance-aware
approach used in GCoDE's constraint-based search enables effective evaluation
of architecture efficiency on diverse heterogeneous systems. We implement a
co-inference engine and runtime dispatcher in GCoDE to enhance deployment
efficiency. Experimental results show that GCoDE achieves up to 44.9× speedup
and 98.2% energy reduction compared to existing approaches across various
applications and system configurations.
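The core idea of treating device-edge communication as an explicit operation in the mapping search can be sketched roughly as follows. This is a minimal illustration, not GCoDE's actual implementation: the operation names, latency numbers, and the exhaustive search over a linear pipeline are all assumptions for the sake of the example.

```python
from itertools import product

# Hypothetical per-operation compute latencies (ms) on each side.
# Real systems would profile these on the target hardware.
COMPUTE_MS = {
    "aggregate": {"device": 4.0, "edge": 1.0},
    "combine":   {"device": 6.0, "edge": 1.5},
    "readout":   {"device": 0.5, "edge": 0.3},
}
COMM_MS = 3.0  # assumed cost of one device<->edge transfer, modeled as an explicit op

def pipeline_latency(ops, mapping):
    """Latency of a linear GNN op pipeline under a device/edge mapping.
    The raw graph is assumed to reside on the device, so a transfer is
    charged whenever execution crosses the device-edge boundary."""
    total = COMM_MS if mapping[0] == "edge" else 0.0  # initial upload if needed
    for i, op in enumerate(ops):
        total += COMPUTE_MS[op][mapping[i]]
        if i > 0 and mapping[i] != mapping[i - 1]:
            total += COMM_MS  # explicit communication operation between sides
    return total

def best_mapping(ops):
    """Exhaustive search over all 2^n assignments (fine for tiny pipelines;
    a real co-design framework searches architecture and mapping jointly)."""
    candidates = product(["device", "edge"], repeat=len(ops))
    return min(candidates, key=lambda m: pipeline_latency(ops, m))

ops = ["aggregate", "combine", "readout"]
m = best_mapping(ops)
print(m, pipeline_latency(ops, m))
```

With these illustrative numbers, offloading the whole pipeline to the edge wins despite the upload cost; changing `COMM_MS` or the per-op latencies shifts the optimal split, which is why the mapping must be searched rather than fixed.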