Edge Intelligence Optimization for Large Language Model Inference with Batching and Quantization
CoRR (2024)
Abstract
Generative Artificial Intelligence (GAI) is taking the world by storm with
its unparalleled content creation ability. Large Language Models (LLMs) are at
the forefront of this movement. However, the significant resource demands of
LLMs often require cloud hosting, which raises issues regarding privacy,
latency, and usage limitations. Although edge intelligence has long been
utilized to solve these challenges by enabling real-time AI computation on
ubiquitous edge resources close to data sources, most research has focused on
traditional AI models and has left a gap in addressing the unique
characteristics of LLM inference, such as considerable model size,
auto-regressive processes, and self-attention mechanisms. In this paper, we
present an edge intelligence optimization problem tailored for LLM inference.
Specifically, with the deployment of the batching technique and model
quantization on resource-limited edge devices, we formulate an inference model
for transformer decoder-based LLMs. Furthermore, our approach aims to maximize
the inference throughput via batch scheduling and joint allocation of
communication and computation resources, while also considering edge resource
constraints and varying user requirements for latency and accuracy. To address
this NP-hard problem, we develop an optimal Depth-First Tree-Searching
algorithm with online tree-Pruning (DFTSP) that operates within a feasible time
complexity. Simulation results indicate that DFTSP surpasses other batching
benchmarks in throughput across diverse user settings and quantization
techniques, and it reduces time complexity by over 45% compared with the
brute-force searching method.
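The abstract's central algorithmic idea, a depth-first search over candidate batches that prunes subtrees online once they can no longer beat the incumbent solution, can be illustrated compactly. Below is a minimal Python sketch of that search pattern under an assumed toy cost model (per-token decoding time t_base + t_per_req * batch_size and a flat per-request memory footprint). Every name, constraint, and the pruning bound here are illustrative assumptions, not the paper's exact DFTSP formulation, which additionally allocates communication and computation resources jointly.

```python
# A minimal sketch of depth-first tree search with online pruning for batch
# selection, in the spirit of the DFTSP algorithm the abstract describes.
# The cost model, names, and pruning bound are assumptions for illustration.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Request:
    out_tokens: int      # expected auto-regressive output length
    max_latency: float   # user latency requirement in seconds

def dftsp_sketch(reqs: List[Request], mem_budget: float, mem_per_req: float,
                 t_base: float, t_per_req: float) -> Tuple[float, List[int]]:
    """Search subsets of requests for the batch maximizing throughput
    (requests finished per unit decoding time), pruning branches whose
    optimistic bound cannot beat the incumbent. Per-token step time is
    modeled as t_base + t_per_req * batch_size."""
    n = len(reqs)
    best_tp, best_batch = 0.0, []

    def step_time(b: int) -> float:
        return t_base + t_per_req * b

    def feasible(batch: List[int]) -> bool:
        if len(batch) * mem_per_req > mem_budget:      # edge memory limit
            return False
        per_token = step_time(len(batch))              # decoding slows with batch size
        return all(reqs[i].out_tokens * per_token <= reqs[i].max_latency
                   for i in batch)

    def dfs(i: int, batch: List[int]) -> None:
        nonlocal best_tp, best_batch
        if i == n:
            b = len(batch)
            tp = b / step_time(b) if b else 0.0
            if tp > best_tp:
                best_tp, best_batch = tp, list(batch)
            return
        # Online pruning: throughput b / (t_base + t_per_req * b) increases
        # with b here, so the largest reachable batch bounds this subtree.
        b_max = len(batch) + (n - i)
        if b_max / step_time(b_max) <= best_tp:
            return
        batch.append(i)                                # branch: admit request i
        if feasible(batch):                            # constraints are monotone,
            dfs(i + 1, batch)                          # so infeasible prefixes prune
        batch.pop()
        dfs(i + 1, batch)                              # branch: reject request i

    dfs(0, [])
    return best_tp, best_batch

# Example: three requests on a toy edge device.
if __name__ == "__main__":
    reqs = [Request(64, 5.0), Request(128, 8.0), Request(32, 2.0)]
    print(dftsp_sketch(reqs, mem_budget=3.0, mem_per_req=1.0,
                       t_base=0.02, t_per_req=0.01))
```

The pruning step is sound in this toy model because throughput b / (t_base + t_per_req * b) grows monotonically with batch size b, so the largest batch reachable from the current node upper-bounds every completion of that branch; the latency and memory constraints are likewise monotone, so an infeasible partial batch can be discarded outright.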
Keywords
Generative AI, large language model, edge intelligence, wireless networks