Mixture of Experts for Network Optimization: A Large Language Model-enabled Approach
CoRR (2024)
Abstract
Optimizing various wireless user tasks poses a significant challenge for
networking systems because of the expanding range of user requirements. Despite
advancements in Deep Reinforcement Learning (DRL), the need to customize
optimization for each individual user complicates the development and
deployment of numerous DRL models, resulting in substantial computational
resource and energy consumption and potentially inconsistent outcomes. To address this issue, we
propose a novel approach utilizing a Mixture of Experts (MoE) framework,
augmented with Large Language Models (LLMs), to analyze user objectives and
constraints effectively, select specialized DRL experts, and weigh each
decision from the participating experts. Specifically, we develop a gate
network to oversee the expert models, allowing a collective of experts to
tackle a wide array of new tasks. Furthermore, we innovatively substitute the
traditional gate network with an LLM, leveraging its advanced reasoning
capabilities to manage expert model selection for joint decisions. Our proposed
method reduces the need to train new DRL models for each unique optimization
problem, decreasing energy consumption and AI model implementation costs. The
LLM-enabled MoE approach is validated through a general maze navigation task
and a specific network service provider utility maximization task,
demonstrating its effectiveness and practical applicability in optimizing
complex networking systems.
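The core mechanism the abstract describes, a gate that weighs the decisions of several pre-trained experts into one joint decision, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function and variable names (`moe_decision`, `gate_logits`, `expert_action_values`) are hypothetical, and the gate scores here stand in for either a learned gate network or an LLM acting as the gate.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax to turn gate scores into expert weights.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def moe_decision(gate_logits, expert_action_values):
    """Combine expert decisions under gate-assigned weights (illustrative).

    gate_logits: (n_experts,) scores from the gate (or an LLM acting as gate).
    expert_action_values: (n_experts, n_actions) per-expert action preferences.
    Returns the index of the jointly preferred action.
    """
    weights = softmax(np.asarray(gate_logits, dtype=float))
    # Weighted sum of each expert's action preferences, then pick the best.
    combined = weights @ np.asarray(expert_action_values, dtype=float)
    return int(np.argmax(combined))

# Example: two experts, three actions; the gate strongly favors expert 0,
# so the joint decision follows that expert's top action.
action = moe_decision([2.0, 0.5], [[0.9, 0.1, 0.0], [0.1, 0.2, 0.7]])
```

In the paper's setting, each row of `expert_action_values` would come from a specialized DRL expert, so no new model needs to be trained for a new task; only the gate's weighting changes.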