MANAS: Multi-Agent Neural Architecture Search

arXiv (2024)

Abstract
The Neural Architecture Search (NAS) problem is typically formulated as a graph search problem where the goal is to learn the optimal operations over edges in order to maximize a graph-level global objective. Due to the large architecture parameter space, efficiency is a key bottleneck preventing the practical use of NAS. In this work, we address the issue by framing NAS as a multi-agent problem where agents control a subset of the network and coordinate to reach optimal architectures. We provide two distinct lightweight implementations with reduced memory requirements (1/8th of the state of the art) and performance above that of much more computationally expensive methods. Theoretically, we demonstrate vanishing regrets of the form $\mathcal{O}(\sqrt{T})$, with $T$ being the total number of rounds. Finally, we perform experiments on CIFAR-10 and ImageNet and, since random search and random sampling are (often ignored) effective baselines, we conduct additional experiments on 3 alternative datasets, with complexity constraints, and 2 network configurations, achieving competitive results in comparison with the baselines and other methods.
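The sketch below illustrates, in broad strokes, the multi-agent bandit framing described in the abstract: each edge of the architecture graph is controlled by its own bandit agent that samples an operation, and all agents receive the same graph-level reward. This is a minimal illustration under assumptions, not the authors' implementation; the EXP3-style update, the operation set CANDIDATE_OPS, and the names EdgeAgent, validation_reward, and search are hypothetical placeholders (in particular, validation_reward stands in for actually training and evaluating the sampled architecture).

```python
# Illustrative sketch only: NAS edge-operation selection framed as a set of
# coordinating multi-armed bandit agents sharing a graph-level reward.
# The update rule, operation set, and reward function are assumptions made
# for illustration, not the method from the paper.
import math
import random

CANDIDATE_OPS = ["conv3x3", "conv5x5", "maxpool3x3", "skip", "zero"]  # hypothetical op set


class EdgeAgent:
    """Bandit agent controlling the operation choice on a single edge."""

    def __init__(self, n_ops, gamma=0.1):
        self.weights = [1.0] * n_ops
        self.gamma = gamma
        self.n_ops = n_ops
        self.last_choice = None
        self.last_prob = None

    def probabilities(self):
        total = sum(self.weights)
        return [(1 - self.gamma) * w / total + self.gamma / self.n_ops
                for w in self.weights]

    def sample(self):
        probs = self.probabilities()
        choice = random.choices(range(self.n_ops), weights=probs)[0]
        self.last_choice, self.last_prob = choice, probs[choice]
        return choice

    def update(self, reward):
        # Generic EXP3-style importance-weighted update with a shared reward in [0, 1].
        estimate = reward / self.last_prob
        self.weights[self.last_choice] *= math.exp(self.gamma * estimate / self.n_ops)


def validation_reward(ops):
    """Stand-in for training the sampled architecture and measuring validation accuracy."""
    # Hypothetical reward, only so the demo runs end to end.
    return sum(op != "zero" for op in ops) / len(ops)


def search(n_edges=8, n_rounds=200):
    agents = [EdgeAgent(len(CANDIDATE_OPS)) for _ in range(n_edges)]
    for _ in range(n_rounds):
        choices = [agent.sample() for agent in agents]   # each agent picks its edge's operation
        ops = [CANDIDATE_OPS[c] for c in choices]
        reward = validation_reward(ops)                  # one global, graph-level signal
        for agent in agents:                             # broadcast the reward to all agents
            agent.update(reward)
    # Return the highest-weight operation per edge as the final architecture.
    return [CANDIDATE_OPS[max(range(len(a.weights)), key=a.weights.__getitem__)]
            for a in agents]


if __name__ == "__main__":
    print(search())
```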
Keywords
Neural architecture search, Multi-armed bandits, AutoML, Computer vision, Object recognition