Multi-objective Neural Architecture Search via Non-stationary Policy Gradient

arXiv (2020)

Abstract
Multi-objective Neural Architecture Search (NAS) aims to discover novel architectures in the presence of multiple conflicting objectives. Recent approaches based on scalarization and evolution have yielded promising results, but accurately and efficiently approximating the full Pareto front remains challenging. To this end, we explore in this work the novel reinforcement learning based paradigm of non-stationary policy gradient (NPG). NPG utilizes a non-stationary reward function and encourages continuous adaptation of the policy to capture the entire Pareto front efficiently. We introduce two novel reward functions with elements from scalarization and evolution. To handle non-stationarity, we propose a new exploration scheme using cosine temperature decay with warm restarts. For fast and accurate architecture evaluation, we introduce a novel pre-trained shared model that we continuously fine-tune throughout training. Our extensive experimental study on CIFAR-10, CIFAR-100, and ImageNet shows that our framework can uncover a representative Pareto front quickly, while achieving predictive performance superior to other multi-objective NAS methods, and to many state-of-the-art NAS methods at similar network sizes. Our work demonstrates the potential of NPG as a simple, fast, and effective paradigm for multi-objective NAS.
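The abstract mentions an exploration scheme based on cosine temperature decay with warm restarts, but does not give the schedule itself. A minimal sketch, assuming an SGDR-style cosine schedule and hypothetical temperature bounds (`t_max`, `t_min`) and restart `period` not specified in the abstract:

```python
import math

def cosine_temperature(step, period, t_max=1.0, t_min=0.1):
    """Cosine temperature decay with warm restarts (hypothetical parameters).

    The temperature anneals from t_max down toward t_min over `period`
    steps, then resets to t_max. Periodic restarts re-raise the sampling
    temperature, which can re-encourage exploration as the non-stationary
    reward shifts the policy along the Pareto front.
    """
    t_cur = step % period  # position within the current restart cycle
    return t_min + 0.5 * (t_max - t_min) * (1 + math.cos(math.pi * t_cur / period))
```

Here the temperature would scale the logits of the architecture-sampling policy: high temperature after each restart yields near-uniform sampling, and the cosine decay gradually sharpens the policy before the next restart.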
Keywords
architecture, search, multi-objective, non-stationary