Training-free neural architecture search: A review

Meng-Ting Wu, Chun-Wei Tsai

ICT Express (2024)

Abstract
The goal of neural architecture search (NAS) is to downsize the neural architecture and model of a deep neural network (DNN), adjust a neural architecture to improve its end result, or speed up the whole training process. Such improvements make it possible to generate or install the model of a DNN on a small device, such as an Internet of Things device or a wireless sensor network node. Because most NAS algorithms are time-consuming, finding ways to reduce their computation cost has become a critical research issue. The training-free method (also called zero-shot learning) provides a more efficient way to estimate the quality of a neural architecture during the NAS process by using a lightweight score function instead of a full training run, thereby avoiding heavy computation costs. This paper starts with a brief discussion of DNNs and NAS, followed by a brief review of both model-dependent and model-independent training-free score functions. A brief introduction to the search algorithms and benchmarks widely used in training-free NAS is also given. The challenges, potential, open issues, and future trends of this research topic are then addressed at the end of the paper. (c) 2023 The Author(s). Published by Elsevier B.V. on behalf of The Korean Institute of Communications and Information Sciences. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
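
To make the idea of a lightweight score function concrete, the sketch below ranks untrained candidate architectures with a SNIP-style saliency proxy, one example of the zero-cost scores this kind of review surveys. It is a minimal illustration under assumed settings: the candidate networks, input shape, and batch size are hypothetical and not taken from the paper, and real training-free NAS pipelines combine such proxies with a search algorithm over a benchmark space.

import torch
import torch.nn as nn

def snip_like_score(model: nn.Module, inputs: torch.Tensor, targets: torch.Tensor) -> float:
    """Score an untrained network by summing the SNIP-style saliency
    |g * w| of its weights after a single backward pass; no training loop,
    so the cost per candidate is one forward and one backward pass."""
    model.zero_grad()
    loss = nn.functional.cross_entropy(model(inputs), targets)
    loss.backward()
    score = 0.0
    for p in model.parameters():
        if p.grad is not None:
            score += (p.grad * p).abs().sum().item()
    return score

if __name__ == "__main__":
    # Hypothetical minibatch: 16 CIFAR-10-sized images with random labels.
    batch = torch.randn(16, 3, 32, 32)
    labels = torch.randint(0, 10, (16,))
    # Two toy candidate architectures at random initialization.
    candidates = [
        nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU(), nn.Linear(64, 10)),
        nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU(), nn.Linear(256, 10)),
    ]
    # Pick the candidate with the highest proxy score, without training either one.
    best = max(candidates, key=lambda m: snip_like_score(m, batch, labels))
    print(best)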
Keywords
Neural architecture search, Deep neural network, Training-free, Zero-shot, Internet of things