Hyperparameter Optimization Machines

2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), 2016

Citations 29 | Views 60
Abstract
Algorithm selection and hyperparameter tuning are omnipresent problems for researchers and practitioners. Hence, it is not surprising that efforts to automate this process using various meta-learning approaches have increased. Sequential model-based optimization (SMBO) is one of the most popular frameworks for finding optimal hyperparameter configurations. SMBO was originally designed for black-box optimization; researchers have since contributed different meta-learning approaches to speed up the optimization process. We create a generalized framework of SMBO and its recent additions, which gives access to adaptive hyperparameter transfer learning with simple surrogates (AHT), a new class of hyperparameter optimization strategies. AHT reduces the time overhead of the optimization process by replacing time- and space-consuming transfer surrogate models with simple surrogates that employ adaptive transfer learning. In an empirical comparison on two different meta-data sets, we show that AHT outperforms various instances of the SMBO framework in the scenarios of hyperparameter tuning and algorithm selection.
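
To make the SMBO framework referenced in the abstract concrete, below is a minimal, illustrative sketch of a generic SMBO loop: a Gaussian-process surrogate with an expected-improvement acquisition function over a toy one-dimensional search space. This is not the paper's AHT strategy or its transfer-learning surrogates; the objective function, search bounds, and iteration counts are all hypothetical stand-ins.

```python
# Minimal SMBO sketch (illustrative only, not the paper's AHT method):
# fit a surrogate to observed (configuration, loss) pairs, then pick the
# next configuration by maximizing expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def objective(x):
    # Hypothetical stand-in for an expensive validation-loss evaluation.
    return np.sin(3 * x) + 0.1 * x ** 2

def expected_improvement(mu, sigma, best):
    # EI for minimization: expected amount by which a candidate
    # improves on the best loss observed so far.
    sigma = np.maximum(sigma, 1e-12)
    z = (best - mu) / sigma
    return (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
candidates = np.linspace(-2.0, 2.0, 500).reshape(-1, 1)

# Initial design: a few random hyperparameter configurations.
X = rng.uniform(-2.0, 2.0, size=(3, 1))
y = np.array([objective(x[0]) for x in X])

for _ in range(20):
    # Refit the surrogate on all observations so far.
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mu, sigma = gp.predict(candidates, return_std=True)
    # Evaluate the configuration with the highest expected improvement.
    x_next = candidates[np.argmax(expected_improvement(mu, sigma, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))

print("best configuration:", X[np.argmin(y)], "loss:", y.min())
```

The EI acquisition trades off exploitation (candidates with low predicted loss) against exploration (candidates with high predictive uncertainty); the paper's contribution concerns replacing the heavyweight transfer surrogate in such a loop with a simple surrogate plus adaptive transfer learning.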
Keywords
hyperparameter optimization, meta-learning, transfer learning