DeepQGHO: Quantized Greedy Hyperparameter Optimization in Deep Neural Networks for on-the-fly Learning

IEEE ACCESS (2021)

Abstract
Hyperparameter optimization, or tuning, plays a significant role in the performance and reliability of deep learning (DL). Many hyperparameter optimization algorithms have been developed to obtain better validation accuracy in DL training. Most state-of-the-art hyperparameter optimization algorithms are computationally expensive because they focus on maximizing validation accuracy, which makes them unsuitable for online or on-the-fly training applications that require computational efficiency. In this paper, we develop a novel greedy-approach-based hyperparameter optimization (GHO) algorithm for faster training applications, e.g., on-the-fly training. We perform an empirical study to measure performance metrics such as computation time and energy consumption of GHO and compare it with two state-of-the-art hyperparameter optimization algorithms. We also deploy the GHO algorithm on an edge device to validate its performance. Finally, we apply post-training quantization to the GHO-trained model to reduce inference time and latency.
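
A minimal sketch of a greedy, coordinate-wise hyperparameter search in the spirit described above. This is an illustration only, not the authors' GHO algorithm; the hyperparameter grid and the train_and_validate function are hypothetical placeholders.

    # Greedy, coordinate-wise hyperparameter search (illustrative sketch).
    # Each hyperparameter is tuned one at a time while the others are held
    # at their current best values, avoiding a full Cartesian grid search.

    def train_and_validate(config):
        # Placeholder objective: replace with real training + validation.
        # Here a dummy score favors smaller learning rates, for illustration only.
        return 1.0 - config["learning_rate"]

    SEARCH_SPACE = {                      # hypothetical discrete candidate grids
        "learning_rate": [1e-1, 1e-2, 1e-3],
        "batch_size":    [16, 32, 64],
        "dropout":       [0.0, 0.25, 0.5],
    }

    def greedy_search(search_space, evaluate):
        # Start from the first candidate of every hyperparameter.
        config = {name: values[0] for name, values in search_space.items()}
        best_acc = evaluate(config)
        # Sweep one hyperparameter at a time, keeping any change that improves
        # validation accuracy over the incumbent configuration.
        for name, values in search_space.items():
            for value in values[1:]:
                candidate = dict(config, **{name: value})
                acc = evaluate(candidate)
                if acc > best_acc:
                    best_acc, config = acc, candidate
        return config, best_acc

    best_config, best_acc = greedy_search(SEARCH_SPACE, train_and_validate)

The post-training quantization step mentioned in the abstract is commonly done with a tool such as the TensorFlow Lite converter; the abstract does not specify a toolchain, so the following is only one possible sketch.

    import tensorflow as tf

    # Post-training quantization of a trained model for edge deployment
    # (illustrative; "saved_model_dir" is a hypothetical path).
    converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
    converter.optimizations = [tf.lite.Optimize.DEFAULT]  # default post-training quantization
    tflite_model = converter.convert()
    with open("model_quant.tflite", "wb") as f:
        f.write(tflite_model)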
Key words
Greedy algorithm, deep learning, neural networks, online learning, hyperparameter optimization, TinyML, quantization