Feedforward Neural Network Based on Nonmonotone Conjugate Gradient Method for Multiple Classification

2021 International Joint Conference on Neural Networks (IJCNN)

Abstract
To address the tendency of the steepest gradient descent method to become trapped in local optima when training feedforward neural networks, this paper proposes an algorithm based on a nonmonotone conjugate gradient method to optimize the training of feedforward neural networks for multiple classification tasks. More specifically, the proposed method combines the conjugate gradient descent method with a nonmonotone line search technique, making full use of the information in the loss function and its gradients, and thereby achieving a faster convergence rate than traditional gradient descent without increasing memory consumption. Extensive experiments on the UCI database indicate that the proposed algorithm possesses good generalization ability and performs well on multi-classification problems, as measured by accuracy and Macro-F1 score.
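The combination the abstract describes, a conjugate gradient search direction paired with a nonmonotone acceptance rule, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses a Polak-Ribière+ direction update and a Grippo-Lampariello-Lucidi style nonmonotone Armijo test, which accepts a trial step if it improves on the maximum of the last M objective values rather than only the most recent one. The function name `nonmonotone_cg` and its parameters are hypothetical names chosen for this sketch.

```python
import numpy as np

def nonmonotone_cg(f, grad, x0, M=5, c=1e-4, tol=1e-6, max_iter=500):
    """Polak-Ribiere+ conjugate gradient with a nonmonotone Armijo line search.

    Hypothetical sketch: a trial step is accepted against the maximum of the
    last M objective values (nonmonotone rule), not the latest value alone.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                      # first direction: steepest descent
    history = [f(x)]            # recent objective values for the nonmonotone test
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        f_ref = max(history[-M:])            # nonmonotone reference value
        # backtrack until the nonmonotone Armijo condition holds
        while f(x + alpha * d) > f_ref + c * alpha * (g @ d):
            alpha *= 0.5
            if alpha < 1e-12:                # give up on this direction
                break
        x_new = x + alpha * d
        g_new = grad(x_new)
        # PR+ update; clipping beta at 0 restarts with steepest descent
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))
        d = -g_new + beta * d
        x, g = x_new, g_new
        history.append(f(x))
    return x
```

In neural network training, `f` would be the multi-class loss over the training set and `grad` its gradient with respect to the flattened weight vector; the nonmonotone rule lets the loss rise temporarily, which is what helps escape the local optima mentioned in the abstract.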
Key words
feedforward neural network, multiple classification task, conjugate gradient descent method, nonmonotone conjugate gradient method, steepest gradient descent method, loss function, line search technique