Adaptively Customizing Activation Functions for Various Layers

arXiv (2023)

Abstract
To enhance the nonlinearity of neural networks and increase their ability to map inputs to response variables, activation functions play a crucial role in modeling complex relationships and patterns in data. In this work, a novel methodology is proposed to adaptively customize activation functions by adding only very few parameters to traditional activation functions such as Sigmoid, Tanh, and the rectified linear unit (ReLU). To verify the effectiveness of the proposed methodology, theoretical and experimental analyses on accelerating convergence and improving performance are presented, and a series of experiments are conducted on various network models (such as AlexNet, VggNet, GoogLeNet, ResNet, and DenseNet) and various datasets (such as CIFAR10, CIFAR100, miniImageNet, PASCAL VOC, and COCO). To further verify its validity and suitability across optimization strategies and usage scenarios, comparison experiments are also carried out among different optimization strategies (such as SGD, Momentum, AdaGrad, AdaDelta, and ADAM) and different recognition tasks such as classification and detection. The results show that the proposed methodology is very simple yet yields significant gains in convergence speed, precision, and generalization, and that it surpasses popular activations such as ReLU and adaptive activations such as Swish in almost all experiments in terms of overall performance.
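The abstract describes the core idea only at a high level: each layer's activation function gains a small number of trainable parameters that are learned jointly with the network weights. The exact parameterization is not specified in the abstract, so the sketch below assumes a simple hypothetical form, relu(k * x) with one learnable scalar k per layer, purely to illustrate how such an adaptive activation could be dropped into a standard PyTorch model.

```python
import torch
import torch.nn as nn


class AdaptiveReLU(nn.Module):
    """Illustrative layer-wise adaptive ReLU: relu(k * x) with a learnable k.

    The parametric form here is an assumption for illustration; the paper's
    actual customization of Sigmoid/Tanh/ReLU may differ. The point is that
    only one extra parameter per layer is added and trained by backprop.
    """

    def __init__(self, init_k: float = 1.0):
        super().__init__()
        # One extra trainable scalar for this layer, shared across channels.
        self.k = nn.Parameter(torch.tensor(init_k))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.k * x)


# Usage: replace a fixed ReLU with the adaptive version; each layer then
# tunes the shape of its own activation during training.
model = nn.Sequential(
    nn.Linear(32, 64),
    AdaptiveReLU(),
    nn.Linear(64, 10),
)
```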
Keywords
Neural networks, Training, Shape, Adaptive systems, Mathematical models, Learning systems, Deep learning, Adaptable parameters, adaptive activation function, deep learning, various layers