Dynamically Generated Compact Neural Networks for Task Progressive Learning

ISCAS (2020)

Abstract
Task progressive learning is often required when training data become available in batches over time. Such learning uses an existing model, trained on a set of tasks, to learn a new task while maintaining accuracy on the older tasks. Artificial Neural Networks (ANNs) have a higher capacity for progressive learning than traditional machine learning models because of their large number of trainable parameters. However, a progressive model built on a fully connected ANN suffers from long training time, overfitting, and excessive resource usage. It is therefore necessary to generate the ANN incrementally as new tasks arrive and new training is needed. In this paper, an incremental algorithm is presented that dynamically generates a compact neural network by pruning and expanding the synaptic weights based on the learning requirements of the new tasks. The algorithm is implemented, analyzed, and validated using the cloud network security datasets UNSW and AWID, as well as the image dataset MNIST.
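
The prune-and-expand mechanism described in the abstract can be pictured with a small sketch. The PyTorch code below is a minimal, hypothetical illustration, assuming magnitude-based pruning of the smallest synaptic weights and hidden-layer width expansion for each new task; the paper's actual pruning criterion, expansion rule, and training schedule are not given on this page and may differ.

```python
# Hypothetical sketch of prune-and-expand progressive learning.
# Assumptions (not from the paper): magnitude-based pruning of the
# first layer's weights, and widening of the hidden layer per new task.
import torch
import torch.nn as nn

class ProgressiveMLP(nn.Module):
    """A two-layer MLP whose hidden layer can be pruned and expanded."""

    def __init__(self, in_dim: int, hidden: int, out_dim: int):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_dim)
        # Binary mask over fc1's weights: zero entries are pruned synapses.
        self.register_buffer("mask1", torch.ones_like(self.fc1.weight))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Multiplying by the mask keeps pruned synapses at zero and
        # also zeroes their gradients during later training.
        h = nn.functional.linear(x, self.fc1.weight * self.mask1, self.fc1.bias)
        return self.fc2(torch.relu(h))

    def prune(self, fraction: float = 0.3) -> None:
        """Zero out the smallest-magnitude synaptic weights in fc1."""
        w = self.fc1.weight.detach().abs() * self.mask1
        k = max(1, int(fraction * w.numel()))
        threshold = w.flatten().kthvalue(k).values
        # Already-pruned entries have magnitude 0, so they stay pruned.
        self.mask1 *= (w > threshold).float()

    def expand(self, extra_units: int) -> None:
        """Add hidden units for a new task, keeping old weights intact."""
        old1, old2 = self.fc1, self.fc2
        hidden = old1.out_features + extra_units
        self.fc1 = nn.Linear(old1.in_features, hidden)
        self.fc2 = nn.Linear(hidden, old2.out_features)
        with torch.no_grad():
            # Copy the previously learned weights into the enlarged layers.
            self.fc1.weight[: old1.out_features] = old1.weight
            self.fc1.bias[: old1.out_features] = old1.bias
            self.fc2.weight[:, : old1.out_features] = old2.weight
            self.fc2.bias.copy_(old2.bias)
        # New units start fully connected; old units keep their mask.
        new_mask = torch.ones_like(self.fc1.weight)
        new_mask[: old1.out_features] = self.mask1
        self.mask1 = new_mask
```

In such a scheme, a training loop would alternate train, prune, and expand as each new task's data batch arrives, so the network stays compact while gaining just enough capacity for the new task.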
Keywords
task progressive learning,training data,older tasks,artificial neural networks,traditional machine learning models,ANN parameters,progressive model,fully connected ANN,long training time,compact neural network,learning requirements,dynamically generated compact neural networks,UNSW,AWID,MNIST,image dataset,cloud network security datasets