Continual Learning of LSTM Using Ant Colony Optimization

2023 IEEE Congress on Evolutionary Computation (CEC)

Abstract
The premise of continual learning is to continuously learn new tasks with the same model in an environment where multiple tasks arrive sequentially, while retaining the knowledge acquired on previous tasks. Typical deep learning models suffer from catastrophic forgetting, in which knowledge of past tasks is drastically lost when learning new tasks, and LSTM (Long Short-Term Memory) is no exception. Replay-based methods are among the most promising approaches to continual learning; however, they are prone to overfitting the memory buffer, which harms generalization. Meanwhile, Ant Colony Optimization (ACO) is an algorithm widely used for combinatorial optimization problems and has been applied to the structural optimization of LSTMs. In this study, we propose a continual learning method for LSTM that uses ACO to reduce catastrophic forgetting. The method iteratively optimizes the internal structure of the LSTM in parallel with training on the current task, using model performance as the fitness. It also extends the replay-based approach by using two kinds of memory buffers to reduce overfitting to the memory. The proposed method was tested on four benchmark problems, and the results indicate its effectiveness, especially when the memory size is small.
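The abstract outlines the core loop: ants sample candidate internal structures for the LSTM, each candidate is scored on the current task plus replayed samples, and the pheromone trail is updated toward the best structure. The following is a minimal sketch of that loop, not the authors' implementation: the connection granularity, buffer sizes, and the placeholder evaluate_fitness function are all assumptions made for illustration.

```python
# Sketch: ACO over binary masks that enable/disable LSTM internal connections,
# interleaved with two replay buffers. All names and constants are assumptions.
import random
import numpy as np

rng = np.random.default_rng(0)

N_CONNECTIONS = 64   # assumed number of candidate internal connections
N_ANTS = 10
EVAPORATION = 0.1

pheromone = np.full(N_CONNECTIONS, 0.5)

def sample_mask(pheromone):
    """Each ant keeps a connection with probability given by its pheromone level."""
    return (rng.random(N_CONNECTIONS) < pheromone).astype(float)

def evaluate_fitness(mask, task_data, memory):
    """Placeholder fitness: in practice, train/evaluate the masked LSTM on the
    current task plus replayed samples and return validation performance."""
    return float(mask.mean())  # dummy value so the sketch runs end to end

class ReservoirBuffer:
    """Fixed-size memory buffer filled by reservoir sampling over the task stream."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.items = []
        self.seen = 0

    def add(self, item):
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(item)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = item

# Two buffers, e.g. one used for ACO fitness evaluation and one for replay
# during gradient training (this split and the sizes are assumptions).
fitness_buffer = ReservoirBuffer(capacity=50)
replay_buffer = ReservoirBuffer(capacity=200)

def aco_step(task_data):
    """One ACO iteration run alongside training on the current task."""
    global pheromone
    ants = [sample_mask(pheromone) for _ in range(N_ANTS)]
    scores = [evaluate_fitness(m, task_data, fitness_buffer.items) for m in ants]
    best = ants[int(np.argmax(scores))]
    # Evaporate, then deposit pheromone on connections used by the best ant.
    pheromone = (1 - EVAPORATION) * pheromone + EVAPORATION * best
    return best, max(scores)

if __name__ == "__main__":
    fake_task = [(rng.random(8), int(rng.integers(0, 2))) for _ in range(100)]
    for sample in fake_task:
        fitness_buffer.add(sample)
        replay_buffer.add(sample)
    for it in range(5):
        mask, score = aco_step(fake_task)
        print(f"iter {it}: fitness {score:.3f}, active connections {int(mask.sum())}")
```

In this sketch the pheromone vector plays the role of a per-connection keep probability; how the paper encodes LSTM structure and splits responsibilities between the two buffers would need to be taken from the full text.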
Key words
continual learning, long short-term memory, ant colony optimization, replay-based method