TaskDrop: A competitive baseline for continual learning of sentiment classification

Neural Networks (2022)

Abstract
In this paper, we study the multi-task sentiment classification problem in the continual learning setting, i.e., a model is sequentially trained to classify the sentiment of reviews of products, one product category at a time. The use of common sentiment words in reviews across product categories leads to large cross-task similarity, which distinguishes this problem from continual learning in other domains. This knowledge-sharing nature renders approaches focused on forgetting reduction less effective for the problem under consideration. Unlike existing approaches, in which task-specific masks are learned with specially designed training objectives, we propose an approach called Task-aware Dropout (TaskDrop) that randomly samples a binary mask for each task. Whereas standard dropout generates and applies a random mask to each training instance in every epoch for regularization, the random masks in TaskDrop are used to allocate and reuse model capacity for each incoming task. We conducted experimental studies on Amazon review data and compared TaskDrop with various baselines and state-of-the-art approaches. Our empirical results show that, despite its simplicity, TaskDrop achieved competitive overall performance, especially after relatively long-term learning. This demonstrates that the proposed random capacity allocation mechanism works well for continual sentiment classification.
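The core mechanism described above, sampling one fixed random binary mask per task rather than per training instance, can be sketched as follows. This is a minimal illustration under assumptions not spelled out in the abstract: the function names (`task_mask`, `apply_mask`), the seeding scheme, and the keep probability are all hypothetical choices, not the authors' implementation.

```python
import numpy as np

def task_mask(task_id: int, hidden_dim: int,
              keep_prob: float = 0.5, seed_base: int = 0) -> np.ndarray:
    """Sample a fixed binary mask for a task.

    Seeding by task_id means the same task always receives the same
    mask, unlike standard dropout, which resamples per instance.
    """
    rng = np.random.default_rng(seed_base + task_id)
    return (rng.random(hidden_dim) < keep_prob).astype(np.float32)

def apply_mask(hidden: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Zero out the hidden units not allocated to the current task."""
    return hidden * mask

# Overlapping masks across tasks allow capacity reuse, which is the
# hypothesized source of knowledge transfer between similar tasks.
mask_a = task_mask(task_id=0, hidden_dim=16)
mask_b = task_mask(task_id=1, hidden_dim=16)
shared_units = int(np.sum(mask_a * mask_b))
```

Because the masks are random rather than learned, units shared by two masks carry knowledge across those tasks for free, with no auxiliary training objective, which is consistent with the paper's claim that presumed mask-learning objectives are unnecessary in this high-similarity setting.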
Key words
Continual learning, Sentiment classification, Catastrophic forgetting, Knowledge transfer, Random masking