Lifelong Event Detection with Embedding Space Separation and Compaction
arXiv (2024)
Abstract
To mitigate forgetting, existing lifelong event detection methods typically
maintain a memory module and replay the stored memory data during the learning
of a new task. However, the simple combination of memory data and new-task
samples can still result in substantial forgetting of previously acquired
knowledge, which may occur due to the potential overlap between the feature
distribution of new data and the previously learned embedding space. Moreover,
the model tends to overfit the few stored memory samples rather than
effectively remembering learned patterns. To address the challenges of
forgetting and overfitting, we propose a novel method based on embedding space
separation and compaction. Our method alleviates forgetting of previously
learned tasks by forcing the feature distribution of new data away from the
previous embedding space. It also mitigates overfitting via a memory
calibration mechanism that encourages each memory sample to stay close to its
class prototype, enhancing intra-class compactness. In addition, the learnable parameters of the new task
are initialized by drawing upon acquired knowledge from the previously learned
task to facilitate forward knowledge transfer. With extensive experiments, we
demonstrate that our method can significantly outperform previous
state-of-the-art approaches.
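The two objectives described above can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the use of L2 distance, and the hinge-style margin are all assumptions made for the example.

```python
import numpy as np

def separation_loss(new_feats, old_prototypes, margin=1.0):
    """Hypothetical separation term: push new-task features away from the
    prototypes of previously learned classes (assumed L2 distance with a
    hinge margin; the paper's actual loss may differ)."""
    loss = 0.0
    for f in new_feats:
        # Distance from this new-task feature to every old-class prototype.
        d = np.linalg.norm(old_prototypes - f, axis=1)
        # Penalize features that fall inside the margin around old prototypes.
        loss += np.maximum(margin - d, 0.0).sum()
    return loss / len(new_feats)

def compaction_loss(mem_feats, mem_labels, prototypes):
    """Hypothetical memory-calibration term: pull each stored memory feature
    toward its own class prototype to tighten intra-class clusters."""
    diffs = mem_feats - prototypes[mem_labels]
    return float((diffs ** 2).sum(axis=1).mean())
```

In this sketch, a feature well separated from all old prototypes contributes zero separation loss, and memory features that coincide with their prototypes contribute zero compaction loss; training would minimize a weighted sum of both terms alongside the task loss.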