Prompting to Prompt for Rehearsal-Free Class Incremental Learning

Guangzhi Zhao, Yuting Hou, Kedian Mu

ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Abstract
Class incremental learning (CIL) aims to address catastrophic forgetting while continually learning new tasks. Recently, prompt tuning techniques based on vision transformers (ViT) have achieved promising results in rehearsal-free CIL. To alleviate forgetting, representative methods use a query-key mechanism to generate prompts and attach them to a frozen pre-trained ViT. However, these methods neglect the effect of the query, and the learning capacity of the model is limited by unsuitable prompts. In this paper, we propose a new approach called Prompting to Prompt (P2P). Instead of using a task-independent query function, we learn sample queries together with prompts in response to the shift of the data distribution in CIL. P2P better separates classes across tasks because the generated prompts are effective and more discriminative sample features can be extracted. Moreover, the whole training process is end-to-end, and queries are determined by the prompts themselves, which avoids additional parameters. P2P improves the plasticity of the model while maintaining good resistance to forgetting over long task sequences. Experiments show that our approach achieves state-of-the-art results with even fewer parameters.
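The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of the core idea it states: selecting prompts from a pool via query-key matching, where the query is learned jointly with the prompts rather than produced by a fixed, task-independent query function. Every name and size here (PromptToPromptPool, the pool dimensions, top-k cosine-similarity matching) is an illustrative assumption, not the authors' implementation.

    import torch
    import torch.nn as nn

    class PromptToPromptPool(nn.Module):
        """Illustrative prompt pool with learnable keys; the query is a
        feature produced with the current prompts attached, so queries and
        prompts are optimized together end-to-end (an assumption based on
        the abstract, not the paper's actual code)."""

        def __init__(self, pool_size=10, prompt_len=5, dim=768, top_k=3):
            super().__init__()
            self.keys = nn.Parameter(torch.randn(pool_size, dim))
            self.prompts = nn.Parameter(torch.randn(pool_size, prompt_len, dim))
            self.top_k = top_k

        def forward(self, query):
            # query: (B, dim), e.g. the [CLS] feature of a prompted ViT pass,
            # so gradients flow to both the query path and the prompts.
            sim = torch.cosine_similarity(
                query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)  # (B, pool_size)
            _, idx = sim.topk(self.top_k, dim=1)                     # (B, top_k)
            selected = self.prompts[idx]                             # (B, top_k, prompt_len, dim)
            # Flatten to (B, top_k * prompt_len, dim) for prepending to ViT tokens.
            return selected.flatten(1, 2), sim

    pool = PromptToPromptPool()
    cls_feat = torch.randn(4, 768)   # hypothetical prompted [CLS] features
    prompts, sim = pool(cls_feat)    # prompts: (4, 15, 768)

Under these assumptions, no separate query network is trained: the query is derived from prompted features themselves, which matches the abstract's claim that queries are determined by the prompts and that the approach avoids additional parameters.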
Keywords
Prompt, Catastrophic Forgetting, Class Incremental Learning, Vision Transformer