Continuous transfer of neural network representational similarity for incremental learning.

Neurocomputing (2023)

Abstract
• Method: Pre-trained Model Knowledge Distillation (PMKD) for incremental learning.
• Feature representation knowledge is transferred via PMKD.
• PMKD combined with replay yields competitive performance in incremental learning.
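The highlights describe transferring feature-representation knowledge from a pre-trained teacher to a student network. A minimal sketch of a generic feature-level distillation loss (the function name and the mean-squared-error form are assumptions for illustration, not the paper's exact PMKD objective):

```python
import numpy as np

def feature_distillation_loss(student_feats, teacher_feats):
    """Mean-squared error between student features and frozen
    pre-trained teacher features — a common feature-level
    distillation objective (illustrative, not the paper's exact loss)."""
    s = np.asarray(student_feats, dtype=float)
    t = np.asarray(teacher_feats, dtype=float)
    return float(np.mean((s - t) ** 2))

# Toy check: identical features incur zero distillation loss.
print(feature_distillation_loss([[1.0, 2.0]], [[1.0, 2.0]]))  # → 0.0
```

In an incremental-learning loop this term would typically be added to the task loss on new-class data, alongside a replay loss on stored exemplars, to keep the student's representations close to the pre-trained teacher's.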
Keywords
Incremental learning, Pre-trained model, Knowledge distillation, Neural network representation