Continual Representation Learning via Auto-Weighted Latent Embeddings on Person ReID

Tianjun Huang, Weiwei Qu, Jianguo Zhang

PATTERN RECOGNITION AND COMPUTER VISION, PT III (2021)

Abstract
Popular deep neural network models in artificial intelligence systems suffer from the catastrophic forgetting problem: when trained on a sequence of tasks, deep networks tend to achieve high performance only on the current task while losing performance on previously learned tasks. This issue is commonly addressed by continual learning (also called lifelong learning). The majority of existing continual learning approaches adopt a class-incremental strategy, which continuously expands the network structure. Representation learning, which only leverages the feature vector before the classification layer, is able to maintain the model capacity during continual learning. However, recent continual representation learning methods are not well evaluated on unseen classes. In this paper, we focus on the performance of continual representation learning on unseen classes and propose a novel auto-weighted latent embeddings method. For each task, autoencoders are trained to reconstruct feature maps from different levels of the neural network. The embeddings these autoencoders generate on the manifolds are constrained when learning a new task, so as to preserve the knowledge of previous tasks. An adapted auto-weighted approach assigns different levels of importance to the embeddings based on their reconstruction errors. Our experiments on three widely used person re-identification datasets expose the catastrophic forgetting problem for representation learning on unseen classes, and demonstrate that our proposed method outperforms other related methods in the continual representation learning setup.
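The auto-weighting idea in the abstract can be illustrated with a minimal sketch: each level's autoencoder yields a reconstruction error, levels with lower error receive higher weight, and the weighted distance between old-task and current embeddings serves as the knowledge-preservation penalty. The `1/(2*sqrt(e))` weighting form and the mean-squared embedding distance are assumptions for illustration; the paper's exact formulation may differ.

```python
import numpy as np

def auto_weights(recon_errors, eps=1e-8):
    """Assign a weight to each level's autoencoder embedding.

    Levels whose autoencoder reconstructs its feature map well (low
    error) receive larger weights. The 1/(2*sqrt(e)) form is an
    assumption borrowed from auto-weighted multi-view learning, not
    necessarily the paper's exact rule.
    """
    e = np.asarray(recon_errors, dtype=float)
    w = 1.0 / (2.0 * np.sqrt(e + eps))
    return w / w.sum()  # normalize so the weights sum to 1

def embedding_preservation_loss(old_embs, new_embs, weights):
    """Weighted sum of mean squared distances between embeddings
    produced by the frozen previous-task autoencoders (old_embs) and
    by the model being trained on the new task (new_embs)."""
    loss = 0.0
    for w, z_old, z_new in zip(weights, old_embs, new_embs):
        loss += w * np.mean((z_old - z_new) ** 2)
    return loss
```

With this sketch, a level whose autoencoder has the smallest reconstruction error dominates the penalty, so its embedding is preserved most strongly when the network adapts to a new task.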
Keywords
Continual learning, Representation learning, Person Re-identification