Large-scale Pre-trained Models are Surprisingly Strong in Incremental Novel Class Discovery
arXiv (2023)
Abstract
Discovering novel concepts in unlabelled datasets, and doing so in a continuous manner, is an important desideratum of lifelong learners. In the literature, such problems have been partially addressed under very restricted settings, where novel classes are learned by jointly accessing a related labelled set (e.g., NCD) or by leveraging only a model pre-trained with supervision (e.g., class-iNCD). In this work, we challenge the status quo in class-iNCD and propose a learning paradigm where class discovery occurs continuously and in a truly unsupervised fashion, without needing any related labelled set. Specifically, we propose to exploit the richer priors of strong self-supervised pre-trained models (PTMs). To this end, we propose simple baselines, composed of a frozen PTM backbone and a learnable linear classifier, that are not only simple to implement but also resilient under longer learning scenarios. We conduct extensive empirical evaluation on a multitude of benchmarks and show the effectiveness of our proposed baselines when compared with sophisticated state-of-the-art methods. The code is open source.
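As a concrete illustration of the baseline described above, the sketch below pairs a frozen self-supervised backbone with a single learnable linear classifier. It assumes DINO ViT-B/16 (loaded via torch.hub) as the PTM and a hypothetical number of novel classes; the abstract does not specify the actual backbone, class count, or unsupervised training objective, which in a class-iNCD setting would be driven by pseudo-labels or clustering rather than ground-truth labels.

```python
# Minimal sketch of a frozen-PTM-plus-linear-head baseline.
# Assumptions (not taken from the paper): DINO ViT-B/16 as the PTM,
# 10 novel classes, and plain SGD on the head.
import torch
import torch.nn as nn

# Load a self-supervised pre-trained backbone and freeze all its weights.
backbone = torch.hub.load("facebookresearch/dino:main", "dino_vitb16")
for p in backbone.parameters():
    p.requires_grad = False
backbone.eval()

num_novel_classes = 10           # hypothetical number of classes to discover
feat_dim = backbone.embed_dim    # 768 for ViT-B/16

# The only learnable component: a linear classifier over frozen features.
classifier = nn.Linear(feat_dim, num_novel_classes)
optimizer = torch.optim.SGD(classifier.parameters(), lr=0.1)

def forward(images: torch.Tensor) -> torch.Tensor:
    """Frozen feature extraction followed by a learnable linear head."""
    with torch.no_grad():          # the backbone is never updated
        feats = backbone(images)   # (B, feat_dim) CLS-token features
    return classifier(feats)       # (B, num_novel_classes) logits
```

Because only the linear head carries trainable parameters, adding a new discovery stage amounts to extending (or adding) a head while the frozen representation stays fixed, which is what makes such baselines resilient under longer incremental scenarios.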