D3K: Dynastic Data-Free Knowledge Distillation

IEEE Transactions on Multimedia (2023)

Abstract
Data-free knowledge distillation broadens the range of settings in which knowledge distillation can be applied. Nevertheless, the problem of generating diverse data with rich expression patterns still needs further exploration. In this paper, a novel dynastic data-free knowledge distillation (D3K) model is proposed to alleviate this problem. In this model, a dynastic supernet generator (D-SG) with a flexible network structure is proposed to generate diverse data. The D-SG can adaptively alter its architectural configuration and activate different subnet generators in different sequential iteration spaces. The variable network structure increases the complexity and capacity of the generator and strengthens its ability to generate diversified data. In addition, a novel additive constraint based on a differentiable dhash (D-Dhash) is designed to guide the structure-parameter selection of the D-SG. This constraint forces the D-SG to repeatedly break out of a fixed generation mode and produce data that are diverse at both the semantic and instance levels. The effectiveness of the proposed model is verified on the benchmark datasets MNIST, CIFAR-10, CIFAR-100, and SVHN.
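The abstract does not give the exact formulation of the D-Dhash constraint, so the following PyTorch sketch only illustrates the general idea: compute a differentiable relaxation of the classical dHash (difference hash) for each generated image and penalize pairwise similarity of the resulting soft hash codes within a batch. The function names (soft_dhash, dhash_diversity_loss), the sigmoid relaxation, and the loss form are assumptions for illustration, not the paper's actual definitions.

```python
import torch
import torch.nn.functional as F

def soft_dhash(images, hash_size=8, temperature=10.0):
    """Differentiable dHash sketch (assumed formulation, not the paper's D-Dhash).

    images: (B, C, H, W) tensor with values in [0, 1].
    Classical dHash downsamples a grayscale image to (hash_size+1) x hash_size,
    then thresholds differences of horizontally adjacent pixels; here the hard
    threshold is replaced by a sigmoid so gradients can flow to the generator.
    """
    gray = images.mean(dim=1, keepdim=True)                     # crude grayscale
    small = F.interpolate(gray, size=(hash_size, hash_size + 1),
                          mode="bilinear", align_corners=False)
    diff = small[..., 1:] - small[..., :-1]                     # adjacent-column differences
    return torch.sigmoid(temperature * diff).flatten(1)         # soft bits in (0, 1), shape (B, hash_size**2)

def dhash_diversity_loss(images):
    """Average pairwise soft-Hamming similarity of dHash codes in a batch (B >= 2).

    Minimizing this term pushes generated samples toward distinct hash codes,
    i.e., away from a collapsed, repetitive generation mode.
    """
    h = soft_dhash(images)
    sim = (h @ h.t() + (1 - h) @ (1 - h).t()) / h.size(1)       # soft bit-agreement in [0, 1]
    off_diag = sim - torch.diag_embed(torch.diagonal(sim))      # drop self-similarity
    return off_diag.sum() / (h.size(0) * (h.size(0) - 1))
```

In a data-free distillation loop, such a term would typically be added to the generator's objective alongside the usual inversion or adversarial losses, so that successive batches of synthetic images are encouraged to occupy different hash codes rather than repeating one pattern.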
Keywords
Generators, Data models, Knowledge engineering, Training data, Neural networks, Training, Task analysis, Dhash, dynastic network, image classification, generated data diversity, knowledge distillation