EnCoDe: Enhancing Compressed Deep Learning Models Through Feature Distillation and Informative Sample Selection

Rebati Raman Gaire, Sepehr Tabrizchi, Arman Roohi

International Conference on Machine Learning and Applications (2023)

Abstract
This paper presents EnCoDe, a novel technique that merges active learning, model compression, and knowledge distillation to optimize deep learning models. The method tackles issues that commonly impede compressed models' performance, such as loss of generalization, resource intensity, and data redundancy. It actively selects valuable samples for labeling, improving the student model's performance while economizing on labeled data and computational resources. EnCoDe's utility is empirically validated on the SVHN and CIFAR-10 datasets, demonstrating improved model compactness, enhanced generalization, reduced computational complexity, and reduced labeling effort. In our evaluations on compressed versions of the VGG11 and AlexNet models, EnCoDe consistently outperforms the baselines even when trained with only 60% of the total training samples. It thus establishes an effective framework for enhancing the accuracy and generalization of compressed models, which is especially beneficial in settings with limited resources and scarce labeled data.
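The abstract does not spell out the exact loss or sampling criterion, but the two ingredients it names map onto well-known building blocks. Below is a minimal, hypothetical PyTorch sketch assuming standard Hinton-style knowledge distillation (temperature T, mixing weight alpha) for the teacher-to-student transfer and entropy-based uncertainty sampling for the active-learning step; all function names and hyperparameters are illustrative, not taken from the paper.

# Hypothetical sketch of the two components the abstract describes:
# (1) a distillation loss between a teacher and a compressed student, and
# (2) an uncertainty criterion for selecting informative samples to label.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Hinton-style KD: KL between temperature-softened teacher and student
    # distributions, mixed with the usual hard-label cross-entropy.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

def select_informative(student_logits, budget):
    # Entropy-based active learning: return indices of the `budget` unlabeled
    # samples the student is most uncertain about, as candidates for labeling.
    probs = F.softmax(student_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.topk(budget).indices

In a training loop of this shape, each round would score the unlabeled pool with select_informative, query labels for the chosen samples, and then update the compressed student with distillation_loss against the frozen teacher; the 60%-of-samples result in the abstract corresponds to capping the total labeling budget.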