Ensemble Knowledge Distillation for Edge Intelligence in Medical Applications

Studies in Computational Intelligence (2023)

Abstract
The “bucket of models” ensemble technique, which selects the best model within each family and then the best model among families of a deep neural network (DNN) architecture, was applied to choose the best model for each separate medical problem. It builds on previously developed knowledge distillation (KD) approaches in which an ensemble of “student” DNNs is trained against a set of teacher variants sharing the same architecture. This bucket of student-teacher models was applied to data from quite different medical domains in order to check the feasibility of such approaches on practical medical datasets for Edge Intelligence devices with limited computational resources. Training and validation runs were performed on standard datasets such as CIFAR10/CIFAR100 and compared against specific medical datasets such as MedMNIST, which includes medical data of widely varying types and complexity. Several ResNet DNN architecture families were used, and they demonstrated varying performance on the standard CIFAR10/CIFAR100 and medical MedMNIST datasets. As a result, no relationship was found between CIFAR10/CIFAR100 performance and MedMNIST performance; the choice of model family affects performance more than the choice of a model within a family; a significant boost in performance can be obtained with smaller models; and some powerful CIFAR10/CIFAR100 architectures proved unnecessarily large for MedMNIST and can be made more parameter-efficient (smaller) without a significant drop in performance. Future research will explore more specific combinations of distillation approaches, taking into account finer hyperparameter tuning, data augmentation procedures, and their additional influence on the original MedMNIST data.
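The abstract does not give implementation details, but the distillation it refers to is commonly realized as the standard Hinton-style KD loss, and the “bucket of models” selection reduces to a two-level argmax over validation scores. The PyTorch sketch below illustrates both under those assumptions; the function names (distillation_loss, pick_best_model) and the evaluate callback are hypothetical illustrations, not taken from the paper.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Hinton-style KD loss: a weighted sum of the KL divergence between
    temperature-softened teacher/student distributions and the ordinary
    cross-entropy on the hard labels."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_student = F.log_softmax(student_logits / temperature, dim=1)
    # The T^2 factor keeps soft-target gradients on the same scale
    # as the hard-label gradients.
    kd = F.kl_div(log_student, soft_targets,
                  reduction="batchmean") * temperature ** 2
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce

def pick_best_model(buckets, evaluate):
    """'Bucket of models' selection: keep the best model within each
    family, then pick the best among families. `buckets` maps a family
    name to its trained models; `evaluate` returns a validation score
    (higher is better)."""
    best_per_family = {name: max(models, key=evaluate)
                       for name, models in buckets.items()}
    best_name = max(best_per_family,
                    key=lambda n: evaluate(best_per_family[n]))
    return best_name, best_per_family[best_name]
```

In this framing, each trained student (distilled from its family's teacher variants) would be placed in its family's bucket, and pick_best_model would return the per-problem winner described in the abstract.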
Keywords
ensemble knowledge distillation, edge intelligence