AI-KD: Towards Alignment Invariant Face Image Quality Assessment Using Knowledge Distillation
CoRR (2024)
Abstract
Face Image Quality Assessment (FIQA) techniques have seen steady improvements
over recent years, but their performance still deteriorates if the input face
samples are not properly aligned. This alignment sensitivity comes from the
fact that most FIQA techniques are trained or designed using a specific face
alignment procedure. If the alignment technique changes, the performance of
most existing FIQA techniques quickly becomes suboptimal. To address this
problem, we present in this paper a novel knowledge distillation approach,
termed AI-KD, that can be applied on top of any existing FIQA technique,
improving its robustness to alignment variations and, in turn, its
performance under different alignment procedures. To validate the proposed
alignment procedures. To validate the proposed distillation approach, we
conduct comprehensive experiments on 6 face datasets with 4 recent face
recognition models and in comparison to 7 state-of-the-art FIQA techniques. Our
results show that AI-KD consistently improves the performance of the initial
FIQA techniques, not only on misaligned samples but also on properly aligned
facial images. Furthermore, it leads to a new state of the art when used with
a competitive initial FIQA approach. The code for AI-KD is publicly
available at: https://github.com/LSIbabnikz/AI-KD.
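The core idea described above can be illustrated with a toy sketch: a frozen "teacher" FIQA model scores properly aligned faces, while a "student" is trained to reproduce those scores from misaligned versions of the same images. The abstract does not describe the authors' actual training procedure, so everything below (the linear models, the circular-shift misalignment, the loss) is a hypothetical minimal example of distillation-for-alignment-invariance, not the AI-KD implementation.

```python
# Minimal, illustrative sketch of alignment-invariant distillation.
# All model and function names here are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # toy "face" images are DIM x DIM patches

def teacher_fiqa(aligned_img):
    # Frozen teacher: stand-in for any pretrained FIQA scorer.
    w = np.linspace(0.0, 1.0, aligned_img.size)
    return float(aligned_img.ravel() @ w) / aligned_img.size

def misalign(img, shift_rng):
    # Simulate an imperfect alignment via a small random circular shift.
    dy, dx = shift_rng.integers(-2, 3, size=2)
    return np.roll(img, (dy, dx), axis=(0, 1))

images = [rng.random((DIM, DIM)) for _ in range(64)]
targets = [teacher_fiqa(img) for img in images]  # teacher sees aligned faces

def student_score(w, img):
    # Student: a linear scorer trained by distillation.
    return float(img.ravel() @ w) / img.size

def mean_abs_err(w, seed):
    # Quality-score gap between student (misaligned input) and teacher.
    s = np.random.default_rng(seed)
    return float(np.mean([abs(student_score(w, misalign(i, s)) - t)
                          for i, t in zip(images, targets)]))

w = np.zeros(DIM * DIM)          # student starts untrained
err_before = mean_abs_err(w, 1)

lr = 0.5
train_rng = np.random.default_rng(2)
for _ in range(200):
    for img, target in zip(images, targets):
        x = misalign(img, train_rng).ravel()  # student sees misaligned input
        err = float(x @ w) / x.size - target
        w -= lr * 2.0 * err * x / x.size      # gradient of squared error

err_after = mean_abs_err(w, 1)  # distillation shrinks the alignment gap
```

The point of the sketch is the asymmetry between teacher and student inputs: because the student only ever sees perturbed alignments but is supervised with scores computed on clean alignments, it is pushed toward quality estimates that are invariant to the perturbation.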