Discriminant Feature Extraction by Generalized Difference Subspace

IEEE Transactions on Pattern Analysis and Machine Intelligence (2023)

Abstract
In this paper, we reveal the discriminant capacity of orthogonal data projection onto the generalized difference subspace (GDS), both theoretically and experimentally. In our previous work, we demonstrated that the GDS projection works as a quasi-orthogonalization of class subspaces, which is an effective feature extraction step for subspace-based classifiers. Here, we further show that GDS projection also works as a discriminant feature extraction, through a mechanism similar to that of Fisher discriminant analysis (FDA). A direct proof of the connection between GDS projection and FDA is difficult due to the significant difference in their formulations. To circumvent this complication, we first introduce geometrical Fisher discriminant analysis (gFDA), based on a simplified Fisher criterion. It is derived from a heuristic yet practically plausible assumption: the direction of the sample mean vector of a class is largely aligned with the first principal component vector of that class, provided that principal component analysis (PCA) is applied without data centering. gFDA works stably even with few samples, bypassing the small sample size (SSS) problem of FDA. We then prove that gFDA is equivalent to GDS projection with a small correction term. This equivalence ensures that GDS projection inherits the discriminant ability of FDA via gFDA. Furthermore, we discuss two useful extensions of these methods: 1) a nonlinear extension via the kernel trick, and 2) a combination with CNN features. The equivalence and the effectiveness of the extensions have been verified through extensive experiments on the extended Yale B+, CMU face database, ALOI, ETH80, MNIST, and CIFAR10, mainly focusing on image recognition with small samples.
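As a rough illustration of the two ideas in the abstract (not code from the paper), the following NumPy sketch first checks the heuristic that the first principal component of uncentered PCA is largely aligned with the class mean direction, and then builds a difference subspace in the spirit of GDS by summing class projection matrices and discarding the leading eigenvectors. The toy data, function names, subspace dimensions, and the number of discarded directions are all illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy classes offset from the origin, so the class means are informative.
X1 = rng.normal(0.0, 0.3, (50, 5)) + np.array([2.0, 1.0, 0.0, 0.0, 0.0])
X2 = rng.normal(0.0, 0.3, (50, 5)) + np.array([0.0, 2.0, 1.0, 0.0, 0.0])

def uncentered_basis(X, dim):
    # PCA without data centering: right singular vectors of the raw data
    # matrix, i.e., eigenvectors of the autocorrelation matrix X^T X.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:dim].T                      # (d, dim) orthonormal basis

# Heuristic check: the first uncentered PC is largely aligned with the mean.
for X in (X1, X2):
    mean_dir = X.mean(axis=0) / np.linalg.norm(X.mean(axis=0))
    pc1 = uncentered_basis(X, 1)[:, 0]
    print(abs(mean_dir @ pc1))             # close to 1.0 -> nearly aligned

# GDS-style construction: sum the class projection matrices, then keep the
# eigenvectors with the smallest eigenvalues (the "difference" directions),
# discarding the leading ones that span the common principal subspace.
bases = [uncentered_basis(X, 2) for X in (X1, X2)]
G = sum(U @ U.T for U in bases)
eigvals, eigvecs = np.linalg.eigh(G)       # ascending eigenvalue order
num_discard = 1                            # illustrative choice
D = eigvecs[:, : eigvecs.shape[1] - num_discard]

# Projecting data onto D quasi-orthogonalizes the class subspaces.
Z1, Z2 = X1 @ D, X2 @ D

With the offset toy classes above, the printed alignments come out close to 1, consistent with the assumption motivating gFDA; how to choose the class subspace dimensions and the number of discarded leading eigenvectors is a matter of the paper's formulation rather than the fixed values used here.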
Keywords
Discriminant analysis, Fisher criterion, subspace representation, PCA without data centering