Deep Dimension Reduction for Supervised Representation Learning

IEEE Transactions on Information Theory (2024)

Abstract
The goal of supervised representation learning is to construct effective data representations for prediction. Sufficiency, low dimensionality, and disentanglement are among the most essential characteristics of an ideal nonparametric representation of high-dimensional complex data. We propose a deep dimension reduction approach to learning representations with these characteristics. The proposed approach is a nonparametric generalization of the sufficient dimension reduction method. We formulate the ideal representation learning task as that of finding a nonparametric representation that minimizes an objective function characterizing conditional independence and promoting disentanglement at the population level. We then estimate the target representation at the sample level nonparametrically using deep neural networks. We show that the estimated deep nonparametric representation is consistent in the sense that its excess risk converges to zero. Extensive numerical experiments on simulated and real benchmark data demonstrate that the proposed method outperforms several existing dimension reduction methods and standard deep learning models in classification and regression.
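The abstract and keywords indicate that the objective couples a sufficiency term based on distance covariance (measuring dependence between the learned representation and the response) with a disentanglement term based on an f-divergence. As an illustration only, below is a minimal PyTorch sketch of an empirical (V-statistic) squared distance covariance used as a training loss for a representation network. The encoder f_theta, the penalty weight lam, and the covariance-whitening penalty standing in for the paper's f-divergence disentanglement term are assumptions for this sketch, not the authors' implementation.

import torch

def _double_center(D):
    # Double-center a pairwise distance matrix: subtract row and column
    # means and add back the grand mean (V-statistic centering).
    return D - D.mean(dim=0, keepdim=True) - D.mean(dim=1, keepdim=True) + D.mean()

def dcov2(x, y):
    # Empirical squared distance covariance between batches x (n, p) and
    # y (n, q); larger values indicate stronger dependence.
    A = _double_center(torch.cdist(x, x))
    B = _double_center(torch.cdist(y, y))
    return (A * B).mean()

def representation_loss(z, y, lam=1.0):
    # Sufficiency term: encourage dependence between the representation
    # z = f_theta(x) and the response y by minimizing -dCov^2(z, y).
    sufficiency = -dcov2(z, y)
    # Disentanglement stand-in (assumption): push the components of z
    # toward zero mean and identity covariance; the paper instead
    # characterizes this term via an f-divergence.
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.T @ zc / (z.shape[0] - 1)
    eye = torch.eye(z.shape[1], device=z.device)
    disentangle = ((cov - eye) ** 2).sum()
    return sufficiency + lam * disentangle

# Hypothetical usage with an encoder f_theta mapping x_batch to an
# (n, d) representation; y must be a 2-D float tensor, e.g. one-hot
# labels for classification or responses reshaped to (n, 1):
#   z = f_theta(x_batch)
#   loss = representation_loss(z, y_batch.float().view(-1, 1))
#   loss.backward()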
Keywords
Dimensionality reduction, Representation learning, Estimation, Vectors, Linear programming, Data models, Covariance matrices, Conditional independence, distance covariance, f-divergence, nonparametric estimation, neural networks