Feature selection in machine learning: an exact penalty approach using a Difference of Convex function Algorithm

Machine Learning (2014)

Abstract
We develop an exact penalty approach for feature selection in machine learning via the zero-norm (ℓ₀) regularization problem. Using a new result on exact penalty techniques, we equivalently reformulate the original problem as a Difference of Convex (DC) functions program. This approach permits us to treat all the existing convex and nonconvex approximations of the zero-norm in a unified view within the DC programming and DCA framework. An efficient DCA scheme is investigated for the resulting DC program. The algorithm is implemented for feature selection in SVM; it requires solving one linear program at each iteration and enjoys interesting convergence properties. We perform an empirical comparison with some nonconvex approximation approaches, and show on several datasets from the UCI repository and the NIPS 2003 Feature Selection Challenge that the proposed algorithm is efficient in both feature selection and classification.
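The abstract's DCA scheme, one linear program per iteration for a linear SVM with a nonconvex zero-norm surrogate, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes an exponential concave approximation of the zero-norm (pen(t) ≈ 1 − e^(−αt)), so each DCA step reduces to a reweighted-ℓ₁ hinge-loss LP solved with `scipy.optimize.linprog`; the toy data, `alpha`, and `C` are all illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

def dca_l0_svm(X, y, alpha=5.0, C=1.0, n_iter=5):
    """DCA-style sketch for zero-norm feature selection in a linear SVM.

    Each iteration solves ONE linear program (hinge loss + weighted L1),
    then updates the weights from the gradient of the concave part of the
    exponential surrogate of the zero-norm: lam_j = alpha*exp(-alpha*|w_j|).
    """
    n, d = X.shape
    lam = np.full(d, alpha)                     # initial L1 weights
    w = np.zeros(d)
    for _ in range(n_iter):
        # LP variables: [w+ (d), w- (d), b+, b-, xi (n)], all >= 0
        c = np.concatenate([lam, lam, [0.0, 0.0], C * np.ones(n)])
        # margin constraints y_i (x_i . w + b) + xi_i >= 1, rewritten as
        # -y_i x_i . (w+ - w-) - y_i (b+ - b-) - xi_i <= -1
        Yx = y[:, None] * X
        A_ub = np.hstack([-Yx, Yx, -y[:, None], y[:, None], -np.eye(n)])
        b_ub = -np.ones(n)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (2 * d + 2 + n), method="highs")
        z = res.x
        w = z[:d] - z[d:2 * d]
        lam = alpha * np.exp(-alpha * np.abs(w))  # DCA reweighting step
    return w

# Toy data: only the first k of d features carry signal (illustrative).
rng = np.random.default_rng(0)
n, d, k = 80, 20, 3
X = rng.normal(size=(n, d))
w_true = np.zeros(d)
w_true[:k] = 2.0
y = np.sign(X @ w_true + 0.1 * rng.normal(size=n))

w = dca_l0_svm(X, y)
selected = np.flatnonzero(np.abs(w) > 1e-6)
print(len(selected), "features selected out of", d)
```

The reweighting step is what makes this a DC algorithm rather than a plain ℓ₁-SVM: features with large |w_j| see their penalty shrink toward zero, while inactive features keep a strong penalty, driving a sparse solution over the iterations.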
Keywords
Zero-norm, Feature selection, Exact penalty, DC programming, DCA