A Unified Weight Learning Paradigm for Multi-View Learning

Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), Vol. 89, 2019

Abstract
Learning a set of weights to combine views linearly forms a series of popular schemes in multi-view learning. Three weight learning paradigms, i.e., Norm Regularization (NR), Exponential Decay (ED), and p-th Root Loss (pRL), are widely used in the literature, yet their mutual relations and limiting behaviors are not well understood. In this paper, we present a Unified Paradigm (UP) that contains the aforementioned three popular paradigms as special cases. Specifically, we extend the domain of the NR hyper-parameter from positive to real numbers and show that this extension bridges NR, ED, and pRL. Besides, we provide a detailed discussion of the sparsity of the learned weights, hyper-parameter setting, and the counterintuitive limiting behavior of these paradigms. Furthermore, we show the generality of our technique with examples in Multi-Task Learning and Fuzzy Clustering. Our results may provide insights for better understanding existing algorithms and inspire research on new weight learning schemes. Numerical results support our theoretical analysis.
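To make the three paradigms named above concrete, the sketch below shows how per-view weights are commonly derived from per-view losses under NR, ED, and pRL in the multi-view learning literature. The hyper-parameter names (r, gamma, p) and the specific closed forms are assumptions for illustration and may differ from the paper's exact formulation.

```python
# Illustrative closed-form weight updates for the three paradigms,
# given per-view losses L_v (assumed positive). This is a hedged sketch,
# not the paper's exact algorithm.
import numpy as np

def weights_norm_regularization(losses, r=2.0):
    # NR (assumed form): min_w sum_v w_v^r * L_v  s.t.  sum_v w_v = 1, w_v >= 0,
    # whose stationarity condition gives w_v proportional to L_v^(1/(1-r)) for r > 1.
    w = np.power(losses, 1.0 / (1.0 - r))
    return w / w.sum()

def weights_exponential_decay(losses, gamma=1.0):
    # ED (assumed form): w_v proportional to exp(-L_v / gamma),
    # i.e., a softmax over negated losses with temperature gamma.
    w = np.exp(-np.asarray(losses) / gamma)
    return w / w.sum()

def weights_pth_root_loss(losses, p=2.0):
    # pRL (assumed form): minimizing sum_v L_v^(1/p) implicitly weights each view
    # by L_v^(1/p - 1); for p > 1, views with smaller loss receive larger weight.
    w = np.power(losses, 1.0 / p - 1.0)
    return w / w.sum()

losses = np.array([0.8, 0.4, 0.2])          # example per-view losses
print(weights_norm_regularization(losses))  # NR weights
print(weights_exponential_decay(losses))    # ED weights
print(weights_pth_root_loss(losses))        # pRL weights
```

Note how all three rules assign larger weights to views with smaller losses; the paper's unified view can be read as placing such update rules on a common hyper-parameter axis.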