Learning Tree Conditional Random Fields.

ICML'10: Proceedings of the 27th International Conference on Machine Learning (2010)

Abstract
We examine maximum spanning tree-based methods for learning the structure of tree Conditional Random Fields (CRFs) P(Y | X). We use edge weights which take advantage of local inputs X and thus scale to large problems. For a general class of edge weights, we give a negative learnability result. However, we demonstrate that two members of the class, local Conditional Mutual Information and Decomposable Conditional Influence, have reasonable theoretical bases and perform very well in practice. On synthetic data and a large-scale fMRI application, our methods outperform existing techniques.
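The abstract describes a pipeline of scoring candidate edges between output variables and keeping a maximum spanning tree over those scores. The Python sketch below is a simplified, hypothetical illustration of that pipeline only: it scores pairs of discrete outputs with plain empirical mutual information and extracts a tree with Prim's algorithm, whereas the paper's edge weights (local Conditional Mutual Information, Decomposable Conditional Influence) additionally condition on the local inputs X. The function names `empirical_mi`, `max_spanning_tree`, and `learn_tree_structure` are invented here for illustration.

```python
import numpy as np
from itertools import combinations

def empirical_mi(a, b):
    """Empirical mutual information I(a; b) between two discrete label vectors."""
    mi = 0.0
    for av in np.unique(a):
        for bv in np.unique(b):
            p_ab = np.mean((a == av) & (b == bv))
            if p_ab == 0.0:
                continue
            p_a = np.mean(a == av)
            p_b = np.mean(b == bv)
            mi += p_ab * np.log(p_ab / (p_a * p_b))
    return mi

def max_spanning_tree(weights):
    """Prim's algorithm on a dense symmetric weight matrix; returns a list of edges."""
    d = weights.shape[0]
    in_tree = {0}
    edges = []
    while len(in_tree) < d:
        best = None
        for i in in_tree:
            for j in range(d):
                if j in in_tree:
                    continue
                if best is None or weights[i, j] > weights[best]:
                    best = (i, j)
        edges.append(best)
        in_tree.add(best[1])
    return edges

def learn_tree_structure(Y):
    """Score every pair of output variables and keep the maximum spanning tree.

    Y is an (n_samples, d) array of discrete labels.  The edge weight used here
    is plain mutual information I(Y_i; Y_j); the paper's weights additionally
    condition on the local inputs X, which this sketch omits."""
    d = Y.shape[1]
    W = np.zeros((d, d))
    for i, j in combinations(range(d), 2):
        W[i, j] = W[j, i] = empirical_mi(Y[:, i], Y[:, j])
    return max_spanning_tree(W)

# Toy check: four binary outputs generated as a noisy chain 0 -> 1 -> 2 -> 3.
rng = np.random.default_rng(0)
n = 2000
Y = np.zeros((n, 4), dtype=int)
Y[:, 0] = rng.integers(0, 2, size=n)
for k in range(1, 4):
    flip = rng.random(n) < 0.1                 # 10% chance to flip the parent label
    Y[:, k] = np.where(flip, 1 - Y[:, k - 1], Y[:, k - 1])
print(learn_tree_structure(Y))                 # recovers the chain edges
```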