Optimal solutions to the isotonic regression problem.

Alexander I. Jordan, Anja Mühlemann, Johanna F. Ziegel

arXiv: Statistics Theory (2019)

Abstract
In general, the solution to a regression problem is the minimizer of a given loss criterion, and as such depends on the specified loss function. The non-parametric isotonic regression problem is special, in that optimal solutions can be found by solely specifying a functional. These solutions will then be minimizers under all loss functions simultaneously, as long as the loss functions have the requested functional as the Bayes act. The functional may be set-valued. The only requirement is that it can be defined via an identification function, with examples including the expectation, quantile, and expectile functionals. Generalizing classical results, we characterize the optimal solutions to the isotonic regression problem for such functionals in the case of totally and partially ordered explanatory variables. For total orders, we show that any solution resulting from the pool-adjacent-violators (PAV) algorithm is optimal. It is noteworthy that simultaneous optimality is unattainable in the unimodal regression problem, despite its close connection.
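For intuition, the classical special case of the PAV algorithm mentioned in the abstract — least-squares isotonic regression for the expectation functional over a total order — can be sketched as follows. This is a minimal illustration of pooling adjacent violators, not the paper's general set-valued construction via identification functions:

```python
def pav(y):
    """Least-squares isotonic regression via pool-adjacent-violators.

    Returns the nondecreasing fit minimizing sum_i (f_i - y_i)^2,
    i.e. the expectation-functional case over a totally ordered index.
    """
    # Each block stores [sum of pooled values, count]; all points pooled
    # into one block share the block mean as their fitted value.
    blocks = []
    for v in y:
        blocks.append([v, 1])
        # Pool while the previous block's mean exceeds the last block's
        # mean (monotonicity violation); compare via cross-multiplication.
        while (len(blocks) > 1
               and blocks[-2][0] * blocks[-1][1] > blocks[-1][0] * blocks[-2][1]):
            total, count = blocks.pop()
            blocks[-1][0] += total
            blocks[-1][1] += count
    # Expand blocks back into a fitted value per observation.
    fit = []
    for total, count in blocks:
        fit.extend([total / count] * count)
    return fit


print(pav([3, 1, 2]))  # [2.0, 2.0, 2.0]
print(pav([1, 3, 2]))  # [1.0, 2.5, 2.5]
```

Replacing the block mean with a block quantile or expectile yields the corresponding functional's PAV solution, which is the setting the paper generalizes.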