Sparse Regression Faster than d^ω

arXiv (2021)

Abstract
The current complexity of regression is nearly linear in the complexity of matrix multiplication/inversion. Here we show that algorithms for 2-norm regression, i.e., standard linear regression, as well as p-norm regression (for 1 < p < ∞) can be improved to go below the matrix multiplication threshold for sufficiently sparse matrices. We also show that for some values of p, the dependence on dimension in input-sparsity time algorithms can be improved beyond d^ω for tall-and-thin row-sparse matrices.
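
To make the problem statement concrete, below is a minimal illustrative sketch of the regression problems the abstract refers to; it is not the paper's algorithm. The matrix dimensions, sparsity density, and the choice of SciPy's iterative least-squares solver are assumptions for illustration only. For p = 2 this is ordinary least squares; for general 1 < p < ∞ the objective is min_x ||Ax − b||_p.

```python
# Illustrative setup only (not the paper's algorithm): the regression
# problems discussed in the abstract, stated on a tall-and-thin,
# row-sparse matrix. Sizes and density are arbitrary assumptions.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Tall-and-thin, row-sparse input: n >> d, few nonzeros per row.
n, d = 10_000, 50
A = sp.random(n, d, density=0.05, random_state=rng, format="csr")
b = rng.standard_normal(n)

# 2-norm regression (standard linear regression): min_x ||Ax - b||_2,
# solved here with a sparse iterative solver that only touches nonzeros.
x2 = lsqr(A, b)[0]

# p-norm residual for a candidate x: the quantity that p-norm
# regression (1 < p < inf) seeks to minimize.
def pnorm_residual(A, b, x, p):
    return np.linalg.norm(A @ x - b, ord=p)

print(pnorm_residual(A, b, x2, p=2))
```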