Sparsifying Generalized Linear Models

STOC 2024: Proceedings of the 56th Annual ACM Symposium on Theory of Computing (2024)

Abstract
We consider the sparsification of sums F : ℝ^n → ℝ of the form F(x) = f_1(⟨a_1, x⟩) + ⋯ + f_m(⟨a_m, x⟩), for vectors a_1, …, a_m ∈ ℝ^n and functions f_1, …, f_m : ℝ → ℝ_+. We show that (1+ε)-approximate sparsifiers of F with support size (n/ε^2)(log(n/ε))^O(1) exist whenever the functions f_1, …, f_m are symmetric, monotone, and satisfy natural growth bounds. Additionally, we give efficient algorithms to compute such a sparsifier assuming each f_i can be evaluated efficiently. Our results generalize the classic case of ℓ_p sparsification, where f_i(z) = |z|^p for p ∈ (0, 2], and give the first near-linear-size sparsifiers in the well-studied setting of the Huber loss function and its generalizations, e.g., f_i(z) = min{|z|^p, |z|^2} for 0 < p ≤ 2. Our sparsification algorithm can be applied to give near-optimal reductions for optimizing a variety of generalized linear models, including ℓ_p regression for p ∈ (1, 2] to high accuracy, via solving (log n)^O(1) sparse regression instances with m ≤ n (log n)^O(1), plus runtime proportional to the number of nonzero entries in the vectors a_1, …, a_m.
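To make the objects in the abstract concrete, here is a minimal Python sketch (not from the paper) of the dense objective F and the form a sparsifier takes: a reweighted subset S of the m terms. The uniform sample and weights below are placeholders for illustration only; the paper's (1+ε) guarantee requires its importance-sampling scheme, which is not reproduced here.

```python
import numpy as np

def huber_like(z, p=1.0):
    # One of the loss families covered by the paper: f(z) = min{|z|^p, |z|^2}, 0 < p <= 2.
    return np.minimum(np.abs(z) ** p, np.abs(z) ** 2)

def F(x, A, f):
    # Dense objective F(x) = f(<a_1, x>) + ... + f(<a_m, x>), rows of A are the a_i.
    return float(np.sum(f(A @ x)))

def F_tilde(x, A, f, support, weights):
    # A sparsifier keeps a reweighted subset S of the m terms:
    # F~(x) = sum_{i in S} w_i * f(<a_i, x>), with |S| = (n/eps^2) * polylog(n/eps).
    return float(np.sum(weights * f(A[support] @ x)))

# Illustrative use with a placeholder uniform sample; the actual support and
# weights would come from the paper's sampling algorithm.
rng = np.random.default_rng(0)
m, n, s = 1000, 10, 200
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)
support = rng.choice(m, size=s, replace=False)
weights = np.full(s, m / s)  # uniform reweighting, for illustration only
print(F(x, A, huber_like), F_tilde(x, A, huber_like, support, weights))
```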