
Untangling Lariats: Subgradient Following of Variationally Penalized Objectives

Kai-Chia Mo, Shai Shalev-Shwartz, Nisæl Shártov

CoRR (2024)

Abstract
We describe a novel subgradient following apparatus for calculating the optimum of convex problems with variational penalties. In this setting, we receive a sequence y_1,…,y_n and seek a smooth sequence x_1,…,x_n. The smooth sequence attains the minimum Bregman divergence to the input sequence with additive variational penalties of the general form ∑_i g_i(x_{i+1}-x_i). We derive, as special cases of our apparatus, known algorithms for the fused lasso and isotonic regression. Our approach also facilitates new variational penalties such as non-smooth barrier functions. We next derive and analyze multivariate problems in which 𝐱_i,𝐲_i∈ℝ^d and the variational penalties depend on 𝐱_{i+1}-𝐱_i. The norms we consider are ℓ_2 and ℓ_∞, which promote group sparsity. Last but not least, we derive a lattice-based subgradient following for variational penalties characterized through the output of arbitrary convolutional filters. This paradigm yields efficient solvers for problems in which sparse high-order discrete derivatives such as acceleration and jerk are desirable.
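As an illustration of the objective described in the abstract, the sketch below writes out the scalar fused-lasso special case, assuming the squared-Euclidean Bregman divergence and the penalty g_i(z) = λ|z|. It is not the paper's subgradient-following apparatus; it solves the same objective with the generic convex solver cvxpy, and the data and value of λ are hypothetical.

import numpy as np
import cvxpy as cp

# Noisy piecewise-constant input sequence y_1, ..., y_n (synthetic, for illustration).
rng = np.random.default_rng(0)
n = 200
y = np.repeat([0.0, 4.0, 1.0, 6.0], n // 4) + rng.standard_normal(n)

lam = 2.0                      # weight of the variational penalty (hypothetical choice)
x = cp.Variable(n)
# 0.5 * sum_i (x_i - y_i)^2  +  lam * sum_i |x_{i+1} - x_i|
objective = 0.5 * cp.sum_squares(x - y) + lam * cp.sum(cp.abs(cp.diff(x)))
cp.Problem(cp.Minimize(objective)).solve()

x_hat = x.value                # piecewise-constant (fused-lasso) estimate of the sequence

Replacing the absolute-value penalty with other convex g_i, or the scalar difference with a norm of 𝐱_{i+1}-𝐱_i, recovers the other problem families mentioned in the abstract.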