Weakly Convex Regularisers for Inverse Problems: Convergence of Critical Points and Primal-Dual Optimisation
CoRR (2024)
Abstract
Variational regularisation is the primary method for solving inverse
problems, and recently there has been considerable work leveraging deeply
learned regularisation for enhanced performance. However, few results exist
addressing the convergence of such regularisation, particularly within the
context of critical points as opposed to global minima. In this paper, we
present a generalised formulation of convergent regularisation in terms of
critical points, and show that this is achieved by a class of weakly convex
regularisers. We prove convergence of the primal-dual hybrid gradient method
for the associated variational problem, and, given a Kurdyka–Łojasiewicz
condition, an 𝒪(log k / k) ergodic convergence rate. Finally,
applying this theory to learned regularisation, we prove universal
approximation for input weakly convex neural networks (IWCNN), and show
empirically that IWCNNs can lead to improved performance of learned adversarial
regularisers for computed tomography (CT) reconstruction.
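The primal-dual hybrid gradient (PDHG) method analysed in the paper can be illustrated on a standard toy problem. The sketch below is not the paper's method; it is a minimal PDHG (Chambolle–Pock) iteration for 1-D total-variation denoising, min over x of ½‖x − b‖² + λ‖Dx‖₁, where D is the forward-difference operator. All variable names and the specific problem are illustrative assumptions.

```python
import numpy as np

# Hypothetical toy example (not from the paper): PDHG for 1-D
# total-variation denoising  min_x 0.5*||x - b||^2 + lam*||Dx||_1.
def pdhg_tv_denoise(b, lam=1.0, n_iter=500):
    n = len(b)
    # Forward differences: (Dx)_i = x_{i+1} - x_i, and its adjoint D^T.
    D = lambda x: np.diff(x)
    Dt = lambda y: np.concatenate(([-y[0]], -np.diff(y), [y[-1]]))
    L2 = 4.0                          # upper bound on ||D||^2 in 1-D
    tau = sigma = 1.0 / np.sqrt(L2)   # step sizes: tau*sigma*||D||^2 <= 1
    x = b.copy()
    x_bar = x.copy()
    y = np.zeros(n - 1)
    for _ in range(n_iter):
        # Dual step: prox of f*(y) is projection onto the l_inf ball of radius lam.
        y = np.clip(y + sigma * D(x_bar), -lam, lam)
        x_old = x
        # Primal step: prox of g(x) = 0.5*||x - b||^2 is a weighted average.
        x = (x - tau * Dt(y) + tau * b) / (1.0 + tau)
        # Over-relaxation of the primal variable.
        x_bar = 2 * x - x_old
    return x
```

With a convex data term this iteration converges to a global minimiser; the paper's contribution is extending such convergence guarantees, in the sense of critical points, to weakly convex regularisers such as those parameterised by an IWCNN.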