Learning truly monotone operators with applications to nonlinear inverse problems
arXiv (2024)
Abstract
This article introduces a novel approach to learning monotone neural networks
through a newly defined penalization loss. The proposed method is particularly
effective in solving classes of variational problems, specifically monotone
inclusion problems, commonly encountered in image processing tasks. The
Forward-Backward-Forward (FBF) algorithm is employed to address these problems,
offering a solution even when the Lipschitz constant of the neural network is
unknown. Notably, the FBF algorithm provides convergence guarantees under the
condition that the learned operator is monotone. Building on plug-and-play
methodologies, our objective is to apply these newly learned operators to
solving non-linear inverse problems. To achieve this, we initially formulate
the problem as a variational inclusion problem. Subsequently, we train a
monotone neural network to approximate an operator that may not inherently be
monotone. Leveraging the FBF algorithm, we then show simulation examples where
the non-linear inverse problem is successfully solved.
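The abstract's key algorithmic claim is that the FBF (Tseng's Forward-Backward-Forward) scheme converges for monotone inclusions even when the operator's Lipschitz constant is unknown, via a backtracking step size. A minimal sketch of that idea follows, assuming a simple model inclusion 0 ∈ B(x) + N_C(x) with B monotone and Lipschitz and C a box constraint (so the resolvent is a projection); the function names and parameters here are illustrative, not the paper's implementation.

```python
import numpy as np

def project_box(x, lo=-1.0, hi=1.0):
    """Resolvent of the normal cone of the box [lo, hi]^n: a projection."""
    return np.clip(x, lo, hi)

def fbf(B, x0, n_iter=500, gamma=1.0, delta=0.9, theta=0.5):
    """Sketch of Tseng's FBF iteration with backtracking line search,
    so no Lipschitz constant of B is needed up front."""
    x = x0.astype(float)
    for _ in range(n_iter):
        Bx = B(x)
        # shrink gamma until  gamma * ||B(y) - B(x)|| <= delta * ||y - x||
        while True:
            y = project_box(x - gamma * Bx)   # forward then backward step
            By = B(y)
            if (gamma * np.linalg.norm(By - Bx)
                    <= delta * np.linalg.norm(y - x)) or np.allclose(y, x):
                break
            gamma *= theta
        # second (correcting) forward step
        x = y - gamma * (By - Bx)
    return x

# B(x) = M x with M skew-symmetric is monotone but not cocoercive, the
# regime where plain forward-backward fails but FBF still converges (to 0).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = fbf(lambda x: M @ x, np.array([0.8, -0.5]))
```

The skew-symmetric example illustrates why the extra forward step matters: ordinary forward-backward splitting requires cocoercivity of B, which a rotation operator lacks, while FBF only needs monotonicity plus (locally estimated) Lipschitz continuity.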