Sparse Bayesian Inference with Regularized Gaussian Distributions

arXiv (2023)

Abstract
Regularization is a common tool in variational inverse problems to impose assumptions on the parameters of the problem. One such assumption is sparsity, which is commonly promoted using lasso and total variation-like regularization. Although the solutions to many such regularized inverse problems can be interpreted as maximum a posteriori (MAP) estimates of well-chosen posterior distributions, samples from these distributions are generally not sparse. In this paper, we present a framework for implicitly defining a probability distribution that combines the effects of sparsity-imposing regularization with Gaussian distributions. Unlike continuous distributions, these implicit distributions can assign positive probability to sparse vectors. We study these regularized distributions for various regularization functions, including total variation regularization and piecewise linear convex functions. We apply the developed theory to uncertainty quantification for Bayesian linear inverse problems and derive a Gibbs sampler for a Bayesian hierarchical model. To illustrate the difference between our sparsity-inducing framework and continuous distributions, we apply the framework to small-scale deblurring and computed tomography examples.
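The following is a minimal sketch, not the paper's exact construction, of the core contrast the abstract describes: samples from a continuous Gaussian distribution are almost never exactly sparse, whereas pushing Gaussian samples through a sparsity-imposing proximal map (here, soft-thresholding, the proximal operator of the lasso penalty) produces samples with exact zeros. The threshold `lam` and the standard-normal setup are illustrative assumptions.

```python
# Illustrative sketch: combining Gaussian samples with lasso-type regularization
# via the soft-thresholding proximal operator. This is an assumed construction
# for illustration, not necessarily the paper's definition of the regularized
# Gaussian distribution.
import numpy as np

rng = np.random.default_rng(0)

def soft_threshold(x, lam):
    """Proximal operator of lam * ||x||_1 (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

n, lam = 5, 1.0
gaussian_samples = rng.normal(size=(1000, n))        # draws from N(0, I)
regularized_samples = soft_threshold(gaussian_samples, lam)

# Gaussian samples have exactly-zero entries with probability zero, while the
# regularized samples assign positive probability to zero coordinates,
# mirroring the sparsity of lasso-type MAP estimates.
print("zero entries (Gaussian):   ", np.mean(gaussian_samples == 0.0))
print("zero entries (regularized):", np.mean(regularized_samples == 0.0))
```

In this sketch the regularized samples are exactly sparse because the proximal map sends a set of positive Gaussian probability to zero; the paper develops this kind of implicitly defined distribution for more general regularizers, such as total variation and piecewise linear convex functions.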
Keywords
sparse Bayesian inference, regularized Gaussian distributions