Bayesian anti-sparse coding

IEEE Transactions on Signal Processing (2017)

Abstract
Sparse representations have proven their efficiency in solving a wide class of inverse problems encountered in signal and image processing. Conversely, enforcing the information to be spread uniformly over representation coefficients exhibits relevant properties in various applications, such as robust encoding in digital communications. Anti-sparse regularization can be naturally expressed through an $\ell_\infty$-norm penalty. This paper derives a probabilistic formulation of such representations. A new probability distribution, referred to as the democratic prior, is first introduced. Its main properties, as well as three random variate generators for this distribution, are derived. This distribution is then used as a prior to promote anti-sparsity in a Gaussian linear model, yielding a fully Bayesian formulation of anti-sparse coding. Two Markov chain Monte Carlo algorithms are proposed to generate samples from the posterior distribution. The first is a standard Gibbs sampler. The second uses Metropolis–Hastings moves that exploit the proximity mapping of the log-posterior distribution. These samples are used to approximate maximum a posteriori and minimum mean square error estimators of both parameters and hyperparameters. Simulations on synthetic data illustrate the performance of the two proposed samplers for both complete and over-complete dictionaries. All results are compared with the recent deterministic variational FITRA algorithm.
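
The abstract mentions three random variate generators for the democratic prior without detailing them. As a minimal sketch, assuming the prior has density $p(x) \propto \exp(-\lambda \|x\|_\infty)$ on $\mathbb{R}^n$ (the form suggested by the $\ell_\infty$-norm penalty), one direct generator follows from a radial decomposition: the radius $r = \|x\|_\infty$ is Gamma(n, rate $\lambda$) distributed, and given $r$ the sample is uniform on the $\ell_\infty$ sphere of radius $r$, i.e., one coordinate is pinned to $\pm r$ and the rest are uniform on $[-r, r]$. The function name sample_democratic below is illustrative, not taken from the paper.

import numpy as np

def sample_democratic(n, lam, rng=None):
    # Draw one sample from p(x) proportional to exp(-lam * ||x||_inf) on R^n.
    # Radial decomposition: r = ||x||_inf ~ Gamma(shape=n, rate=lam); given r,
    # x is uniform on the l_inf sphere of radius r (the 2n cube faces have
    # equal area, so pick one face uniformly and pin that coordinate to +/- r).
    rng = np.random.default_rng() if rng is None else rng
    r = rng.gamma(shape=n, scale=1.0 / lam)  # NumPy parameterizes by scale = 1/rate
    x = rng.uniform(-r, r, size=n)           # free coordinates inside the cube
    j = rng.integers(n)                      # pick one of the n coordinates
    x[j] = r * rng.choice([-1.0, 1.0])       # pin coordinate j to +/- r
    return x

# Sanity check: E[||x||_inf] = n / lam, so with n = 3 and lam = 2.0 the
# empirical mean below should be close to 1.5.
draws = np.array([sample_democratic(3, 2.0) for _ in range(100_000)])
print(np.abs(draws).max(axis=1).mean())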
Keywords
Bayes methods, Image coding, Encoding, Monte Carlo methods, Approximation algorithms, Signal processing algorithms, Standards