Incorporating Handcrafted Filters in Convolutional Analysis Operator Learning for Ill-Posed Inverse Problems

2019 IEEE 8th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP)

Abstract
Convolutional analysis operator learning (CAOL) enables unsupervised training of convolutional sparsifying autoencoders, taking advantage of large datasets to obtain high-quality filters. In previous work, using CAOL within model-based image reconstruction (MBIR) for ill-posed inverse problems significantly improved reconstruction accuracy over existing MBIR with non-trained regularizers and generalized better than existing non-MBIR deep neural network approaches. This paper modifies the CAOL Procrustes filter update to allow some filters to be handcrafted. Doing so makes it possible to incorporate domain knowledge into the learning process and accelerates CAOL by learning fewer filters. We apply the proposed generalization of CAOL to MBIR for sparse-view CT. Numerical experiments show that 1) handcrafting discrete cosine transform filters can trade off training time against reconstruction quality, and 2) handcrafting filters based on finite differences can speed up training without sacrificing reconstruction quality.
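To make the idea concrete, below is a minimal NumPy sketch of a Procrustes-style filter update in which a block of handcrafted filters (here, finite differences) is held fixed while the remaining filters are learned. The helper names (`finite_diff_filters`, `update_filters`), the subproblem matrix `B`, and the choice to confine the learned filters to the orthogonal complement of the handcrafted span are illustrative assumptions, not the paper's exact derivation; only the tight-frame constraint D D^H = (1/R) I and the SVD-based Procrustes solution are carried over from the CAOL framework.

```python
import numpy as np

def finite_diff_filters(r):
    """Horizontal and vertical first-difference filters, zero-padded to r x r."""
    h = np.zeros((r, r)); h[0, 0], h[0, 1] = -1.0, 1.0
    v = np.zeros((r, r)); v[0, 0], v[1, 0] = -1.0, 1.0
    return np.stack([h, v])

def update_filters(B, D_hand, R):
    """Procrustes-style filter update with a fixed handcrafted block (a sketch).

    B      : (R, K_l) matrix defining the subproblem for the K_l learned filters
             (assumed given by the outer CAOL iteration).
    D_hand : (R, K_h) fixed handcrafted filters as columns, assumed orthogonal
             with norm 1/sqrt(R) so the full bank can stay a tight frame.
    Returns the concatenated bank [D_hand, D_learned] with D D^H = (1/R) I
    when K_h + K_l = R.
    """
    K_h = D_hand.shape[1]
    # Orthonormal basis N for the orthogonal complement of span(D_hand),
    # so the learned filters cannot overlap the handcrafted ones.
    U_full = np.linalg.svd(D_hand, full_matrices=True)[0]
    N = U_full[:, K_h:]
    # Reduced orthogonal Procrustes problem in complement coordinates.
    U, _, Vh = np.linalg.svd(N.conj().T @ B, full_matrices=False)
    D_learn = (1.0 / np.sqrt(R)) * (N @ U @ Vh)
    return np.hstack([D_hand, D_learn])

# Toy usage: 3x3 filters (R = 9), 2 handcrafted finite-difference filters, 7 learned.
r = 3; R = r * r
D_hand = finite_diff_filters(r).reshape(2, R).T        # (9, 2), filters as columns
D_hand = np.linalg.qr(D_hand)[0] / np.sqrt(R)          # orthonormalize, scale norms to 1/sqrt(R)
B = np.random.randn(R, R - D_hand.shape[1])            # stand-in for the subproblem data
D = update_filters(B, D_hand, R)
assert np.allclose(D @ D.T, np.eye(R) / R)             # tight-frame check: D D^T = (1/R) I
```

Learning fewer filters shrinks the SVD in the update, which is consistent with the claimed training speedup; swapping `finite_diff_filters` for a partial 2D DCT basis would give the paper's other handcrafted variant.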
Keywords
convolutional analysis operator learning, unsupervised training, convolutional sparsifying autoencoders, model-based image reconstruction, ill-posed inverse problems, learning process, non-MBIR deep neural network, CAOL Procrustes filter, CAOL generalization, discrete cosine transform filters, sparse-view CT