Learning Filter Bank Sparsifying Transforms
Data is said to follow the transform (or analysis) sparsity model if it becomes sparse when acted on by a linear operator called a sparsifying transform. Several algorithms have been designed to learn such a transform directly from data, and data-adaptive sparsifying transforms have demonstrated excellent performance in signal restoration tasks. Sparsifying transforms are typically learned on small sub-regions of the data called patches, but these algorithms often ignore the redundant information shared between neighboring patches. We show that many existing transform and analysis sparse representations can be viewed as filter banks, thus linking the local properties of patch-based models to the global properties of a convolutional model. We propose a new transform learning framework in which the sparsifying transform is an undecimated perfect-reconstruction filter bank. Unlike previous transform learning algorithms, the filter length can be chosen independently of the number of filter bank channels. Numerical results indicate that filter bank sparsifying transforms outperform existing patch-based transform learning approaches for image denoising while benefiting from additional flexibility in the design process.
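The patch/filter-bank connection mentioned above can be checked numerically: applying a square transform to every overlapping patch of an image produces the same coefficients as an undecimated filter bank whose channel filters are the (reshaped) rows of the transform. The sketch below is not the paper's algorithm, only a minimal illustration of this equivalence assuming NumPy/SciPy; the array names and sizes are hypothetical.

```python
import numpy as np
from scipy.signal import correlate2d

rng = np.random.default_rng(0)
p = 8                                  # patch size (p x p patches), hypothetical
W = rng.standard_normal((p * p, p * p))  # stand-in for a learned square transform
x = rng.standard_normal((64, 64))        # stand-in test image

# Patch-based view: vectorize every overlapping (stride-1) patch and apply W.
H, Wd = x.shape
patches = np.stack([
    x[i:i + p, j:j + p].ravel()
    for i in range(H - p + 1)
    for j in range(Wd - p + 1)
], axis=1)                              # shape (p*p, num_patches)
coeffs_patch = W @ patches              # one coefficient column per patch

# Filter bank view: each row of W, reshaped to p x p, is one channel's filter;
# 'valid'-mode cross-correlation yields the same coefficients channel by channel.
coeffs_fb = np.stack([
    correlate2d(x, W[k].reshape(p, p), mode="valid").ravel()
    for k in range(p * p)
], axis=0)                              # shape (p*p, num_patches)

assert np.allclose(coeffs_patch, coeffs_fb)
```

In this toy check the number of channels equals the number of pixels per patch (p*p); the framework proposed in the paper relaxes exactly this coupling, letting the filter length be chosen independently of the channel count.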