Patch-dictionary method for whole-image recovery

Background

Various algorithms have been proposed for dictionary learning, such as K-SVD^{[1]} and the online dictionary learning method^{[2]}. Among those for image processing, many use image patches to form dictionaries; see ^{[3]}, for example, which uses a patch dictionary for image denoising.

We propose a new dictionary learning method based on the accelerated alternating proximal gradient method. We address a simple yet open issue regarding whole-image recovery: the large number of overlapping patches leads to a large number of free coefficients in the recovery, which can cause overfitting and slow computation. This issue has limited many patch-based methods to the “local” or “nearly local” kinds of image processing tasks, such as denoising, inpainting, deblurring, super-resolution, and compressive sensing in which the measurements encode the image patch by patch. For these tasks, one or a few patches can be processed or updated at a time, so the overfitting issue does not arise. We consider the more difficult “global” kind of task, such as compressive sensing and medical image recovery, where each measurement encodes the whole image, and thus it is either impossible or very ineffective to process one or a few patches at a time.

Notation and our method

We shall recover an image $\mathbf{M}$ from its corrupted measurements $\mathbf{b}=\mathcal{A}(\mathbf{M})+\boldsymbol{\xi}$, where $\mathcal{A}$ denotes a linear operator and $\boldsymbol{\xi}$ is noise. Depending on the application, $\mathcal{A}$ can take different forms.
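As a concrete illustration of this notation (not taken from the paper or the released code), $\mathcal{A}$ could be a random projection applied to the vectorized image, so that every measurement encodes the whole image. A minimal MATLAB sketch, with all sizes chosen arbitrarily:

  % Illustrative measurement model b = A(M) + xi (not from the released code).
  n  = 64;                        % image side length
  M  = rand(n);                   % stand-in test image
  m  = round(0.3*n^2);            % number of measurements
  A  = randn(m, n^2)/sqrt(m);     % each row of A encodes the WHOLE image
  xi = 0.01*randn(m, 1);          % additive noise
  b  = A*M(:) + xi;               % corrupted measurements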

Let $\mathcal{R}_{ij}(\mathbf{M})$ denote the $(i,j)$-th patch ($\mathcal{R}_{ij}$ is a linear operator) and assume it has an (approximately) sparse representation under a given dictionary $\mathbf{D}$, i.e., $\mathcal{R}_{ij}(\mathbf{M})=\mathbf{D}\mathbf{y}_{ij}$, where $\mathbf{y}_{ij}$ is an (approximately) sparse vector. When a set $P$ of patches together covers every pixel of $\mathbf{M}$, we can represent $\mathbf{M}$ by

 \mathbf{M}=(\mathcal{T}_P)^{-1}\sum_{(i,j)\in P}\mathcal{R}_{ij}^\top(\mathbf{D}\mathbf{y}_{ij}),\quad \mathcal{T}_P:=\sum_{(i,j)\in P}\mathcal{R}_{ij}^\top\mathcal{R}_{ij},

where $\mathcal{R}_{ij}^\top$ is the adjoint of $\mathcal{R}_{ij}$, and $(\mathcal{T}_P)^{-1}$ is an operator that averages the overlapping patches.
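The operators $\mathcal{R}_{ij}$, $\mathcal{R}_{ij}^\top$, and $(\mathcal{T}_P)^{-1}$ can be mimicked with ordinary array indexing. The following MATLAB sketch (illustrative only, not the authors' implementation) extracts all overlapping patches of a test image and verifies that averaging their replaced copies recovers the image:

  % R_ij extracts a p-by-p patch at (i,j); R_ij' puts it back (zeros elsewhere).
  % T_P = sum of R_ij'*R_ij is diagonal: it counts how often each pixel is covered.
  n = 32;  p = 8;  M = rand(n);
  rec = zeros(n);  cnt = zeros(n);        % accumulators for sum R_ij'*(patch) and T_P
  for i = 1:n-p+1                         % all overlapping patches (stride 1)
      for j = 1:n-p+1
          patch = M(i:i+p-1, j:j+p-1);    % R_ij(M); in the model this is D*y_ij
          rec(i:i+p-1, j:j+p-1) = rec(i:i+p-1, j:j+p-1) + patch;   % R_ij' action
          cnt(i:i+p-1, j:j+p-1) = cnt(i:i+p-1, j:j+p-1) + 1;       % diagonal of T_P
      end
  end
  Mrec = rec ./ cnt;                      % (T_P)^{-1}: average the overlaps
  fprintf('max error = %g\n', max(abs(Mrec(:) - M(:))));   % ~ machine precision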

Now we take $P$ as a set of covering but non-overlapping patches. Then $\mathcal{T}_P=\mathcal{I}$, the identity operator. Constructing such a $P$ is equivalent to partitioning $\mathbf{M}$ into non-overlapping blocks, such as the following two partitions (a small sketch of one such partition is given after the figure).

[Figure: two example partitions of an image into non-overlapping patches]
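A minimal MATLAB sketch of such a partition (assuming, for simplicity, that the patch size divides the image size) checks that every pixel is covered exactly once, i.e., $\mathcal{T}_P=\mathcal{I}$:

  % Non-overlapping partition: stride = patch size, so T_P is the identity.
  n = 32;  p = 8;                          % assume p divides n for simplicity
  cnt = zeros(n);
  for i = 1:p:n
      for j = 1:p:n
          cnt(i:i+p-1, j:j+p-1) = cnt(i:i+p-1, j:j+p-1) + 1;
      end
  end
  disp(isequal(cnt, ones(n)));             % prints 1: every pixel covered exactly once
  % A second arrangement can be obtained by shifting the grid, e.g. by p/2 pixels
  % in each direction, with the boundary patches handled separately.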

To recover $\mathbf{M}$ from $\mathbf{b}$, we solve the following model

 \mathop{\mathrm{minimize}}_{\mathbf{y}}\;\sum_{(i,j)\in P}\|\mathbf{w}_{ij}\odot\mathbf{y}_{ij}\|_1+\frac{1}{2\nu}\Big\|\mathcal{A}\mathcal{T}_P^{-1}\Big(\sum_{(i,j)\in P}\mathcal{R}_{ij}^\top(\mathbf{D}\mathbf{y}_{ij})\Big)-\mathbf{b}\Big\|_2^2,\qquad\qquad(*)

where $\mathbf{w}_{ij}$ is a vector of weights. The model (*) can be solved by many convex optimization methods; we use YALL1 in our numerical experiments.
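Purely to illustrate the structure of (*) in the non-overlapping case ($\mathcal{T}_P=\mathcal{I}$), the sketch below stacks the coefficients $\mathbf{y}_{ij}$ into one vector and runs a basic ISTA-type proximal-gradient iteration; this is not the solver used in our experiments, and the dictionary, weights, step size, and iteration count are placeholders:

  % ISTA-type sketch of (*) with non-overlapping patches (so T_P is the identity).
  % NOT the solver used in our experiments (we use YALL1); sizes and data are placeholders.
  n = 32;  p = 8;  K = 64;  nu = 1e-3;
  D = randn(p^2, K);  D = D ./ sqrt(sum(D.^2, 1));   % stand-in dictionary, unit-norm atoms
  m = round(0.4*n^2);  A = randn(m, n^2)/sqrt(m);    % "global" sensing operator
  Mtrue = rand(n);  b = A*Mtrue(:);                  % synthetic measurements for the demo

  % Build B: it maps the stacked coefficients y = [y_11; y_21; ...] to the vectorized
  % image, i.e., B*y = sum_ij R_ij'*(D*y_ij) for the non-overlapping partition.
  idx = reshape(1:n^2, n, n);
  B = sparse(n^2, 0);
  for j = 1:p:n
      for i = 1:p:n
          rows = reshape(idx(i:i+p-1, j:j+p-1), [], 1);      % pixel indices of patch (i,j)
          B = [B, sparse(rows, (1:p^2)', 1, n^2, p^2) * D];  % append the block R_ij'*D
      end
  end

  w = ones(size(B,2), 1);                    % uniform weights w_ij
  y = zeros(size(B,2), 1);                   % zero initialization
  t = nu / norm(full(A*B))^2;                % step size 1/L for the smooth term
  for k = 1:500
      g = (1/nu) * (B' * (A' * (A*(B*y) - b)));   % gradient of the least-squares term
      z = y - t*g;
      y = sign(z) .* max(abs(z) - t*w, 0);        % soft-thresholding: prox of weighted l1
  end
  Mrec = reshape(B*y, n, n);                 % recovered image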

When a set $P$ of non-overlapping patches is used in (*), the solution sometimes exhibits a grid artifact. An effective strategy to avoid this artifact is to solve multiple instances of (*) with $P$'s that arrange their patches in different ways, like the two examples above (we use up to five different arrangements), and then take the average of the solutions; see the sketch below. Of course, the different instances of (*) can be solved in parallel.
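The averaging step itself is straightforward. The sketch below reuses the variables from the previous sketch and assumes a hypothetical helper solve_model(offset, A, b, D, nu) that builds the partition shifted by the given offset and solves (*) for that one arrangement:

  % Average the recoveries from several shifted non-overlapping partitions.
  % solve_model is a HYPOTHETICAL helper: it builds the shifted partition, sets up (*),
  % and returns the recovered image for that arrangement.
  offsets = [0 0; 4 0; 0 4; 4 4; 2 2];      % up to five grid shifts (illustrative values)
  Mavg = zeros(n);
  parfor k = 1:size(offsets, 1)             % instances are independent; a plain for loop also works
      Mavg = Mavg + solve_model(offsets(k,:), A, b, D, nu);
  end
  Mavg = Mavg / size(offsets, 1);           % averaged recovery, fewer grid artifacts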

It is important to use non-overlapping patches since this limits the number of free variables in (*) and improves solution quality (despite the possible grid artifact). If all the overlapping patches are used, there will be far more free variables.
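To make the count concrete (an illustrative count, not a figure from the paper): for a $256\times 256$ image with $8\times 8$ patches and a dictionary of $256$ atoms, a non-overlapping partition has $(256/8)^2 = 1{,}024$ patches, hence about $2.6\times 10^5$ coefficients in $\mathbf{y}$, whereas using all overlapping patches gives $(256-8+1)^2 = 62{,}001$ patches and roughly $1.6\times 10^7$ coefficients, about $60$ times as many.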

Selected numerical results


Matlab codes

Download the code

Citation

Y. Xu and W. Yin. A fast patch-dictionary method for whole-image recovery. Inverse Problems and Imaging, 10(2), 563–583, 2016.

References

^{[1]}. M. Aharon, M. Elad, and A. Bruckstein. K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation, IEEE Transactions on Signal Processing, 54(2006), pp. 4311–4322.

^{[2]}. J. Mairal, F. Bach, J. Ponce, and G. Sapiro. Online dictionary learning for sparse coding, in Proceedings of the 26th Annual International Conference on Machine Learning, ACM, 2009, pp. 689–696.

^{[3]}. M. Elad and M. Aharon. Image denoising via sparse and redundant representations over learned dictionaries, IEEE Transactions on Image Processing, 15(2006), pp. 3736–3745.