Efficient Self-learning for Single Image Upsampling

Date issued

2014

Publisher

Václav Skala - UNION Agency

Abstract

Exploiting the similarity of patches across multiple resolution versions of an image is a common strategy for solving many vision problems. In particular, for image upsampling, a number of recent algorithms exploit patch repetitions within and across different scales of an image, combined with priors that preserve the scene structure of the reconstructed image. One such method, the self-learning algorithm [1], uses only a single image to achieve high magnification factors. However, as the image resolution increases, the number of patches in the dictionary grows dramatically, making the reconstruction computationally prohibitive. In this paper, we propose a method that removes the redundancies inherent in large self-learned dictionaries and upsamples an image without any regularization methods or priors, drastically reducing the time complexity. We further prove that any low-variance (low-detail) patch that does not find a match can be represented as a linear combination of only low-variance patches from the dictionary; the same principle applies to high-variance (high-detail) patches. Images at high scaling factors can thus be obtained without any regularization or prior information, and the result can be further regularized with the necessary prior(s) to refine the reconstruction.
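To illustrate the variance-class idea stated in the abstract, the following is a minimal sketch, not the authors' implementation: it partitions a patch dictionary into low- and high-variance atoms and approximates a query patch using only atoms from its own variance class. The threshold value, the least-squares coder, and all names are assumptions made for illustration.

```python
# Hypothetical sketch: reconstruct a patch from same-variance-class dictionary atoms.
import numpy as np

def split_by_variance(dictionary, threshold):
    """Split dictionary columns (patches) into low- and high-variance groups."""
    variances = dictionary.var(axis=0)
    return dictionary[:, variances < threshold], dictionary[:, variances >= threshold]

def reconstruct(patch, dictionary, threshold):
    """Approximate `patch` as a linear combination of atoms in its variance class."""
    low, high = split_by_variance(dictionary, threshold)
    atoms = low if patch.var() < threshold else high
    # Plain least-squares coefficients; a sparse coder (e.g. OMP) could be used instead.
    coeffs, *_ = np.linalg.lstsq(atoms, patch, rcond=None)
    return atoms @ coeffs

# Toy usage: 49-dimensional (7x7) patches, 500 dictionary atoms, arbitrary threshold.
rng = np.random.default_rng(0)
D = rng.standard_normal((49, 500))
p = rng.standard_normal(49)
approx = reconstruct(p, D, threshold=1.0)
print(np.linalg.norm(p - approx))
```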

Subject(s)

self-learning, image upsampling, super-resolution, dictionary learning

Citation

WSCG 2014: Full Papers Proceedings: 22nd International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision in co-operation with EUROGRAPHICS Association, p. 1-8.