Robust And Fast Super Resolution
What is Super Resolution?
Super resolution is the process of combining multiple noisy, blurred images, obtained from a low-resolution (and usually cheap) camera, into a single higher-resolution image with greater detail than could be obtained from any single frame.
The goals of this project
The goal of this project was to implement and evaluate a method for super-resolution estimation. This method should be robust and fast. By robust we mean that it is as resilient as possible to outliers. Another goal was to present a simple graphical user interface which assists the user in evaluating the super-resolution algorithm and helps them choose the different parameters.
More details on the algorithm and its implementation
First, one should know about the limitations of any super-resolution algorithm. Super-resolution cannot be achieved when no aliasing exists in the images. The great paradox in super-resolution is that the better the camera, the lower the chances that SR techniques will recover additional information. This is due to the Nyquist theorem, which in short states that if a signal is sampled at a frequency more than double its highest frequency, the samples contain all the information in the signal, and it can (theoretically) be perfectly reconstructed. In other words, if the camera used to obtain the low resolution images is a good camera with a good anti-aliasing filter, we shouldn't expect too much from the SR algorithm.
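To see the Nyquist argument concretely, the toy sketch below (a NumPy illustration, not part of the project's Matlab code) samples a 3 Hz sinusoid at two rates. Above the Nyquist rate the spectrum peaks at the true frequency; below it, the energy aliases down to a lower frequency and the original signal cannot be recovered from the samples.

```python
import numpy as np

def dominant_freq(f_signal, f_sample, duration=4.0):
    # Sample a pure sinusoid and return the frequency of the largest
    # peak in its one-sided spectrum.
    n = int(duration * f_sample)
    t = np.arange(n) / f_sample
    x = np.sin(2 * np.pi * f_signal * t)
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(n, d=1.0 / f_sample)
    return freqs[np.argmax(spectrum)]

# Sampling a 3 Hz signal at 10 Hz (above Nyquist) finds the true frequency,
print(dominant_freq(3.0, 10.0))  # -> ~3.0
# while sampling it at 4 Hz (below Nyquist) aliases it down to 1 Hz.
print(dominant_freq(3.0, 4.0))   # -> ~1.0
```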
The main idea behind any spatially based SR algorithm is to take multiple LR (low resolution) noisy and blurred images, obtained from different (sub-pixel) spatial locations, and estimate an HR (high resolution) image which minimizes some cost of the difference between the projection of this HR image onto the coordinates of each LR image and the LR image itself. Usually this problem is ill-posed, and additional prior information (such as smoothness of the image) is needed to obtain a stable solution.
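In the common notation, each LR frame is modeled as Y_k = D H F_k X + V_k, where X is the HR image, F_k the motion, H the blur, D the decimation, and V_k the noise. The NumPy sketch below is an illustration only (the circular shift and 2x2 box blur are stand-ins for the real motion and PSF operators); it generates LR frames from an HR image under this model:

```python
import numpy as np

def simulate_lr_frame(hr, dy, dx, factor=2, noise_std=0.02, rng=None):
    # One LR frame: translational motion F_k (circular shift), blur H
    # (2x2 box blur as a stand-in PSF), decimation D, additive noise V_k.
    rng = np.random.default_rng(0) if rng is None else rng
    shifted = np.roll(np.roll(hr, dy, axis=0), dx, axis=1)
    blurred = (shifted
               + np.roll(shifted, 1, axis=0)
               + np.roll(shifted, 1, axis=1)
               + np.roll(np.roll(shifted, 1, axis=0), 1, axis=1)) / 4.0
    lr = blurred[::factor, ::factor]
    return lr + rng.normal(0.0, noise_std, lr.shape)

hr = np.kron(np.eye(8), np.ones((4, 4)))  # toy 32x32 "scene"
frames = [simulate_lr_frame(hr, dy, dx) for dy in (0, 1) for dx in (0, 1)]
print(frames[0].shape)  # (16, 16)
```

The SR estimation problem is exactly the inverse of this simulation: recover `hr` given only `frames` and the shifts.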
In this work we implemented the two methods for robust super resolution suggested in the work of Farsiu, Robinson, Elad, and Milanfar: "Fast and Robust Multiframe Super Resolution". In that paper, an L1 norm is suggested as the cost of the estimation process. This cost function is proven to be robust and has a breakdown point of 50%. In addition, the authors suggest a regularization term based on the bilateral filter which encourages a piecewise-smooth image, thus producing an edge-preserving algorithm.
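A quick way to see why the L1 cost is robust: for scalar data, the L2 minimizer is the mean while the L1 minimizer is the median, and the median tolerates up to half the samples being outliers (the 50% breakdown point quoted above). A small NumPy illustration:

```python
import numpy as np

# One gross outlier drags the mean (L2 estimate) far from the truth,
# but barely moves the median (L1 estimate).
samples = np.array([1.0, 1.1, 0.9, 1.05, 0.95, 100.0])  # last value is an outlier
print(np.mean(samples))    # 17.5   -- ruined by the outlier
print(np.median(samples))  # 1.025  -- essentially unaffected
```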
The SR algorithm assumes that the geometric motion between all images is known. For this to be true, all images must be registered to each other. In this work, we implemented a registration algorithm based on the pyramidal Lucas-Kanade optical flow algorithm as presented in the work of Jean-Yves Bouguet. The algorithm presented in Bouguet's paper was adapted to estimate the translation between entire images instead of small windows; thus our registration algorithm should be used only for images with pure translational motion between them. This restriction is, in any case, the basis for the fast implementation of the SR algorithm in Farsiu's work.
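For intuition, a single-level, whole-image variant of the Lucas-Kanade translation estimate can be sketched in a few lines. This is a NumPy illustration under simplifying assumptions (circular shifts, integer warps between refinements); the project's Matlab implementation is pyramidal and handles borders and sub-pixel warping more carefully.

```python
import numpy as np

def estimate_translation(ref, img, iters=10):
    # Whole-image, single-level Lucas-Kanade: iteratively solve the 2x2
    # normal equations for a global (dy, dx) translation aligning img to ref.
    ref = ref.astype(float)
    img = img.astype(float)
    dy = dx = 0.0
    for _ in range(iters):
        # Undo the current (rounded) estimate, then refine on the residual.
        warped = np.roll(np.roll(img, -int(round(dy)), axis=0),
                         -int(round(dx)), axis=1)
        gy, gx = np.gradient(warped)
        err = ref - warped
        A = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                      [np.sum(gx * gy), np.sum(gy * gy)]])
        b = np.array([np.sum(gx * err), np.sum(gy * err)])
        ddx, ddy = np.linalg.solve(A, b)
        dx += ddx
        dy += ddy
    return dy, dx

# Recover a known shift of a smooth synthetic image.
yy, xx = np.mgrid[0:64, 0:64]
ref = np.exp(-((xx - 32.0) ** 2 + (yy - 32.0) ** 2) / 80.0)
img = np.roll(np.roll(ref, 2, axis=0), 3, axis=1)
dy, dx = estimate_translation(ref, img)  # dy ~ 2, dx ~ 3
```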
First you should download the Matlab code from here.
Unzip the file and open a Matlab session in the destination folder. From the Matlab console, run SRDemo.m.
The following figure shows the window presented to the user:
In the GUI, there are two image boxes. The left one shows a single frame from the low resolution image sequence. The right image box shows the high resolution result obtained from the SR algorithm.
The "Load Movie" button allows the user to select an LR image sequence and load it into the GUI. The image sequence is represented as a 3D array where the third dimension is the frame index and the first two dimensions are the size of the LR image. Most of the image sequences tested in the project were taken from the MDSP web page. Others exist there and may be downloaded as well. Additional sequences were generated synthetically, and several data files are included with this project under the DATA directory.
Under the LR image box, the "Show Next" and "Show Previous" buttons allow the user to scan through the LR image sequence and view the different frames. "Save LR" allows the user to save the currently viewed low resolution image to a file.
The "Register Movie" button runs the Lucas-Kanade optical flow algorithm and registers all frames to the reference frame, which is the first image in the sequence.
In the "SR Estimation Type" box, the user can select between the different implementations of the SR algorithm. Note that the "Robust SR (II-D)" choice is disabled until one of the other algorithms has been run, because this implementation needs an initial estimate of the HR image; the output of the previous run of the SR algorithm is used as its input estimate. You can use either a simple cubic-spline interpolation or the output of the "Fast Robust SR (II-E)" algorithm as the initial estimate.
The cubic-spline method simply interpolates the reference image using interp2.
The "Robust SR (II-D)" method is discussed in detail in section II-D of Farsiu's paper.
The "Fast Robust SR (II-E)" method is discussed in section II-E and is a faster version of "Robust SR (II-D)" which works only when the motion and blurring operations commute.
The "Registration Type" box is currently disabled. The idea is to enable an affine motion estimator, which would require a more complex implementation of the SR algorithms and was outside the scope of this project. Note that the attached Matlab code does contain an implementation of an extension to the LK optical flow algorithm which estimates affine motion; hence, only the SR implementations need to be updated should affine motion be required in the future.
The "Super Resolution Parameters" box holds all the parameters needed for the SR algorithm. The "Robust SR (II-D)" algorithm solves the following minimization (as formulated in Farsiu's paper):

X_hat = argmin_X [ sum_{k=1..N} ||D H F_k X - Y_k||_1 + lambda * sum_{l=-P..P} sum_{m=-P..P} alpha^(|l|+|m|) ||X - S_x^l S_y^m X||_1 ]

where Y_k are the LR frames, X is the HR image, F_k is the motion operator, H is the blur (PSF) operator, D is the decimation operator, S_x^l and S_y^m shift the image by l and m pixels, lambda is the regularization weight, alpha (0 < alpha < 1) is the bilateral decay factor, and P is the bilateral window size.

The above term is solved using steepest descent (SD) iterations:

X_hat_{n+1} = X_hat_n - beta * { sum_{k=1..N} F_k^T H^T D^T sign(D H F_k X_hat_n - Y_k) + lambda * sum_{l,m} alpha^(|l|+|m|) (I - S_y^{-m} S_x^{-l}) sign(X_hat_n - S_x^l S_y^m X_hat_n) }

where beta is the step size.
The text-boxes in the "Super Resolution Parameters" correspond to the parameters in the above expressions. The "Iterations" text box defines the number of SD iterations performed by the algorithm. The "Fast Robust SR (II-E)" has similar terms and uses the same parameters.
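As an illustration of the bilateral-TV regularization term used by these algorithms, the penalty can be written directly in a few lines. This is a NumPy sketch with circular shifts; the parameter names follow the paper's notation, not the Matlab code.

```python
import numpy as np

def bilateral_tv(x, P=2, alpha=0.6):
    # Bilateral-TV penalty: L1 difference between the image and each of its
    # shifts within a (2P+1)x(2P+1) window, weighted by alpha^(|l|+|m|).
    penalty = 0.0
    for l in range(-P, P + 1):
        for m in range(-P, P + 1):
            if l == 0 and m == 0:
                continue
            shifted = np.roll(np.roll(x, l, axis=0), m, axis=1)
            penalty += alpha ** (abs(l) + abs(m)) * np.abs(x - shifted).sum()
    return penalty

rng = np.random.default_rng(0)
print(bilateral_tv(np.ones((8, 8))))              # 0.0 -- flat images cost nothing
print(bilateral_tv(rng.normal(size=(8, 8))) > 0)  # True -- noise is penalized
```

Because the penalty is zero for constant regions but grows with every shifted difference, minimizing it favors piecewise-smooth images while still allowing sharp edges.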
The "resolution factor" text box defines the scaling of the HR image. The "PSF Kernel Size" and "PSF Sigma" define the parameters of the Gaussian blurring kernel.
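For reference, a Gaussian kernel built from those two parameters might look as follows. This is a NumPy sketch whose argument names mirror the "PSF Kernel Size" and "PSF Sigma" GUI fields; the Matlab code may construct its kernel differently (e.g. via fspecial).

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Build a normalized size-by-size Gaussian blur kernel centered on the grid.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()   # normalize so the blur preserves image brightness

print(gaussian_psf(5, 1.0).sum())  # -> ~1.0
```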
The "Compute SR" button runs the selected SR algorithm and displays the result. You can toggle between the previous SR output and the current one using the check-box on top of the image; this can be used to compare the results of two algorithms.
"Clear Image" enables you to clear the SR image.
"Save HR" enables you to save the high resolution image to a file.
The super-resolution algorithms were run on a set of examples downloaded from MDSP and a few that were generated synthetically. The following images depict the results of the algorithms:
On the left, you can see the low resolution image of Lena (this is actually a synthetic sequence, generated by blurring the original Lena image, downsampling, and adding white noise). On the right you can see the result of the Fast and Robust super resolution algorithm.
The alpaca sequence was captured by a real camera (and downloaded from MDSP). On the left are two low resolution images. The special case here is that, in addition to the normal translational motion of the camera, outlier images were introduced by moving the alpaca. On the right you can see the result of the Robust super-resolution algorithm, which managed to cope with these outliers and generate quite an impressive HR image.
Emily, too, was downloaded from MDSP and captured by a real camera. On the left you can see the original LR image. In the center you can see the result of the robust estimation. On the right you can see the result of the fast and robust implementation. In this case the HR image is a factor-4 improvement!
The text sequence was also downloaded from MDSP and captured by a real camera. The improvement here is by a factor of two as well, and note how well the edges are preserved by the algorithm.