INTRODUCTION
Up to now, a large number of Super Resolution (SR) methods^{1-3} have been developed successfully and widely used in many application fields. However, there is very little research on SR from the perspective of biology. In biology, visual hyperacuity is a phenomenon analogous to the technique of SR, in which the acuity of perceiving stimulus detail is far better than the resolution of the photoreceptors in the retina^{4}. Although visual hyperacuity and SR share the same objective, namely resolving detail of the real world beyond the sampling limit, there has been little cross-application between them, since SR originated not in biology but in signal processing and was later applied to image processing.
In this study, a hyperacuity imaging system that incorporates the principle behind the human eye is presented. By analyzing the properties of involuntary eye movement, a key element in the human hyperacuity phenomenon, a system that imitates the working mechanism of the human eye is built. The system comprises two main steps: raw data acquisition and High Resolution (HR) reconstruction. In the acquisition step, key frames are selected according to blurring degree and subpixel motion parameters. In the reconstruction step, a new algorithm that incorporates the Majorization Minimization (MM) algorithm is presented. To the best of our knowledge, no previous research relates SR to biological hyperacuity.
MATERIALS AND METHODS
Visual hyperacuity: The visual acuity of human eyes is limited by the distance between two sense cells on the fovea to about 1 arcmin of visual angle, which means anything smaller than that cannot be distinguished^{5}. However, human eyes are capable of resolving a displacement of less than 10 arcsec, about one third of the diameter of a foveal cone. This phenomenon is called hyperacuity and has given rise to a large number of psychophysical studies and several qualitative theories about perception as well as the underlying neuronal properties. The reason the spatial resolution of the human eye exceeds the traditional visual acuity lies in the different measurement methods. Traditional visual acuity describes the ability of the human eye to distinguish two objects as separate. Due to diffraction, the optical system of the human eye cannot reproduce an object as a sufficiently compact image: even for the smallest object, its light spreads over the fovea and covers more than a dozen sense cells. According to the Rayleigh criterion, two objects can be resolved as separate only if they are apart by at least half the width of the joint light distribution. Fig. 1a illustrates the difference between visual acuity and hyperacuity.
Hyperacuity, however, refers to visual capabilities in which the detectable difference in separation is lower than this resolution limit. No laws of physics are violated in the hyperacuity phenomenon, since the human eye is not only a spatial system but a spatiotemporal one.

Fig. 1(a-b): 
(a) Illustration of visual acuity and hyperacuity. The relative location of two objects can be detected when the shift is much smaller than the one required to resolve them as separate and (b) Involuntary eye movements: Tremor (curved lines) and microsaccades (straight lines) 

Fig. 2:  Hyperacuity mechanism 
Researchers maintain that hyperacuity requires eye movements to achieve its high spatial resolution. Eye movements are critical in stitching our visual perception of the world into a seemingly large, continuous scene. Microsaccades are the most important fixational eye movements; they are the unconscious motions of the eye made when fixation is needed after large eye movements. Figure 1b shows the concept of microsaccades. Their peak velocities and durations are parametrically related to microsaccade amplitude. However, there is no correlation between changes in microsaccade amplitude and visual acuity.
Mechanism design: The system is constructed in strict compliance with the principle of biological hyperacuity. A motor mounted under the signal acquisition device imitates the effect of the microsaccades of the human eye. Figure 2 shows the scheme of the proposed mechanism. The motor provides horizontal and vertical motions for the sensor. Since the obtained motion is unpredictable, it can be considered random, similar to the involuntary microsaccades of the human eye. The temporal resolution of the human eye lies between 15 and 35 ms and degrades at very low and very high speeds, depending on the experimental conditions. In other words, an appropriate velocity of movement is necessary for temporal offsets to be perceived as spatial offsets. There is no mechanism for human eyes to control and adjust the speed of the involuntary microsaccades to a specific value. However, it is known that our eyes reach their maximum speed several times per second, which guarantees that enough spatial offsets are perceived. As Fig. 2 shows, the velocity of the motor continuously oscillates along the x-axis and reaches its peak value in a short time.
Human eyes are able to discard the visual scene perceived when the velocity exceeds this interval. Thus, the system needs a similar mechanism to constrain the frames obtained by the sensor. A frame that meets the standards of appropriate spatial offsets is referred to as a key frame. Here, the blur degree of the image is used to determine whether an image was captured while the sensor was moving too fast. First, two blurring operators are defined as follows:
Then the convolutions between the image and the operators are calculated by:
where f(x, y) represents the image function, f_{x}(x, y) the convolution between f(x, y) and K_{x} and f_{y}(x, y) the convolution between f(x, y) and K_{y}. The blur degree S of the image can be obtained by:
where ||·||_{2} is the l_{2} norm of a matrix. All frames satisfying S>σ are removed, where σ is a predefined blurring threshold. Note that image noise is not used as a factor in deciding whether a frame is a key frame, since the noise models for all frames are assumed to be the same.
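The key-frame blur test can be sketched in a few lines. Since the operators K_{x} and K_{y} are not reproduced in this excerpt, the sketch assumes simple horizontal and vertical difference kernels and takes the blur degree S as the inverse of the total gradient energy, so a larger S means a blurrier frame; apart from the S>σ comparison taken from the text, the kernel choice and function names are illustrative assumptions:

```python
import numpy as np

def blur_degree(f):
    # Horizontal/vertical difference kernels stand in for the paper's
    # blurring operators K_x, K_y (an assumption; the originals are not
    # reproduced in this excerpt).
    f = np.asarray(f, dtype=float)
    fx = f[:, 1:] - f[:, :-1]          # convolution with K_x = [-1, 1]
    fy = f[1:, :] - f[:-1, :]          # convolution with K_y = [-1, 1]^T
    grad_energy = np.linalg.norm(fx) + np.linalg.norm(fy)
    # Larger gradient energy means a sharper image, so the blur degree is
    # its inverse (scaled by pixel count to be comparable across sizes).
    return f.size / (grad_energy + 1e-12)

def select_sharp_frames(frames, sigma):
    """Keep only frames whose blur degree S does not exceed threshold sigma."""
    return [g for g in frames if blur_degree(g) <= sigma]
```

Frames grabbed while the sensor moves too fast have low gradient energy and hence a large S, so they are rejected exactly as the S>σ rule above prescribes.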
The motion parameters among all frames should also satisfy a certain condition. Here, a feature-based motion estimation method is used to estimate the motion parameters. Let P = {P_{i} = (x_{i}, y_{i}), i = 0, …, M-1}, where P_{i} is the central point of the i-th image. Since a frame contributes nothing to the super resolution reconstruction procedure if its movement is an integral number of pixels, P is processed so that the integer parts are removed and only the fractional parts are kept, resulting in P′. Let D = {d_{i,j}, i = 0, …, M-1; j = 0, …, M-1; i≠j} represent the distances between any two of the points in P′, where d_{i,j} is the Euclidean distance:
where 0≤d_{i,j}≤0.5 reflects the relative distance between two images at the subpixel level; a larger d_{i,j} means a shorter distance and d_{i,j} = 0.5 means the two images overlap completely at the subpixel level. The indices of the key frames can be obtained by:
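The subpixel screening step can be illustrated as follows. The exact formula for d_{i,j} appears in an equation not reproduced in this excerpt, so the sketch measures subpixel separation as the wrap-around Euclidean distance between fractional offsets and greedily discards frames that nearly overlap an already kept frame; the function names and the min_sep threshold are illustrative assumptions consistent with the intent described above:

```python
import numpy as np

def fractional_offsets(points):
    """Strip integer pixel shifts: only the fractional parts matter for SR."""
    return np.asarray(points, dtype=float) % 1.0

def wrapped_separation(p, q):
    # Per-axis circular distance on [0, 1): offsets 0.05 and 0.95 are
    # only 0.1 apart in subpixel terms.
    d = np.abs(p - q)
    d = np.minimum(d, 1.0 - d)
    return float(np.hypot(d[0], d[1]))

def select_diverse_frames(points, min_sep=0.1):
    """Greedily keep frames whose subpixel offset is not a near-duplicate
    of an already kept frame (an assumed stand-in for the d_{i,j} test)."""
    fracs = fractional_offsets(points)
    kept = []
    for i, p in enumerate(fracs):
        if all(wrapped_separation(p, fracs[j]) >= min_sep for j in kept):
            kept.append(i)
    return kept
```

For example, frames with centers (10.0, 3.0) and (12.0, 5.0) share the fractional offset (0, 0) and one of them is discarded, while a frame at (7.25, 1.5) contributes a genuinely new subpixel sample.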
Super resolution reconstruction: Traditionally, regularization has been described from both the algebraic and statistical perspectives. Using regularization techniques, the desired HR image can be solved by minimizing the function:
where y is the vectorized version of the LR images, x is the desired HR image, λ is a constant controlling the strength of Γ(x) and Γ(x), denoting the Total Variation (TV) term^{6}, is introduced to regularize the SR problem:
Then the reconstructed image can be obtained by minimizing L(x):
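The equation images for this step are not reproduced in this excerpt; a plausible form of the objective, the TV term and the minimization, consistent with standard TV-regularized SR (the composite warp-blur-decimation operator W is an assumption), is:

```latex
% Assumed forms: TV-regularized SR objective, TV regularizer and the
% minimization; W collects warp, blur and decimation from HR to LR.
L(\mathbf{x}) = \lVert \mathbf{y} - W\mathbf{x}\rVert_2^2
              + \lambda\,\Gamma(\mathbf{x}),
\qquad
\Gamma(\mathbf{x}) = \sum_{i}\sqrt{(\Delta_i^{h}\mathbf{x})^2
                                 + (\Delta_i^{v}\mathbf{x})^2},
\qquad
\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} L(\mathbf{x})
```

Here Δ_i^h and Δ_i^v denote horizontal and vertical first differences at pixel i, the usual discretization of the isotropic TV term.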
In most situations, it is impossible to estimate the high-resolution image by directly minimizing Eq. 8, because high-frequency errors are amplified in this estimate. The Majorization Minimization (MM) method^{7} is one of the most popular strategies for solving the nonlinear problem in Eq. 8. The MM algorithm is of the following form:
where x^{(m)} is the HR image at the m-th iteration. For any given x and x^{(m)}, G(x|x^{(m)})≥L(x), with equality only when x = x^{(m)}, which means G(x|x^{(m)}) is an upper-bounding (majorizing) function of L(x) that touches it only at x = x^{(m)}. Under this assumption, L(x) and G(x|x^{(m)}) satisfy the following relation:
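The relation described above (its equation image is not reproduced here) is the standard MM descent chain, which follows directly from the majorization property:

```latex
% Standard MM descent chain: minimizing the majorizer G never increases L.
\mathbf{x}^{(m+1)} = \arg\min_{\mathbf{x}} G(\mathbf{x}\,|\,\mathbf{x}^{(m)}),
\qquad
L(\mathbf{x}^{(m+1)}) \le G(\mathbf{x}^{(m+1)}\,|\,\mathbf{x}^{(m)})
\le G(\mathbf{x}^{(m)}\,|\,\mathbf{x}^{(m)}) = L(\mathbf{x}^{(m)})
```

The first inequality holds because G majorizes L everywhere, the second because x^{(m+1)} minimizes G and the final equality because G touches L at x^{(m)}, so the objective is monotonically non-increasing over the iterations.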
To apply the MM algorithm to our problem, consider the inequality in Eq. 11:
where a≥0 and b>0. Let a and b be defined as:
and substitute a, b into Eq. 6:
The expression in Eq. 12 is equivalent to:
Equations 6-8 can then be written as:
In addition, since the constant term is independent of x, the update x^{(m+1)} can be obtained by:

Fig. 3:  Flowchart of MM reconstruction procedure 
An auxiliary term is defined as follows:
So the majorizing term can be expressed as:
Equation 17 can be written in matrix form as follows:
Where:
Then G(x|x^{(m)}) can be obtained by replacing Γ(x) with its majorizer:
where C_{3} is a constant unrelated to x. Since G(x|x^{(m)}) is a quadratic function of x, the minimization problem can be converted into the following linear system:
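The overall loop, majorize the TV term by a weighted quadratic and solve the resulting linear system, can be illustrated on a 1D denoising toy problem (observation operator equal to the identity). This is a minimal sketch of the MM strategy, not the paper's 2D reconstruction; the function name and the lam, iters and eps parameters are assumptions:

```python
import numpy as np

def tv_denoise_mm(y, lam=1.0, iters=30, eps=1e-8):
    """1D TV-regularized denoising solved by Majorization-Minimization.

    Each iteration majorizes sum |(Dx)_i| by the weighted quadratic
    sum (Dx)_i^2 / (2|Dx^(m)|_i) + const, so the update reduces to the
    linear system (I + lam * D^T W D) x = y with W = diag(1/|Dx^(m)|).
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # First-difference matrix D (dense here for clarity; sparse in practice).
    D = np.diff(np.eye(n), axis=0)
    x = y.copy()
    for _ in range(iters):
        w = 1.0 / (np.abs(D @ x) + eps)      # majorizer weights from x^(m)
        A = np.eye(n) + lam * (D.T * w) @ D  # quadratic majorizer's normal matrix
        x = np.linalg.solve(A, y)            # exact minimizer of G(x | x^(m))
    return x
```

Each pass solves the quadratic majorizer exactly, so by the MM descent chain the TV objective never increases; on a noisy piecewise-constant signal the iterates flatten the segments while preserving the jump.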
The flowchart of the reconstruction procedure is shown in Fig. 3.
RESULTS AND DISCUSSION
In the experiment, two datasets, each containing 12 LR images, are used to compare the proposed algorithm with fast and robust SR^{8}, TV regularization SR^{9} and variational Bayesian SR^{10}.
To present the performance of the proposed system more comprehensively, the original LR images interpolated by nearest neighbor and bicubic interpolation are shown in Fig. 4a and b, respectively. Reconstruction results using fast and robust SR, TV regularization SR, variational Bayesian SR and the proposed method are shown in Fig. 4c-f, respectively. As can be seen from the figure, fast and robust SR increases the image resolution with more details but cannot avoid ringing artifacts at boundaries where the blur map is discontinuous. TV regularization SR and variational Bayesian SR do not exhibit the ringing artifacts; however, compared with them, the proposed method (Fig. 4f) restores more details with fewer artifacts. Results for the second dataset are shown in Fig. 5, where the input image is magnified two times by the proposed SR algorithm and the other methods, as shown in Fig. 5c-f.
All experiments were run on an Intel Core i7 2600 3.40 GHz processor. The proposed algorithm upscales an image sequence of 12 frames from 60×80 to 120×160 in less than 30 sec on average, so the computational cost of the solution is modest.
The proposed method obtains a more detailed HR image than the other algorithms because: (1) The mechanism removes non-key frames by calculating the blur degree and the relative subpixel parameters and (2) The MM algorithm is used to optimize the TV SR model in the reconstruction procedure. Compared with the proposed algorithm, the two interpolation methods can only increase the number of pixels without introducing more real information into the final HR image. Although fast and robust SR, TV regularization SR and variational Bayesian SR are able to increase the image resolution with more details, they have no mechanism for removing non-key frames and their reconstruction procedures are not ideal compared with the proposed method. In this comparison, the proposed method provides clearly better subjective visual quality, with rich textures and sharp edges, and the increase in resolution and image quality is evident. In practice, the achievable resolution increase is also determined by optical distortion, subpixel registration accuracy and the degree of redundancy.

Fig. 4(a-f): 
An example of HR reconstruction results with different SR algorithms, (a) Original LR image interpolated by NN, (b) Original LR image interpolated by bicubic, (c) Fast and robust SR, (d) TV regularization SR, (e) Variational Bayesian SR and (f) Proposed SR 

Fig. 5(a-f): 
An example of HR reconstruction results with different SR algorithms, (a) Original LR image interpolated by NN (PSNR = 22.74, SSIM = 0.62), (b) Original LR image interpolated by bicubic (PSNR = 23.33, SSIM = 0.67), (c) Fast and robust SR (PSNR = 24.51, SSIM = 0.79), (d) TV regularization SR (PSNR = 25.70, SSIM = 0.84), (e) Variational Bayesian SR (PSNR = 25.82, SSIM = 0.83) and (f) Proposed SR (PSNR = 28.89, SSIM = 0.92) 
CONCLUSION
Visual hyperacuity is a phenomenon analogous to the technique of SR. Although visual hyperacuity and SR share the same objective, there has been little cross-application between them. In this study, a novel resolution-enhancement method that relates conventional super resolution to biological visual hyperacuity is presented. By analyzing the properties of involuntary eye movement, a key element in human visual hyperacuity, an optical hyperacuity mechanism is proposed to construct a more detailed HR image from a sequence of LR images. The proposed mechanism is able to increase the image resolution obtained by a sensor built on a vibrating system beyond the Nyquist limit. Future work will include using other degradation models to describe the imaging process in the SR reconstruction problem.
ACKNOWLEDGMENTS
This study was supported by the National Natural Science Foundation of China (Grant Nos. 61171155 and 61571364).