Abstract: Image fusion has recently attracted considerable interest in a variety of fields. This study presents a novel fusion scheme: medical image fusion with guided filtering and pixel screening. First, the source images are merged by weighted fusion to form the input image for subsequent filtering. Then, the filtering output is compressed by Dynamic Range Compression (DRC) to highlight edge information. Finally, a pixel screening strategy is applied to complete the texture structure of the fused result. Compared with the fusion results of weighted averaging, the Discrete Wavelet Transform (DWT) and the raw guided-filtering output, the proposed scheme achieves the largest Mutual Information (MI), and its fusion results are also very satisfactory in terms of edge and texture information. The comparison shows that the proposed scheme outperforms state-of-the-art fusion schemes in improving the quality of the fused image.
INTRODUCTION
In recent decades, image fusion has been widely applied in a number of areas, including remote sensing (Choi et al., 2011; Luo et al., 2013), clinical investigation and disease diagnosis (Li et al., 2012; Porter et al., 2001; Wang and Ma, 2008) and military reconnaissance (Seng et al., 2013). Generally speaking, image fusion is the process of integrating information from two or more images of the same scene into a single image; the fused image is more suitable for human and machine perception or for further image-processing tasks (Liang et al., 2012; Tang, 2004). Image fusion has recently attracted the interest of many researchers, who have presented a number of novel fusion schemes tailored to specific imaging types in order to resolve emerging fusion problems (Choi et al., 2011; Luo et al., 2013; Li et al., 2012; Liang et al., 2012).
Concerning image fusion, many schemes have been proposed. The most direct is preset weighted fusion, which requires sufficient prior knowledge to obtain appropriate weight coefficients; incorrect weights can be disastrous for the fused image. Multi-resolution decomposition based fusion methods are also widely used, such as the Discrete Wavelet Transform (DWT) (Piella, 2003; Hamza et al., 2005), complex wavelets (Wan et al., 2009), curvelets (Li and Yang, 2008), contourlets (Zhang and Guo, 2009) and pyramid transforms (Toet, 1990). The key issue for multiscale transforms is how to select appropriate decomposition levels and fusion rules, which determines the final fusion quality and involves considerable subjectivity. Even so, DWT-based fusion schemes remain attractive when massive volumes of image data must be merged quickly (Rahman et al., 2010). Sparse-representation-based fusion methods have also seen considerable development (Yang and Li, 2010); although approximate, they can effectively handle large amounts of data through dictionary learning.
Motivated by the idea of guided filtering (He et al., 2010), the proposed fusion scheme approaches fusion from the viewpoint of image filtering and is, in that sense, entirely new. The fused image not only has well-preserved edge information but also a clear texture structure. In this study, four groups of medical images serve as test images, and the fusion results demonstrate that the proposed scheme is highly effective for medical image fusion.
Edge-preserving filtering and pixel screening based image fusion: Figure 1 shows the steps of the proposed fusion scheme, which consists of three key steps: (1) weighted fusion of the source images to form the input image for guided filtering, (2) Dynamic Range Compression (DRC) of the filtered result and (3) completing the texture information of the fused image by setting a threshold to screen appropriate pixels. The core of this study is to combine the edge-preserving property of guided filtering with the detail-restoring property of pixel screening.
Weighted fusion of source images: Let the original input images be I1 and I2; the preliminary weighted fusion result serves as the input image p for the next step. This process can be expressed as follows:
(1)
(2)
where i and j are pixel indexes, p is the preliminary fusion result and ε is a regularization parameter that prevents the denominator from being zero.
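The exact weighting rule of Eq. 1-2 is not reproduced above; the following sketch assumes a local-energy weighting normalized so that the denominator is guarded by a small ε, matching the role described for ε in the text. The saliency measure (squared pixel intensity) is an assumption, not the paper's formula.

```python
import numpy as np

def weighted_fusion(i1, i2, eps=1e-6):
    """Preliminary weighted fusion of two source images (sketch of Eq. 1-2).

    Assumes squared intensity as the per-pixel saliency weight; eps plays
    the role described in the text of keeping the denominator nonzero.
    """
    i1 = i1.astype(np.float64)
    i2 = i2.astype(np.float64)
    w1 = i1 ** 2  # assumed saliency measure (pixel energy)
    w2 = i2 ** 2
    # Normalized weighted average; eps avoids division by zero
    return (w1 * i1 + w2 * i2) / (w1 + w2 + eps)
```

The result p then serves directly as the input image for the guided filter in the next step.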
Edge-preserving guided image filtering: The basic idea of guided filtering is to solve a local linear model and obtain the corresponding linear coefficients. It is in fact a generalized expression of bilateral filtering; its basic form can be expressed as follows.
Fig. 1: Flowchart of fusion process
The filtering output at a pixel i is expressed as a weighted average:
q_i = Σ_j W_ij(I) p_j    (3)
where i and j are pixel indexes. The filter kernel W_ij is a function of the guidance image I and independent of p, so the filter is linear with respect to p. A key assumption of the guided filter is a local linear model between the guidance I and the filter output q: q is assumed to be a linear transform of I in a window ω_k centered at the pixel k:
q_i = a_k I_i + b_k,  ∀ i ∈ ω_k    (4)
This local linear model ensures that q has an edge only where I has an edge, because Δq = aΔI. The linear coefficients are determined by minimizing the following cost function in the window:
E(a_k, b_k) = Σ_{i∈ω_k} ((a_k I_i + b_k − p_i)² + ε a_k²)    (5)
where ε is a regularization parameter preventing a_k from being too large. The solution to (5) is given by linear regression:
a_k = ((1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k) / (σ_k² + ε)    (6)
b_k = p̄_k − a_k μ_k    (7)
where μ_k and σ_k² are the mean and variance of I in ω_k, |ω| is the number of pixels in ω_k and p̄_k = (1/|ω|) Σ_{i∈ω_k} p_i is the mean of p in ω_k.
The linear model is applied to all local windows in the entire image, so the output q_i is the average of all its possible values. Computing (a_k, b_k) for all windows ω_k in the image, the filter output is obtained by:
q_i = ā_i I_i + b̄_i,  where ā_i = (1/|ω|) Σ_{k∈ω_i} a_k and b̄_i = (1/|ω|) Σ_{k∈ω_i} b_k    (8)
Since (ā_i, b̄_i) are the output of an average filter, their gradients are much smaller than those of I near strong edges. In this situation Δq ≈ āΔI, meaning that abrupt intensity changes in I are largely maintained in q. This is why guided filtering preserves image edge information so well. Moreover, W_ij(I) from (3) can be explicitly expressed by (9), with Σ_j W_ij(I) = 1 (the detailed derivation can be found in the supplementary materials of He et al. (2010)):
W_ij(I) = (1/|ω|²) Σ_{k:(i,j)∈ω_k} (1 + (I_i − μ_k)(I_j − μ_k)/(σ_k² + ε))    (9)
In this study, the guidance image I and the input image p are identical, the local window is 16×16 and ε = 0.01.
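The guided filter of Eq. 4-8 reduces to a handful of box (mean) filters, which is what makes it fast. The sketch below follows He et al. (2010); the radius r = 8 is an assumption chosen to roughly match the 16×16 window stated above.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=0.01):
    """Guided filter (He et al., 2010) via box filters.

    I: guidance image, p: input image, both float arrays in [0, 1].
    Computes the per-window linear coefficients a_k, b_k (Eq. 6-7),
    then averages them over all windows covering each pixel (Eq. 8).
    """
    size = 2 * r + 1
    mean = lambda x: uniform_filter(x, size=size, mode='reflect')
    mu_I, mu_p = mean(I), mean(p)
    var_I = mean(I * I) - mu_I ** 2          # sigma_k^2
    cov_Ip = mean(I * p) - mu_I * mu_p
    a = cov_Ip / (var_I + eps)               # Eq. 6
    b = mu_p - a * mu_I                      # Eq. 7
    return mean(a) * I + mean(b)             # Eq. 8: q = a_bar*I + b_bar
```

In this study I and p are identical (self-guidance), so the call is simply `guided_filter(p, p)`.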
Dynamic range compression of the filtering outcome: To avoid artifacts and enhance the detailed information in the filter output, the following function (10) is adopted to modify the filtered result. The literature shows that applying a DRC algorithm to the target field does not affect texture and structure information (Wang et al., 2006), and the compressed result also effectively avoids ringing artifacts. Details of DRC can be found in Wang et al. (2006) and Fattal et al. (2002):
(10)
where the parameter ε of (10) lies within (0, 1) and controls the strength of the change applied to q_i. Here, ε is set to 0.8 and λ = 0.8·mean(|q_i|).
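The exact form of Eq. 10 is not reproduced above. The sketch below assumes a Fattal-style attenuation, q_i·(λ/|q_i|)^(1−ε), which is consistent with the cited work (Fattal et al., 2002) and with the stated roles of ε and λ, but is an assumption rather than the paper's confirmed formula.

```python
import numpy as np

def dynamic_range_compress(q, eps_drc=0.8):
    """Sketch of the DRC step (Eq. 10) under an assumed Fattal-style form.

    Each value is scaled by (lam / |q_i|)^(1 - eps_drc): magnitudes above
    lam are compressed, those below are boosted; eps_drc in (0, 1)
    controls the strength. lam = 0.8 * mean(|q|) as stated in the text.
    """
    lam = 0.8 * np.mean(np.abs(q))
    mag = np.abs(q)
    # Guard zero magnitudes; sign of q is preserved by multiplying q itself
    scale = np.where(mag > 0,
                     (lam / np.maximum(mag, 1e-12)) ** (1.0 - eps_drc),
                     1.0)
    return q * scale
```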
Pixel screening: Guided filtering has an edge-preserving smoothing property, but it ignores internal texture information, which is disadvantageous for subsequent image analysis and application. To solve this problem, a simple strategy is to screen each pixel of the output image q by setting a threshold; the process is illustrated in Fig. 2. Extensive tests show that a threshold of 0.5 (after pixel normalization) is suitable for medical images; in other words, it is an empirical value.
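The precise screening rule is defined by the flowchart in Fig. 2, which is not reproduced here. The sketch below assumes one plausible reading: wherever the filtered value falls below the empirical threshold, the pixel is replaced by the larger of the two source-image values, re-injecting texture that the smoothing removed. Both the replacement rule and the choice of `max` as the texture source are assumptions.

```python
import numpy as np

def pixel_screening(q, i1, i2, threshold=0.5):
    """Sketch of pixel screening (Fig. 2); all arrays normalized to [0, 1].

    Assumed rule: pixels of the filtered image q below the empirical
    threshold (0.5) are replaced by the brighter of the two source-image
    pixels to restore texture detail lost to smoothing.
    """
    detail = np.maximum(i1, i2)  # assumed texture source
    return np.where(q < threshold, detail, q)
```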
Fig. 2: Flowchart of pixel screening
Experimental results and discussion: Here, the proposed fusion scheme is tested on several groups of medical images. The quality of the fused images is assessed both subjectively, via the Human Visual System (HVS), and by an objective evaluation standard.
Experimental results and objective evaluation: Experiments are performed on four groups of 256-level images, each consisting of a pair of medical images from different sources. To show the advantages of the proposed scheme, it is compared with other schemes, including DWT (with the DBSS(2,2) wavelet basis), the Laplacian pyramid, the raw guided-filtering output and weighted fusion.
Images in group 1 are acquired from the same position in the brain using different devices. Image I1, shown in Fig. 3a, is a CT image that shows bone structures, while image I2, shown in Fig. 3b, is an MR image that shows areas of soft tissue. Figure 3 shows the final fusion results with the different fusion methods. With regard to group 2, shown in Fig. 4, Fig. 4a is a CT image and Fig. 4b is an MR image; the fused images are shown in Fig. 4c-f.
Fig. 3(a-f): The first group of source images and fusion results using different schemes: (a-b) Source images and (c-f) Fused images with different fusion schemes. DWT: Discrete wavelet transform
Fig. 4(a-f): The second group of source images and fusion results using different schemes: (a-b) Source images and (c-f) Fused images with different fusion schemes. DWT: Discrete wavelet transform
Group 3 is shown in Fig. 5 and group 4 in Fig. 6. Figures 5a-b and 6a-b come from the website of the Atlas project, which is made possible in part by the Departments of Radiology and Neurology at Brigham and Women's Hospital, Harvard Medical School, the Countway Library of Medicine and the American Academy of Neurology [http://www.med.harvard.edu/AANLIB].
To objectively assess the various fusion results, Mutual Information (MI) is used as the standard for estimating the performance of the different fusion schemes. The goal of fusion is to create a fused image that acquires as much information from each source image as possible; the more information obtained from the source images, the better the fusion. MI measures exactly this: the larger the MI value, the better the fusion result. Table 1 shows the evaluation results.
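MI between the fused image and each source can be computed from joint histograms; the overall fusion metric is usually taken as the sum MI(F, I1) + MI(F, I2), which is the convention assumed in this sketch (the paper does not spell out its exact MI formulation).

```python
import numpy as np

def mutual_information(a, b, bins=256):
    """Mutual information (in bits) between two images via a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()                     # joint distribution
    px = pxy.sum(axis=1, keepdims=True)         # marginal of a
    py = pxy.sum(axis=0, keepdims=True)         # marginal of b
    nz = pxy > 0                                # skip empty cells
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused, src1, src2):
    """Assumed fusion-quality metric: MI to each source image, summed."""
    return mutual_information(fused, src1) + mutual_information(fused, src2)
```

For a 256-level image identical to itself, MI equals the image entropy, which provides a quick sanity check of the implementation.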
Fig. 5(a-f): The third group of source images and fusion results using different schemes: (a-b) Source images and (c-f) Fused images with different fusion schemes. DWT: Discrete wavelet transform
Fig. 6(a-f): The fourth group of source images and fusion results using different schemes: (a-b) Source images and (c-f) Fused images with different fusion schemes. DWT: Discrete wavelet transform
Table 1: Mutual information of fused images in different groups
DWT: Discrete wavelet transform
Analyses and discussion: Many schemes for medical image fusion have been developed in the last decade. DWT-based fusion is probably the most popular (Li et al., 1995; Chipman et al., 1995; Wang et al., 2003; Li et al., 2003; Chen and Di, 2004), and its results outperform those of many other schemes, making it a natural benchmark. Comparing the proposed scheme with DWT-based fusion therefore demonstrates the effectiveness and robustness of this study.
For the four image sets, the corresponding fusion results are given in Fig. 3-6, respectively. It can easily be seen that the weighted-fusion image reduces the contrast of features uniquely present in either of the source images. In Fig. 3d, the DWT fusion result is almost an anamorphic image. Although the DWT fusion quality of the last two groups is improved to a certain extent, only a few significant characteristics are inherited from the source images, which is not sufficient for subsequent clinical diagnosis. Likewise, the raw guided-filter outputs in Fig. 3-6 are also unacceptable: they ignore a great deal of texture information and strengthen many false contours. Undoubtedly, in terms of visual effect, the best fusion results are obtained with the proposed scheme. The features and detail presented in Fig. 3c-6c are much richer than in the other fused images; image contents such as tissues are clearly enhanced, and other useful information such as brain boundaries and shape is almost perfectly preserved.
In addition to the visual analysis, a quantitative analysis is conducted using MI as the evaluation criterion. From Table 1, the MI values of the proposed scheme in groups 1, 2 and 4 are 5.1273, 3.3597 and 3.7059, respectively, the largest in each group. Although in group 3 the raw guided-filter output scores slightly better numerically, it contains false contours and ignores the texture and structure of the soft tissues. Comparatively speaking, the proposed scheme is superior in both the subjective and the objective evaluations.
CONCLUSION
With the development of medical imaging technology, more and more medical images are available, but a single imaging modality cannot meet the needs of clinical applications. Medical image fusion therefore plays an important role in clinical disease diagnosis. The proposed medical image fusion scheme accounts for both edge and texture information in the fused image at the same time. The study shows that it is effective and well suited to solving the multimodal medical image fusion problem.
ACKNOWLEDGMENT
This study was supported by the China Postdoctoral Science Foundation (20100471665).