Journal of Software Engineering

Year: 2013 | Volume: 7 | Issue: 2 | Page No.: 77-85
DOI: 10.3923/jse.2013.77.85
Medical Image Fusion Using Guided Filtering and Pixel Screening Based Weight Averaging Scheme
Changtao He, Yaohui Qin, Guiqun Cao and Fangnian Lang

Abstract: Recently, image fusion has attracted considerable interest in various areas. This study presents a novel medical image fusion scheme based on guided filtering and pixel screening. First, the source images are merged by weighted fusion to form the input image for subsequent filtering. Then, the filtering result is compressed by Dynamic Range Compression (DRC) to highlight edge information. Finally, a pixel screening strategy is exploited to restore the texture structure of the fused result. Compared with the fusion results of weighted averaging, the Discrete Wavelet Transform (DWT) and the raw guided filtering output, the proposed scheme yields the largest Mutual Information (MI) and its fusion results are also very satisfactory in terms of edge and texture information. The comparison shows that the proposed scheme outperforms state-of-the-art fusion schemes in improving the quality of the fused image.

Keywords: weighted fusion, dynamic range compression, edge-preserving filtering, pixel screening

INTRODUCTION

In recent decades, image fusion techniques have been widely applied in a number of different areas, including remote sensing (Choi et al., 2011; Luo et al., 2013), clinical investigation and disease diagnosis (Li et al., 2012; Porter et al., 2001; Wang and Ma, 2008) and military reconnaissance (Seng et al., 2013). Generally speaking, image fusion is the process of integrating information from two or more images of the same scene into a single image; the fused image is more suitable for human and machine perception and for further image processing tasks (Liang et al., 2012; Tang, 2004). Recently, image fusion has attracted the interest of many researchers. To resolve emerging fusion problems, researchers have presented plenty of novel fusion schemes tailored to specific types of imaging (Choi et al., 2011; Luo et al., 2013; Li et al., 2012; Liang et al., 2012).

Concerning image fusion, many schemes have been proposed. The most direct is preset weighted fusion, which requires sufficient prior knowledge to obtain appropriate weight coefficients; poorly chosen weights can severely degrade the fused image. Multi-resolution decomposition based fusion methods are also commonly used, including the Discrete Wavelet Transform (DWT) (Piella, 2003; Hamza et al., 2005), complex wavelets (Wan et al., 2009), curvelets (Li and Yang, 2008), contourlets (Zhang and Guo, 2009) and pyramid transforms (Toet, 1990). The key issue in multiscale transforms lies in selecting appropriate decomposition levels and fusion rules, which determines the final fusion quality and involves considerable subjectivity. Even so, DWT-based fusion schemes remain attractive when massive volumes of image data must be merged quickly (Rahman et al., 2010). Meanwhile, sparse representation based fusion methods have achieved considerable development (Yang and Li, 2010); although approximate, they can effectively handle large amounts of data through dictionary learning.

Motivated by the idea of guided filtering (He et al., 2010), this study proposes a brand-new fusion scheme from the point of view of image filtering. The fused image not only has well-preserved edge information but also a clear texture structure. Four groups of medical images are employed as test images and the fusion results demonstrate that the proposed scheme is very effective for medical image fusion.

Edge-preserving filtering and pixel screening based image fusion: Figure 1 shows the steps involved in the proposed fusion scheme, which consists of three key steps: (1) weighted fusion of the source images to form the input image for guided filtering, (2) Dynamic Range Compression (DRC) of the filtered result and (3) refinement of the texture information of the fused image by setting a threshold to screen appropriate pixels. The core of this study is to combine the edge-preserving property of guided filtering with the detail-restoring property of pixel screening.

Weighted fusion of source images: Let the original input images be I1 and I2 and take the preliminary weighted fusion result as the input image p of the next step. This process can be expressed as follows:

(1)

(2)

where i and j are pixel indexes, p is the preliminary fusion result and ε is a regularization parameter that keeps the denominator from being zero.
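As an illustration only, the following Python sketch performs such a preliminary weighted fusion with normalized, locally derived weights and a small ε guarding the denominator, as described above. The squared-intensity weighting is an assumption for illustration; the paper's actual weights are defined by Eq. 1-2, which are not reproduced here.

```python
import numpy as np

def weighted_fusion(i1, i2, eps=1e-6):
    """Preliminary weighted fusion of two source images (illustrative).

    Weights are assumed proportional to squared pixel intensity and are
    normalized, with eps keeping the denominator from vanishing as the
    text describes. Inputs are float images scaled to [0, 1].
    """
    i1 = i1.astype(np.float64)
    i2 = i2.astype(np.float64)
    w1, w2 = i1 ** 2, i2 ** 2                  # assumed saliency weights
    return (w1 * i1 + w2 * i2) / (w1 + w2 + eps)
```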

Edge-preserving guided image filtering: The basic idea of guided filtering is to solve a linear model and obtain the corresponding linear coefficients. It is actually a generalized expression of bilateral filtering; its basic form can be expressed as follows.

Fig. 1: Flowchart of fusion process

The filtering output at a pixel i is expressed as a weighted average:

$$q_i = \sum_j W_{ij}(I)\, p_j \qquad (3)$$

where i and j are pixel indexes. The filter kernel W_ij is a function of the guidance image I and independent of p; the filter is thus linear with respect to p. A key assumption of the guided filter is a local linear model between the guidance I and the filter output q. Assume that q is a linear transform of I in a window ω_k centered at pixel k:

$$q_i = a_k I_i + b_k, \quad \forall i \in \omega_k \qquad (4)$$

This local linear model ensures that q has an edge only if I has an edge, because ∇q = a_k∇I. To determine the linear coefficients, the following cost function is minimized in the window:

$$E(a_k, b_k) = \sum_{i \in \omega_k} \left[ (a_k I_i + b_k - p_i)^2 + \varepsilon a_k^2 \right] \qquad (5)$$

here, ε is a regularization parameter preventing a_k from being too large. The solution to Eq. 5 is given by linear regression:

$$a_k = \frac{\frac{1}{|\omega|} \sum_{i \in \omega_k} I_i p_i - \mu_k \bar{p}_k}{\sigma_k^2 + \varepsilon} \qquad (6)$$

$$b_k = \bar{p}_k - a_k \mu_k \qquad (7)$$

here, μ_k and σ_k² are the mean and variance of I in ω_k, |ω| is the number of pixels in ω_k and:

$$\bar{p}_k = \frac{1}{|\omega|} \sum_{i \in \omega_k} p_i$$

is the mean of p in ω_k.

The linear model is applied to all local windows in the entire image. Since a pixel i is covered by multiple windows, the output q_i is the average of all the possible values of q_i. After computing (a_k, b_k) for all patches ω_k in the image, the filter output is obtained by:

$$q_i = \bar{a}_i I_i + \bar{b}_i, \quad \bar{a}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} a_k, \quad \bar{b}_i = \frac{1}{|\omega|} \sum_{k \in \omega_i} b_k \qquad (8)$$

It is clear that (ā_i, b̄_i) are the outputs of an average filter, so their gradients are much smaller than those of I near strong edges. In this situation, ∇q ≈ ā∇I, meaning that abrupt intensity changes in I are mostly maintained in q. This is why guided filtering preserves image edge information so well. Moreover, W_ij(I) from Eq. 3 can be explicitly expressed by Eq. 9 and satisfies Σ_j W_ij(I) = 1 (for the detailed derivation, see the supplementary material of He et al., 2010):

$$W_{ij}(I) = \frac{1}{|\omega|^2} \sum_{k\,:\,(i,j) \in \omega_k} \left( 1 + \frac{(I_i - \mu_k)(I_j - \mu_k)}{\sigma_k^2 + \varepsilon} \right) \qquad (9)$$

In this study, the guidance image I and the input image p are identical. The local window is 16×16 and ε = 0.01.
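The guided filter of Eq. 3-8 can be implemented efficiently with box-filter means. The following Python sketch follows He et al. (2010), assuming float images scaled to [0, 1]; SciPy's `uniform_filter` performs the window averaging.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, size=16, eps=0.01):
    """Guided filter of Eq. 3-8 (He et al., 2010) via box-filter means.

    I: guidance image, p: filtering input (identical in this study),
    size: side length of the local window (16x16 here), eps: regularizer.
    """
    I = I.astype(np.float64)
    p = p.astype(np.float64)
    mean_I = uniform_filter(I, size)           # mu_k
    mean_p = uniform_filter(p, size)           # p-bar_k
    corr_Ip = uniform_filter(I * p, size)
    corr_II = uniform_filter(I * I, size)

    var_I = corr_II - mean_I * mean_I          # sigma_k^2
    cov_Ip = corr_Ip - mean_I * mean_p

    a = cov_Ip / (var_I + eps)                 # Eq. 6
    b = mean_p - a * mean_I                    # Eq. 7

    mean_a = uniform_filter(a, size)           # a-bar over covering windows
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b                 # Eq. 8
```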

Dynamic range compression of filtering outcome: To avoid artifacts and enhance the detailed information in the filter output, the following function (Eq. 10) is adopted to modify the aforementioned filtered result. The literature demonstrates that applying a DRC algorithm to the target field does not affect texture and structure information (Wang et al., 2006). Furthermore, the compressed result also effectively avoids the “ringing” phenomenon. Detailed information about DRC can be found in Wang et al. (2006) and Fattal et al. (2002):

$$\hat{q}_i = \operatorname{sign}(q_i)\, \lambda \left( \frac{|q_i|}{\lambda} \right)^{\beta} \qquad (10)$$

The parameter β lies within (0, 1) and controls the strength of the change applied to q_i. Here, β is set to 0.8 and λ = 0.8·mean(|q_i|).
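A minimal sketch of this compression step, assuming the Fattal-style power-law form of Eq. 10 with λ computed from the filtered image as described:

```python
import numpy as np

def dynamic_range_compression(q, beta=0.8):
    """Dynamic range compression of the filtered result (sketch of Eq. 10).

    Assumes the Fattal-style power law s(x) = sign(x) * lam * (|x|/lam)^beta
    with lam = 0.8 * mean(|q|), as the text specifies; beta in (0, 1)
    controls the compression strength.
    """
    lam = 0.8 * np.mean(np.abs(q))
    return np.sign(q) * lam * (np.abs(q) / (lam + 1e-12)) ** beta
```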

Pixel screening: Guided filtering has an edge-preserving smoothing property, but it smooths away internal texture information, which is very disadvantageous for subsequent image analysis and applications. To solve this problem, a simple strategy is to screen each pixel of the output image q by setting a threshold; the operation is illustrated in Fig. 2. A large number of tests show that a threshold of 0.5 (after pixel normalization) is suitable for medical images; in other words, it is an empirical value.

Fig. 2: Flowchart of pixel screening
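Since Fig. 2 is not reproduced here, the sketch below assumes one plausible screening rule: wherever the brighter normalized source pixel exceeds the empirical 0.5 threshold, that source value is restored into the fused image. An end-to-end pipeline composing the earlier sketches is included for orientation; both are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def pixel_screening(fused, i1, i2, threshold=0.5):
    """Illustrative pixel screening (the paper's exact rule is in Fig. 2).

    Assumed rule: wherever the brighter normalized source pixel exceeds
    the empirical threshold of 0.5, restore that source value into the
    fused image so that texture lost to smoothing is recovered.
    """
    src_max = np.maximum(i1, i2)               # pixelwise brighter source
    mask = src_max > threshold                 # pixels after normalization
    out = fused.copy()
    out[mask] = src_max[mask]
    return out

def fuse(i1, i2):
    """End-to-end pipeline composing the sketches above (all assumptions)."""
    p = weighted_fusion(i1, i2)                # preliminary weighted fusion
    q = guided_filter(p, p, size=16, eps=0.01) # guidance and input identical
    q = dynamic_range_compression(q)           # highlight edge information
    return pixel_screening(q, i1, i2)          # restore texture detail
```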

Experimental results and discussion: The proposed fusion scheme is tested on several groups of medical images. The quality of the fused images is assessed both subjectively, by the Human Visual System (HVS), and by an objective evaluation standard.

Experimental results and objective evaluation: Experiments are performed on four groups of 256-level grayscale images; each group contains a pair of medical images from different sources. To show the advantages of the proposed scheme, it is compared with other schemes, including DWT fusion (with DBSS(2, 2) as the wavelet basis), the Laplacian pyramid, the raw guided filtering output and weighted fusion.
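For reference, a minimal DWT fusion baseline of the kind compared against can be sketched with PyWavelets; 'bior2.2' is assumed here as a stand-in for the DBSS(2, 2) basis and the average/max-abs rule is a common default, not necessarily the one used in these experiments.

```python
import numpy as np
import pywt

def dwt_fusion(i1, i2, wavelet="bior2.2", level=3):
    """Baseline DWT fusion sketch ('bior2.2' assumed for DBSS(2, 2)).

    Rule: average the approximation coefficients and keep the detail
    coefficient with the larger magnitude at each position.
    """
    c1 = pywt.wavedec2(i1, wavelet, level=level)
    c2 = pywt.wavedec2(i2, wavelet, level=level)
    fused = [(c1[0] + c2[0]) / 2.0]            # approximation: average
    for d1, d2 in zip(c1[1:], c2[1:]):         # details: max-abs selection
        fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                           for a, b in zip(d1, d2)))
    return pywt.waverec2(fused, wavelet)
```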

Images in group 1 are acquired from the same position in the brain using different devices. Image I1, shown in Fig. 3a, is a CT image that shows bone structures, while image I2, shown in Fig. 3b, is an MR image that shows areas of soft tissue. Figure 3 shows the final fusion results of the different fusion methods. With regard to group 2, shown in Fig. 4, Fig. 4a is a CT image and Fig. 4b is an MR image; the fused images are shown in Fig. 4c-f.

Fig. 3(a-f): The first group of source images and fusion results using different schemes: (a-b) Source images, (c-f) Fused images with different fusion schemes, DWT: Discrete wavelet transform

Fig. 4(a-f): The second group of source images and fusion results using different schemes: (a-b): Source images, (c-f) Fused images with different fusion schemes, DWT: Discrete wavelet transform

Group 3 is shown in Fig. 5 and group 4 in Fig. 6. Figures 5a-b and 6a-b come from the website of the Atlas project, which is made possible in part by the Departments of Radiology and Neurology at Brigham and Women’s Hospital, Harvard Medical School, the Countway Library of Medicine and the American Academy of Neurology [http://www.med.harvard.edu/AANLIB].

To objectively assess the various fusion results, Mutual Information (MI) is exploited as the objective standard for estimating the performance of the different fusion schemes. The goal of fusion is to create a fused image that acquires as much information from each of the source images as possible: the more information obtained from the source images, the better the fusion. MI measures exactly this; in other words, the larger the MI value, the better the fusion result. Table 1 shows the evaluation results.
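The MI index can be computed from joint histograms. The sketch below assumes the common definition for fusion quality, MI = I(F; I1) + I(F; I2), i.e., the sum of the mutual information between the fused image F and each source:

```python
import numpy as np

def mutual_information(x, y, bins=256):
    """Mutual information between two 256-level images via joint histogram."""
    hist, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal of x
    py = pxy.sum(axis=0, keepdims=True)        # marginal of y
    nz = pxy > 0                               # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def fusion_mi(fused, i1, i2):
    """MI fusion index: information shared with each source (assumed sum form)."""
    return mutual_information(fused, i1) + mutual_information(fused, i2)
```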

Fig. 5(a-f): The third group of source images and fusion results using different schemes: (a-b) Source images, (c-f) Fused images with different fusion schemes, DWT: Discrete wavelet transform

Fig. 6(a-f): The fourth group of source images and fusion results using different schemes: (a-b) Source images, (c-f) Fused images with different fusion schemes, DWT: Discrete wavelet transform

Table 1: Mutual information of fused images in different groups
Footnote: DWT: Discrete wavelet transform

Analyses and discussion: Many medical image fusion schemes have been developed in the last decade. DWT-based fusion schemes are probably the most popular (Li et al., 1995; Chipman et al., 1995; Wang et al., 2003; Li et al., 2003; Chen and Di, 2004) and their fusion results outperform those of many other schemes, so DWT-based fusion serves as a natural benchmark. Comparing the proposed scheme with DWT-based fusion therefore demonstrates the effectiveness and robustness of this study.

For the four image sets, the corresponding fusion results are given in Fig. 3-6, respectively. It is easily seen that the weighted fusion method reduces the contrast of features uniquely present in either of the source images. In Fig. 3d, the DWT fusion result is almost an anamorphic image. Although the DWT fusion quality of the latter two groups is improved to a certain extent, only a few significant characteristics are inherited from the source images, which is not enough for subsequent clinical diagnosis. Likewise, the raw guided filter outputs in Fig. 3-6 are also unacceptable: they ignore a lot of texture information and strengthen many false contours. Undoubtedly, from the point of view of visual effect, the best fusion results are obtained by the proposed scheme. The features and detailed information presented in Fig. 3c-6c are much richer than in the other fused images: image contents such as tissues are clearly enhanced, while other useful information such as brain boundaries and shape is almost perfectly preserved.

In addition to the visual analysis, a quantitative analysis is conducted using MI as the evaluation criterion. From Table 1, the MI values of the proposed scheme in groups 1, 2 and 4 are 5.1273, 3.3597 and 3.7059, respectively, the largest among all schemes. Although in group 3 the raw guided filter output scores slightly better numerically, it contains some false contours and ignores the texture and structure information of the soft tissues. Comparatively speaking, the proposed scheme stands out in both the subjective and objective evaluations.

CONCLUSION

With the development of medical imaging technology, more and more medical images are available, but a single-modality image cannot meet the needs of clinical applications. Medical image fusion therefore plays an important role in clinical disease diagnosis. The proposed medical image fusion scheme accounts for both the edge and the texture information in the fused image. The study shows that it is effective and suitable for solving the multimodal medical image fusion problem.

ACKNOWLEDGMENT

This study was supported by the China Postdoctoral Science Foundation (20100471665).

REFERENCES

  • Choi, J., K. Yu and Y. Kim, 2011. A new adaptive component-substitution-based satellite image fusion by using partial replacement. IEEE Trans. Geosci. Remote Sens., 49: 295-309.

  • Luo, B., M.M. Khan, T. Bienvenu, J. Chanussot and L. Zhang, 2013. Decision-based fusion for pansharpening of remote sensing images. IEEE Geosci. Remote Sens. Lett., 10: 19-23.

  • Li, S., H. Yin and L. Fang, 2012. Group-sparse representation with dictionary learning for medical image denoising and fusion. IEEE Trans. Biomed. Eng., 59: 3450-3459.

  • Porter, B.C., D.J. Rubens, J.G. Strang, J. Smith, S. Totterman and K.J. Parker, 2001. Three-dimensional registration and fusion of ultrasound and MRI using major vessels as fiducial markers. IEEE Trans. Med. Imaging, 20: 354-359.

  • Wang, Z. and Y. Ma, 2008. Medical image fusion using m-PCNN. Inform. Fusion, 9: 176-185.

  • Seng, C.H., A. Bouzerdoum, M.G. Amin and S.L. Phung, 2013. Two-stage fuzzy fusion with applications to through-the-wall radar imaging. IEEE Geosci. Remote Sens. Lett., 10: 687-691.

  • Liang, J., Y. He, D. Liu and X. Zeng, 2012. Image fusion using higher order singular value decomposition. IEEE Trans. Image Process., 21: 2898-2909.

  • Tang, J., 2004. A contrast based image fusion technique in the DCT domain. Digit. Signal Process., 14: 218-226.

  • Rahman, S.M.M., M.O. Ahmad and M.N.S. Swamy, 2010. Contrast-based fusion of noisy images using discrete wavelet transform. IET Image Process., 4: 374-384.

  • Piella, G., 2003. A general framework for multiresolution image fusion: From pixels to regions. Inform. Fusion, 4: 259-280.

  • Hamza, A.B., Y. He, H. Krim and A. Willsky, 2005. A multiscale approach to pixel-level image fusion. Integr. Computer-Aided Eng., 12: 135-146.

  • Wan, T., N. Canagarajah and A. Achim, 2009. Segmentation-driven image fusion based on alpha-stable modeling of wavelet coefficients. IEEE Trans. Multimedia, 11: 624-633.

  • Li, S. and B. Yang, 2008. Multifocus image fusion by combining curvelet and wavelet transform. Pattern Recognit. Lett., 29: 1295-1301.

  • Zhang, Q. and B.L. Guo, 2009. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process., 89: 1334-1346.

  • Toet, A., 1990. Hierarchical image fusion. Mach. Vision Appl., 3: 1-11.

  • Yang, B. and S. Li, 2010. Multifocus image fusion and restoration with sparse representation. IEEE Trans. Instrum. Meas., 59: 884-892.

  • He, K., J. Sun and X. Tang, 2010. Guided image filtering. Proceedings of the 11th European Conference on Computer Vision: Part I, September 5-11, 2010, Heraklion, Crete, Greece, pp: 1-14.


  • Wang, C., Q. Yang, X. Tang and Z. Ye, 2006. Salience preserving image fusion with dynamic range compression. Proceedings of the IEEE International Conference on Image Processing, October 8-11, 2006, Atlanta, GA., pp: 989-992.


  • Fattal, R., D. Lischinski and M. Werman, 2002. Gradient domain high dynamic range compression. Proceedings of the ACM SIGGRAPH, July 2002, New York, pp: 249-256.


  • Li, H., B.S. Manjunath and S.K. Mitra, 1995. Multisensor image fusion using the wavelet transform. Graphic. Models Image Process., 57: 235-245.

  • Chipman, L.J., T.M. Orr and L.N. Graham, 1995. Wavelets and image fusion. Proceedings of the International Conference on Image Processing, Volume 3, October 23-26, 1995, Washington, DC., USA., pp: 248-251.


  • Wang, W.W., P.L. Shui and G.X. Song, 2003. Multifocus image fusion in wavelet domain. Proceedings of the International Conference on Machine Learning and Cybernetics, Volume 5, November 2-5, 2003, China, pp: 2887-2890.


  • Li, Z., Z. Jing, G. Liu, S. Sun and H. Leung, 2003. Pixel visibility based multifocus image fusion. Proceedings of the International Conference on Neural Networks and Signal Processing, Volume 2, December 14-17, 2003, Nanjing, China, pp: 1050-1053.


  • Chen, M. and H. Di, 2004. Study on optimal wavelet decomposition level for multi-focus image fusion. Opto Electron. Eng., 3: 64-67.
