Data fusion for images combines two or more images into a single image. The aim of such a fusion is to extract the perceptually important features from all the original images and combine them so that every key feature of each input image remains perceivable in the fused result. Fusion of two or more images is often required for images of the same scene or objects captured using different imaging modalities or camera settings. Important applications of image fusion include medical imaging, microscopic imaging, remote sensing, computer vision and robotics.
Medical image fusion can be defined as the process by which several medical images, or some of their features, are combined into a single image for better diagnosis and thereby better treatment planning. Because details from different modalities are brought into a single image, the fused image contains more comprehensive, more accurate and more stable information. With multimodal tissue information available in a single image, clinicians are provided with the salient features of each medical imaging modality while the individual limitations of each modality are eliminated.
For medical diagnosis, Computed Tomography (CT) measures the attenuation of X-rays on their way through the body to reconstruct a two-dimensional image of the absorption coefficient within an axial slice. CT shows highly detailed anatomical information on the distribution of the absorption coefficient, with high contrast in bone but little in soft tissue. Magnetic Resonance Imaging (MRI) uses the interaction of nuclear spins with a magnetic field and resonance phenomena to image the tissues of the human body with finer soft-tissue detail, but MRI does not show bony structures clearly. Positron Emission Tomography (PET) shows physiological processes but little anatomical information. Each medical imaging modality thus provides images with complementary information, and a local integration of this complementary information provides more information for accurate diagnosis.
Storing the individual single-modality medical images also doubles the memory requirement. Storing fused images instead provides an effective way of reducing memory and the total amount of information presented, without loss of image quality or content.
The fused image should preserve, as closely as possible, all relevant information contained in the input images, and the fusion process should not introduce any artifacts or inconsistencies that could distract or mislead the medical professional and thereby lead to a wrong diagnosis.
For fusion to be meaningful, the respective anatomical structures must be matched against each other; image registration is therefore a fundamental prerequisite of image fusion, and the multimodality medical images must be registered accurately before fusion. Registration is a one-to-one mapping function that aligns corresponding physical image points so that their information can be combined. Improper registration adds artifacts to the fused image and undermines the value of fusion, so only registered images are considered for fusion. Several image fusion methods have been introduced in the literature, including simple pixel-by-pixel averaging using SNR (Chungli, 1999), the Laplacian pyramid method (Burt and Adelson, 1983), conditional probability networks (Kiverri and Cacee, 1998) and neural network methods (Aguilar and Garret, 2001).
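As a concrete illustration of such a one-to-one mapping, the sketch below fits an affine registration from control-point pairs by least squares. The point coordinates and the affine model are assumptions for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical control points: dst is src under a small rotation plus a
# translation, mimicking landmarks clicked on the two modality images.
src = np.array([[10.0, 10.0], [200.0, 30.0], [50.0, 180.0], [150.0, 150.0]])
theta = 0.02
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
dst = src @ R.T + np.array([3.0, -2.0])

# Least-squares affine fit: [x  y  1] @ P ~ [x'  y']
A = np.hstack([src, np.ones((len(src), 1))])
P, *_ = np.linalg.lstsq(A, dst, rcond=None)   # 3x2 affine parameter matrix

mapped = A @ P   # src points mapped into the dst coordinate frame
```

With four well-spread landmarks and a consistent affine motion the fit is exact; with noisy clinical landmarks the same call returns the least-squares optimum.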
Wavelet transform: In this study, the 2D discrete wavelet transform is applied to the registered input medical images. The transform produces coefficients containing high-pass and low-pass details, and the fusion output is produced by applying fusion rules. The fusion rules used are the averaging of the coefficients and the comparison of the coefficients of the wavelet transform.
In this study a multi-resolution transform (MRT) is used. In transform-domain fusion, a transform is applied to the registered images to identify the vital details in each image; the technique is described in Fig. 1. A fusion rule is applied over the transform coefficients to obtain a fusion decision map, and applying the inverse transform to the decision map reconstructs the fused image, which contains details of both source images.
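The transform-fuse-invert skeleton of Fig. 1 can be sketched as follows. A one-level 2D Haar transform stands in for the multi-resolution transform here; that choice is an illustrative assumption, and any invertible MRT fits the same skeleton.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform - a minimal stand-in for the MRT stage."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-pair averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-pair differences
    return ((a[:, 0::2] + a[:, 1::2]) / 2.0,  # cA: approximation
            (a[:, 0::2] - a[:, 1::2]) / 2.0,  # cH: horizontal detail
            (d[:, 0::2] + d[:, 1::2]) / 2.0,  # cV: vertical detail
            (d[:, 0::2] - d[:, 1::2]) / 2.0)  # cD: diagonal detail

def haar_idwt2(cA, cH, cV, cD):
    """Exact inverse of haar_dwt2."""
    h, w = cA.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2], a[:, 1::2] = cA + cH, cA - cH
    d[:, 0::2], d[:, 1::2] = cV + cD, cV - cD
    img = np.empty((2 * h, 2 * w))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse(img1, img2, rule):
    """Transform both registered images, merge subbands with `rule`, invert."""
    merged = [rule(c1, c2) for c1, c2 in zip(haar_dwt2(img1), haar_dwt2(img2))]
    return haar_idwt2(*merged)
```

Each fusion rule discussed below is then just a different `rule` callable applied subband by subband.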
There are other simple fusion methods, such as simple addition or pixel averaging of registered images, which do not guarantee a proper fused output. Transform-domain operations, on the other hand, have several advantages over these methods, including energy compaction, a larger SNR, the collection of characteristic features and easy manipulation, since the transform coefficients are representative of all the pixels of the image.
For 2D images we applied the wavelet transform and the curvelet transform. The wavelet transform exhibits time-frequency localization and yields acceptable results.
|| Block diagram of transform domain image fusion
The curvelet transform represents edges and singularities well. The drawback of the wavelet transform is its limited directionality: wavelets handle point singularities well in one-dimensional signals, but two-dimensional signals such as images contain curve or line singularities that wavelets fail to approximate. Wavelets in two dimensions are good at isolating edges but do not capture the smoothness along the edges; the discontinuities are spatially distributed, and wavelets are not suited to their sparse geometry. A more powerful representation is therefore needed in two dimensions, and curvelets are introduced to achieve proper image representation.
Fusion rules for wavelet transform: The wavelet coefficients produced by the discrete wavelet transform are processed using a fusion rule. Varying the fusion rule leads to different fusion methods. The techniques developed and implemented for image fusion using the wavelet transform are as follows:
Fusion by average method: In this method, the wavelet coefficients of the multimodal images are computed and the fusion is formed by pixel-wise averaging of the corresponding transformed images.
Fusion by weighted average method: The weighted average method is similar to the average method, but the wavelet coefficients are weighted. The wavelet coefficients of the multimodal images are computed and fusion is performed by pixel-wise weighted averaging of the constituent images. For instance, the approximation coefficients of both input images are added and the resulting sum is multiplied by a weight such as 0.2; these steps are repeated for all other corresponding wavelet coefficients. The weighted average method was tried with various weights, and the fused output varies depending on the weight chosen.
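A minimal sketch of the weighted-average rule on one pair of approximation subbands; the 2x2 coefficient values are made up for illustration, while the weight 0.2 is the one quoted in the text.

```python
import numpy as np

# Hypothetical approximation subbands from the two source images
cA1 = np.array([[4.0, 8.0], [2.0, 6.0]])
cA2 = np.array([[6.0, 2.0], [4.0, 2.0]])

w = 0.2                        # weight from the text
fused_cA = w * (cA1 + cA2)     # repeated for every corresponding subband
```

Note that with w = 0.5 this reduces to the plain average method of the previous paragraph.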
Fusion by comparison of wavelet coefficients: In this method, the fusion rule compares the wavelet coefficients of corresponding parts of the transformed images and extracts only the larger coefficients. The larger coefficient denotes the maximum information in each of the four decomposed subbands of the wavelet-transformed image.
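The comparison rule amounts to keeping, position by position, whichever coefficient has the larger magnitude. A sketch with hypothetical 2x2 coefficient blocks:

```python
import numpy as np

# Hypothetical wavelet coefficients of corresponding subbands
c1 = np.array([[3.0, -9.0], [1.0, 4.0]])
c2 = np.array([[-5.0, 2.0], [0.0, -4.0]])

# Select the coefficient with the larger absolute value at each position
fused = np.where(np.abs(c1) >= np.abs(c2), c1, c2)
```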
Region based fusion method: In this method, first- and second-level wavelet decompositions of the input images are performed. A region is selected by thresholding (T > 0.5) the wavelet coefficients at both decomposition levels, and the larger coefficients are extracted by comparing the thresholded wavelet coefficients of the corresponding levels. Applying the inverse wavelet transform to the extracted wavelet coefficients forms the resultant fused image.
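A sketch of the region-based selection at a single decomposition level. Two details are assumptions, since the text does not specify them: the threshold T > 0.5 is taken to apply to magnitude-normalised coefficients, and coefficients outside the salient region are averaged.

```python
import numpy as np

def select_region(c1, c2, T=0.5):
    """Region-based coefficient selection at one decomposition level.

    Assumptions (not stated in the paper): T applies to magnitude-normalised
    coefficients, and the out-of-region fallback is plain averaging.
    """
    n1 = np.abs(c1) / max(np.abs(c1).max(), 1e-12)
    n2 = np.abs(c2) / max(np.abs(c2).max(), 1e-12)
    region = (n1 > T) | (n2 > T)                         # salient-region mask
    larger = np.where(np.abs(c1) >= np.abs(c2), c1, c2)  # max-|.| selection
    return np.where(region, larger, (c1 + c2) / 2.0)
```

The same function is applied at both the first and second decomposition levels before the inverse transform.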
Discrete curvelet transform: The curvelet transform, proposed by Donoho and Candes (1999), is a multiscale transform like the wavelet transform, with frame elements indexed by scale and location parameters. Unlike the wavelet transform, it also has directional parameters, and the curvelet pyramid contains elements with a very high degree of directional specificity. In addition, the curvelet transform is based on an anisotropic (parabolic) scaling principle, quite different from the isotropic, non-parabolic scaling of wavelets.
Curvelets are a new multiscale system whose basis elements exhibit high directional sensitivity and are highly anisotropic. Curvelets are localized along curves in two dimensions; at finer scales the curves can be approximated by straight edges, which are efficiently represented using ridgelets. The flow graph of the curvelet transform is shown in Fig. 2. The curvelet transform opens the possibility of analyzing an image with different block sizes but with a single transform: the idea is to first decompose the image into a set of wavelet bands and then analyze each band with a built-in ridgelet transform, where the block size can be changed at each scale.
|| Curvelet transform flow graph (Starck et al., 2002)
Curvelet analysis: The procedural definition of the transform is as follows (Donoho and Candes, 1999).
Subband decomposition: The object f is filtered into subbands by a bank of filters (P0, (Δs, s ≥ 0)), so that f ↦ (P0 f, Δ1 f, Δ2 f, ...). The subband Δs f contains details at scales of about 2^(-2s).
Smooth partitioning: WQ(x1, x2) is a smooth window function localized around the dyadic square Q. Windowing is applied to each subband, hQ = WQ · Δs f, where the translation (k1, k2) varies and the scale s is fixed.
Renormalization: Each windowed square (WQ Δs f), Q ∈ Qs, is renormalized to the unit square, giving gQ.
Ridgelet analysis: The ridgelet coefficients are αμ = <gQ, ρλ>, where gQ is the renormalized square and ρλ is a ridgelet basis element.
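The four stages can be summarised symbolically; this is a sketch following the notation of Donoho and Candes (1999), where T_Q denotes the renormalization operator for the dyadic square Q = Q(s, k1, k2).

```latex
% Sketch of the four curvelet stages (notation after Donoho and Candes, 1999)
f \;\longmapsto\; \big(P_0 f,\; \Delta_1 f,\; \Delta_2 f,\; \ldots\big)
  \quad\text{(subband decomposition)}
h_Q \;=\; W_Q \cdot \Delta_s f, \quad Q \in \mathcal{Q}_s
  \quad\text{(smooth partitioning)}
g_Q \;=\; (T_Q)^{-1} h_Q
  \quad\text{(renormalization)}
\alpha_\mu \;=\; \langle g_Q,\; \rho_\lambda \rangle
  \quad\text{(ridgelet analysis)}
```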
Proposed algorithm for curvelet based image fusion for 2D images: The procedure for the proposed curvelet-based image fusion is as follows, with the fusion of a CT image and an MRI image as an example to illustrate the method.
1. The CT image and MRI image are registered geometrically using control points and resized to a 256 x 256 matrix dimension.
2. The CT image is decomposed into S1, S2, S3 subbands by applying the a trous filtering algorithm through wavelet bandpass filtering using Eq (1).
3. The decomposed image includes CJ (the S1 subband), which is the coarse or lowpass version of the image, and WJ, which represents the detail or highpass content of the image.
4. The detail content of the image in subbands S2 and S3 is partitioned into blocks, while CJ is kept as it is.
5. The ridgelet transform is applied over each block of the partitioned images to obtain ridgelet coefficients for bands S2 and S3.
6. Steps 2-5 are repeated for the MRI image separately to obtain the MRI ridgelet coefficients at subbands S2 and S3.
7. A fusion rule is applied over the ridgelet transform coefficients. Two fusion rules are implemented in this work, namely addition of coefficients and maximum absolute value of coefficients.
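Both fusion rules reduce to element-wise operations on corresponding ridgelet coefficient blocks; a sketch with hypothetical 2x2 blocks follows.

```python
import numpy as np

# Hypothetical ridgelet coefficient blocks from the CT and MRI pyramids
r_ct  = np.array([[2.0, -7.0], [0.0, 5.0]])
r_mri = np.array([[-3.0, 4.0], [6.0, -5.0]])

fused_add = r_ct + r_mri                                          # addition of coefficients
fused_max = np.where(np.abs(r_ct) >= np.abs(r_mri), r_ct, r_mri)  # maximum absolute value
```

The inverse curvelet transform of the fused coefficients then yields the fused image.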
RESULTS AND DISCUSSION
The wavelet transform and the curvelet transform are applied to the two base images, and the above-mentioned fusion rules are applied to the resulting coefficients. Taking the inverse transform gives the fused results shown in Fig. 3a-5b. The figures show the results of the various rules.
The same procedure is followed for another pair of registered images of a brain slice (Fig. 6a and b), and the results after applying the mentioned fusion rules are shown in Fig. 7 and 8.
||Image set 1: Result of fusion rule 1 - Maximum of absolute value of coefficient (a) Wavelet fusion (b) Curvelet fusion
||Image set 1: Result of fusion rule 2 - Addition of coefficients (a) Wavelet fusion (b) Curvelet fusion
||Image set 2: Result of fusion rule 1 - Maximum of absolute value of coefficient (a) Wavelet fusion (b) Curvelet fusion
||Image set 2: Result of fusion rule 2 - Addition of coefficients (a) Wavelet fusion (b) Curvelet fusion
In many important imaging applications, images exhibit edges and discontinuities across curves; in biological imagery this occurs wherever two organs or tissue structures meet. In image fusion especially, edge preservation is important for capturing the complementary details of the input images. Since edge representation in curvelets is better, curvelet-based image fusion is best suited for 2D medical images compared with other transform techniques.
For 3D images, the inner details provide better visualization, and the fusion technique can be applied to produce more information in all directions.