Research Article
 
Image Thresholding Using Weighted Parzen-Window Estimation



Wang Jun and Wang Shitong
 
ABSTRACT

A novel image thresholding method based on weighted Parzen-window estimation is proposed in this paper. A hierarchical clustering procedure is first performed to obtain the reference pixels and weights before the weighted Parzen-window procedure is used to estimate the corresponding probabilities. The error produced during the generation of reference pixels is controlled by an upper error bound. Using the proposed criterion function, the optimal threshold is computed. Compared with the image thresholding method based on standard Parzen-window estimation, the experimental results show that the proposed method effectively reduces the computational burden and storage requirements without noticeably degrading the final segmentation results.

  How to cite this article:

Wang Jun and Wang Shitong, 2008. Image Thresholding Using Weighted Parzen-Window Estimation. Journal of Applied Sciences, 8: 772-779.

DOI: 10.3923/jas.2008.772.779

URL: https://scialert.net/abstract/?doi=jas.2008.772.779

INTRODUCTION

The problem of image segmentation poses great challenges for the pattern recognition and image processing communities. Image thresholding is the simplest technique for image segmentation; its primary task is to find the optimal threshold that separates objects from the background well.

Owing to its simplicity and ease of implementation, global thresholding has remained a popular technique over the past years. Several successful thresholding approaches, such as Otsu's method, histogram-based methods and entropy-based methods, have become popular (Sezgin and Sankur, 2004; Zhang, 2001; de Albuquerque et al., 2004; Sarkar and Soundararajan, 2000; Yang and Yang, 2004). Considering the analogy between clustering and image segmentation, some clustering procedures based on pdf estimation have been successfully applied to image segmentation. Recently, a novel thresholding method based on Parzen-window estimation (i.e., PWT) was proposed by Wang et al. (2008). This method effectively integrates spatial information on pixels of different gray levels into the pdf estimation process, which is also the basis of our work here.

In this study, the concept of weighted Parzen-window estimation is presented and a novel image thresholding method based on this concept is proposed accordingly. By combining several pixels that are close to each other, a reference pixel is generated and then used in the weighted Parzen-window estimation procedure. In this way, the number of samples used in Parzen-window estimation decreases greatly, which further reduces the computational burden and storage requirements of image thresholding.

IMAGE THRESHOLDING USING PARZEN-WINDOW ESTIMATION

With its excellent performance and solid theoretical foundation, Parzen-window estimation is a well-known non-parametric approach to probability density estimation. Here, we restate the thresholding algorithm based on the Parzen-window technique in Wang et al. (2008) as follows:

Assuming f(x, y) is the gray level of the pixel at location (x, y) in the image, an m×n image with L gray levels can be expressed as {f(x, y) | x ∈ {1, 2, ..., m}, y ∈ {1, 2, ..., n}}. Let ωi = {(x, y) | f(x, y) = i, x ∈ {1, 2, ..., m}, y ∈ {1, 2, ..., n}}, i ∈ {1, 2, ..., L} and let Ci denote the number of pixels in ωi, i = 1, 2, ..., L. With N = m×n, we then have N = C1 + C2 + ... + CL.

Obviously, ωi is defined in a two-dimensional space. According to Parzen-window estimation, for the point set ωi, the probability density function p(x, y, ωi) can be approximated using the following formulas:

p(x, y \mid \omega_i) = \frac{1}{C_i} \sum_{j=1}^{C_i} \frac{1}{V_{C_i}}\, \varphi\!\left(\frac{x - x_j}{h_{C_i}}, \frac{y - y_j}{h_{C_i}}\right) \quad (1)

p(x, y, \omega_i) = p(x, y \mid \omega_i)\, p(\omega_i) \quad (2)

Where:
h_{C_i} = Window width of the ith gray level set
(x_j, y_j) = The jth pixel in ω_i
V_{C_i} = The volume of the corresponding hypercube with h_{C_i} as the length of each edge, i.e., V_{C_i} = h_{C_i}^2 for an image and

h_{C_i} = h_1 / \sqrt{C_i}

is a good choice; p(ω_i) can be approximated by C_i/N, 1 ≤ i ≤ L and φ(·) is a kernel function.
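As an illustration, the per-gray-level pixel sets ω_i and counts C_i can be built directly from the image array. This is a minimal sketch; it assumes 0-based indexing and gray levels stored as integers in 0..L-1, whereas the text above indexes pixels from 1.

```python
import numpy as np

# Build the per-gray-level pixel sets w_i and counts C_i for an image,
# assuming gray levels 0..L-1 stored in a 2-D integer array.
def gray_level_sets(img, L=256):
    sets = {i: [] for i in range(L)}
    for (x, y), g in np.ndenumerate(img):
        sets[int(g)].append((x, y))
    counts = {i: len(sets[i]) for i in range(L)}
    return sets, counts
```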

Because pixels with the same gray level are continuously distributed in images, a pixel's probability of having gray level i is inversely proportional to its Euclidean distance to other pixels whose gray level is i. The following Gaussian function for φ(·) reflects this fact well:

\varphi(x, y) = \frac{1}{2\pi} \exp\!\left(-\frac{x^2 + y^2}{2}\right) \quad (3)
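The class-conditional estimate of Eqs. 1-3 can be sketched in a few lines of NumPy. This is a sketch, not the paper's implementation: the window-width choice h_{C_i} = h_1/√C_i and the default value of h_1 are assumptions.

```python
import numpy as np

# Sketch of the class-conditional Parzen estimate (Eqs. 1-3), assuming a
# Gaussian kernel and the classical window-width choice h = h1 / sqrt(C_i).
def parzen_density(x, y, pixels, h1=10.0):
    """Estimate p(x, y | w_i) from the C_i pixel coordinates of one gray level."""
    pts = np.asarray(pixels, dtype=float)     # shape (C_i, 2)
    c_i = len(pts)
    h = h1 / np.sqrt(c_i)                     # window width for this class
    v = h * h                                 # "volume" of the 2-D window
    d2 = ((x - pts[:, 0]) ** 2 + (y - pts[:, 1]) ** 2) / h ** 2
    phi = np.exp(-d2 / 2.0) / (2.0 * np.pi)   # Gaussian kernel, Eq. 3
    return phi.sum() / (c_i * v)              # Eq. 1
```

The estimate is high near the cluster of same-gray-level pixels and decays with distance, which is exactly the spatial behaviour the text describes.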

Let p(x, y) be the probability density of pixel (x, y) belonging to the background class and r(x, y) be the probability density of pixel (x, y) belonging to the object class. We have:

p(x, y) = \sum_{i=1}^{t} p(x, y, \omega_i) \quad (4)


r(x, y) = \sum_{i=t+1}^{L} p(x, y, \omega_i) \quad (5)

According to the concept of entropy in information theory, the entropy of pixel (x, y) could be computed as follows:

H(x, y) = -p(x, y)\,\ln p(x, y) - r(x, y)\,\ln r(x, y) \quad (6)

Obviously, H(x, y) is maximized when p(x, y) = r(x, y). This implies that the smaller |p(x, y) − r(x, y)| is, the larger H(x, y) becomes. Therefore, in order to make class p and class r well separated, the following criterion function was used:

J(t) = \iint_{\Omega} \left(p(x, y) - r(x, y)\right)^2 dx\, dy \quad (7)

Where, Ω = [1, m] × [1, n].
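The claim that H(x, y) peaks at p = r can be checked directly. Assuming the entropy takes the form H = −p ln p − r ln r with the sum s = p + r fixed at a given pixel (the exact form of the original equation (6) is not reproduced here):

```latex
% With s = p + r fixed at a pixel, write H as a function of p alone:
H(p) = -p\ln p - (s - p)\ln(s - p)
% Stationary point:
\frac{dH}{dp} = -\ln p + \ln(s - p) = 0 \;\Rightarrow\; p = s - p \;\Rightarrow\; p = r
% Second derivative is negative, so this is a maximum:
\frac{d^2H}{dp^2} = -\frac{1}{p} - \frac{1}{s - p} < 0
```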

The Parzen-window estimates of p(x, y) and r(x, y) reflect the fact that a pixel's probability of having gray level i is determined by its distance to other pixels with the same gray level i. However, in order to compute p(x, y) and r(x, y) for a pixel at (x, y), all the other pixels in the image must be considered, which imposes a very heavy computational burden.

WEIGHTED PARZEN-WINDOW METHOD FOR IMAGE THRESHOLDING

In order to accelerate the PWT procedure and reduce its computational burden, we derive an enhanced version, i.e., image thresholding based on weighted Parzen-window estimation (WPWT). In the proposed method, a training procedure is performed to obtain a set of reference pixels and weights at each gray level, which undoubtedly introduces errors into the computation of topt. However, by controlling the upper bound of these errors we can keep the thresholding results acceptable.

The set of the reference pixels at gray level i is denoted as R_i = \{(x_{ij}^{(r)}, y_{ij}^{(r)}) \mid j = 1, 2, \ldots, m_i\}.

Accordingly, the set of the weights for the reference pixels at gray level i is denoted as W_i = \{w_{ij} \mid j = 1, 2, \ldots, m_i\}.

So, as an approximation of (1), p(x, y | ωi) can be computed as:

p(x, y \mid \omega_i) \approx \frac{1}{C_i} \sum_{j=1}^{m_i} \frac{w_{ij}}{V_{C_i}}\, \varphi\!\left(\frac{x - x_{ij}^{(r)}}{h_{C_i}}, \frac{y - y_{ij}^{(r)}}{h_{C_i}}\right) \quad (8)

Where:
wij = Weight of the jth reference pixel at gray level i, which is equal to the number of the original training samples that were collapsed into (xij(r), yij(r))
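The weighted estimate of Eq. 8 only changes the sum: each reference pixel contributes w_ij copies of the kernel, and the weights sum back to C_i. A minimal NumPy sketch, with the same assumed window-width choice h_1/√C_i as in the unweighted case:

```python
import numpy as np

# Sketch of the weighted estimate (Eq. 8): m_i reference pixels and their
# weights stand in for the original C_i samples. h1 and the window-width
# rule h = h1 / sqrt(C_i) are assumptions, not the paper's exact choice.
def weighted_parzen_density(x, y, refs, weights, h1=10.0):
    refs = np.asarray(refs, dtype=float)
    w = np.asarray(weights, dtype=float)
    c_i = w.sum()                              # weights sum to the pixel count C_i
    h = h1 / np.sqrt(c_i)
    v = h * h
    d2 = ((x - refs[:, 0]) ** 2 + (y - refs[:, 1]) ** 2) / h ** 2
    phi = np.exp(-d2 / 2.0) / (2.0 * np.pi)
    return (w * phi).sum() / (c_i * v)
```

A reference pixel with weight 2 contributes exactly as much as two coincident unweighted samples, which is the sense in which Eq. 8 approximates Eq. 1.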

The training procedure for a single gray level: In order to find a set of appropriate reference pixels for effective weighted Parzen-window estimation, we derive a training procedure based on the idea of hierarchical clustering. The procedure is stated as pseudo-code (Fig. 1), with the following notation:

n : Number of the pixels at each gray level
m : Number of the reference pixels at each gray level
Ri : Vector of the reference pixels at gray level i
rij : The jth reference pixel of gray level i
Wi : Vector of weights for the reference pixels at gray level i
wij : Weight of the jth reference pixel at gray level i
DM : Distance matrix of the reference pixels at each gray level

For the above training procedure, we give more details as follows.

This procedure initializes Ri with the pixels of gray level i in the image. Then Ri is updated by combining two nearest pixels until the error e exceeds emax or only one reference pixel remains.

The sub-routine getNearestPair() returns the two nearest reference pixels r1 and r2 and stores their weights in w1 and w2, respectively. Then r1 and r2 are merged in the sub-routine mergePair().

The sub-routine mergePair() involves the following key issues. First, the merged reference pixel

r_0 = \frac{w_1 r_1 + w_2 r_2}{w_1 + w_2}, \qquad w_0 = w_1 + w_2

is inserted into Ri and its weight w0 into Wi. Meanwhile, r1 and r2 are removed from Ri and w1 and w2 from Wi. Then, m is decreased by 1. Finally, the distance matrix DM is updated by calculating the distance from the newly generated reference pixel r0 to every other reference pixel.

According to the newly generated reference pixel r0 and weight w0, Pm is updated in UpdatePm() and the error e is recalculated in the sub-routine ComputeError().

The temporary vectors R0, W0 and variable m0 are used to back up the previous values of Ri, Wi and m. As can be seen from the pseudo-code, they take the last values of Ri, Wi and m updated in the sub-routine mergePair(). Ri, Wi and m are restored from R0, W0 and m0 if e exceeds emax.
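The merging loop above can be sketched as follows. This is a hypothetical reconstruction under stated assumptions: reference pixels merge to their weighted centroid with summed weights, and `error_fn` is a caller-supplied stand-in for ComputeError(), whose exact formula appears only in the original Fig. 1.

```python
import numpy as np

# Sketch of the single-gray-level training loop: repeatedly merge the two
# nearest reference pixels until a merge would push the error past e_max
# or only one reference pixel remains.
def train_reference_pixels(pixels, e_max, error_fn):
    refs = [np.asarray(p, dtype=float) for p in pixels]
    weights = [1] * len(refs)                  # each pixel starts with weight 1
    while len(refs) > 1:
        # getNearestPair(): indices of the closest pair of reference pixels
        i, j = min(((a, b) for a in range(len(refs)) for b in range(a + 1, len(refs))),
                   key=lambda ab: np.linalg.norm(refs[ab[0]] - refs[ab[1]]))
        # mergePair(): weighted centroid r0 with weight w0 = w1 + w2 (assumption)
        w0 = weights[i] + weights[j]
        r0 = (weights[i] * refs[i] + weights[j] * refs[j]) / w0
        new_refs = [r for k, r in enumerate(refs) if k not in (i, j)] + [r0]
        new_w = [w for k, w in enumerate(weights) if k not in (i, j)] + [w0]
        # ComputeError(): keep the backed-up state if the merge exceeds e_max
        if error_fn(new_refs, new_w) > e_max:
            break
        refs, weights = new_refs, new_w
    return refs, weights
```

Not adopting the candidate lists when the error bound is exceeded plays the role of restoring Ri, Wi and m from R0, W0 and m0.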

The proposed algorithm WPWT: To derive the new version of p (x, y) using weighted Parzen-window estimation, the following Gaussian kernel function is introduced:

G(\mathbf{x}, \sigma) = \frac{1}{2\pi\sigma^2} \exp\!\left(-\frac{\|\mathbf{x}\|^2}{2\sigma^2}\right) \quad (9)

For the Gaussian kernel function, the following result on the convolution of two Gaussians is used (Torkkola, 2003):

\int G(\mathbf{x} - \mathbf{a}, \sigma_1)\, G(\mathbf{x} - \mathbf{b}, \sigma_2)\, d\mathbf{x} = G\!\left(\mathbf{a} - \mathbf{b}, \sqrt{\sigma_1^2 + \sigma_2^2}\right) \quad (10)

Thus with the obtained reference pixels and weights, we can have:

(11)

Furthermore, we have:

(12)

Similarly, we have:

(13)

(14)

For ease of computation, let:

(15)

We have:

(16)

(17)

(18)

Due to the symmetry of the elements in M, we only show the upper triangular part in Fig. 2. As can be readily seen, for a given t, (16), (17) and (18) correspond to the sums of the elements in areas A, B and C, respectively.

Thus, topt can be computed as:

t_{opt} = \arg\max_{1 \le t < L} J(t) \quad (19)

Based on the above, we summarize the novel image thresholding method WPWT as follows:

Construct the histogram for an N = m×n image with L gray levels.
For each gray level, compute the reference pixels and weights using the training algorithm in Fig. 1.
Construct the matrix M using (15).
For each candidate threshold t, compute the corresponding criterion value and find topt.
Segment the image with topt and output the segmented image accordingly.
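The threshold search in the last two steps reduces to block sums over M. The sketch below assumes the three area sums combine as J(t) = A + B − 2C, which is consistent with a squared-difference criterion; the exact combination in Eqs. 16-18 is given in the original figures.

```python
import numpy as np

def criterion_from_M(M, t):
    # Sums over areas A (background-background), B (object-object) and
    # C (cross terms) of the symmetric matrix M. The combination below
    # assumes the squared-difference criterion J(t) = A + B - 2C.
    A = M[:t, :t].sum()
    B = M[t:, t:].sum()
    C = M[:t, t:].sum()
    return A + B - 2.0 * C

def best_threshold(M):
    # Scan all candidate thresholds and keep the maximizer (Eq. 19).
    L = M.shape[0]
    return max(range(1, L), key=lambda t: criterion_from_M(M, t))
```

On a matrix with strong within-class entries and weak cross-class entries, the maximizer falls on the class boundary, as expected.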

Fig. 1: The training procedure

Fig. 2: Upper triangular matrix of M

RESULTS

Experiments were conducted with several images provided by MATLAB, which have been widely used in the literature. We implemented the proposed algorithm WPWT in MATLAB 7 and ran it on a computer with a P4 2.66 GHz CPU and 512 MB of memory. In order to observe how emax affects the performance of our algorithm, we used different emax values varying from 0.1 to 0.4 with a step of 0.1.

The first experiment shows how different emax values affect the number of generated reference pixels (Table 1).

Fig. 3: Segmentation results of WPWT and PWT in Wang et al. (2008)

Fig. 4: Segmentation results on noisy images

Table 1: The number of the reference pixels obtained by the training procedure


Table 2: Running time of WPWT and PWT (seconds)

The second experiment shows the influence of different emax values on the running speed of the algorithm as well as on the visual quality of the thresholding results. The complexity of both PWT and our proposed method is O(n²/L), where n is the total number of pixels in the image and L is the number of gray levels. In our proposed method, the effective number of samples is decreased greatly by controlling emax, so that the performance of the algorithm is enhanced. For comparison with PWT in Wang et al. (2008), Table 2 shows the running time of both WPWT and PWT with different emax values.

An appropriate emax is vital to the running speed of the weighted Parzen-window estimation. According to our experiments, emax = 0.3 is a good choice for many images, balancing the speed-up of the proposed algorithm against the final visual quality of the segmented images. Figure 3 shows the corresponding segmentation results of both WPWT and PWT.

Obviously, emax has an important influence on the optimal threshold topt. However, for emax ∈ [0.2, 0.4], all the obtained segmentation results are quite acceptable in terms of the visual quality of the segmented images.

The final experiment shows the performance of our algorithm on noisy images. In this experiment, white noise is added to the original image and the segmentation results obtained by the WPWT method are shown in Fig. 4. From the obtained results, we can see that the proposed algorithm obtains almost the same results as on the original images, which verifies its robustness on noisy images.

CONCLUSION

We propose a novel image thresholding method based on weighted Parzen-window estimation in this study. In our method, the number of samples used in Parzen-window estimation is decreased by a hierarchical clustering procedure, which calculates a set of reference pixels and weights by merging nearby pixel pairs. During this process, the error produced by merging pixel pairs is controlled by emax. The thresholding algorithm based on weighted Parzen-window estimation (i.e., WPWT) is then presented. Unlike PWT in Wang et al. (2008), the proposed WPWT approximates the probability of each pixel using the reference pixels and weights. Compared with PWT, the experimental results show that our method accelerates image thresholding and greatly reduces the space requirements without noticeably degrading the final segmentation results.

ACKNOWLEDGMENTS

This study was supported by the National 973 Key Project (Grant No. 2006CB705700), the National Science Foundation of China, the National 863 Project (Grant No. 2007AA1Z158), the National Defense Basic Research Grant, the New Century Outstanding Young Scholar Grant of the Ministry of Education of China (Grant No. NCET-04-0496), the National KeySoft Lab at Nanjing University, the Key Lab of Computer Science at the Institute of Software, CAS, China, the Key Lab of Computer Information Technologies of JiangSu Province, the 2004 key project of the Ministry of Education of China and the National Key Lab of Pattern Recognition at the Institute of Automation, CAS, China.

REFERENCES
De-Albuquerque, M.P., I.A. Esquef and A.R. Gesualdi-Mello, 2004. Image thresholding using Tsallis entropy. Pattern Recog. Lett., 25: 1059-1065.

Sarkar, S. and P. Soundararajan, 2000. Supervised learning of large perceptual organization: Graph spectral partitioning and learning automata. IEEE Trans. Pattern Anal. Mach. Intel., 22: 504-525.

Sezgin, M. and B. Sankur, 2004. Survey over image thresholding techniques and quantitative performance evaluation. J. Electron. Imaging, 13: 146-165.

Torkkola, K., 2003. Feature extraction by non-parametric mutual information maximization. J. Mach. Learning Res., 3: 1415-1438.

Wang, S.T., F.L. Chung and F.S. Xiong, 2008. A novel image thresholding method based on Parzen window estimate. Pattern Recognit., 41: 117-129.

Yang, L. and X. Yang, 2004. Multi-object segmentation based on curve evolving and region division. Chin. J. Comput., 27: 420-425.

Zhang, Y.J., 2001. Image Segmentation. Press of Science, Beijing.
