ABSTRACT
A new defect detection algorithm based on Support Vector Data Description (SVDD) is proposed. A fabric texture model is built on the gray-level histogram of the textural fabric image. Two Gray-Level Co-occurrence Matrix (GLCM) features are used to characterize the fabric texture and an adaptive quantization scheme based on the texture model is proposed to reduce the size of the GLCM and the computational complexity of feature extraction. In addition, two new features are proposed to characterize the continuous property of fabric defects. The SVDD classifier is used as a detector for defect detection. Experimental results on real fabric defects are provided to validate the effectiveness and robustness of the proposed detection algorithm and a prototype detection system is built to evaluate its real-time performance.
DOI: 10.3923/itj.2012.673.685
URL: https://scialert.net/abstract/?doi=itj.2012.673.685
INTRODUCTION
Fabric defect detection is one of the most important procedures affecting manufacturing efficiency and quality in the fabric industry. So far, most factories accomplish this task by human vision, which makes the quality of fabric devoid of consistency and reliability. It is found that even a highly trained inspector can only detect about 70% of the defects at a speed of 15-20 m min-1 (Sari-Sarraf and Goddard, 1999). With the development of image processing technology, many methods based on texture analysis have been proposed for defect detection. Several statistical methods using texture features, such as fractal dimension (Conci and Proenca, 1998; Bu et al., 2009), morphological features (Mallick-Goswami and Datta, 2000; Chandra et al., 2010) and the co-occurrence matrix (Latif-Amet et al., 2000; Lin, 2010), were proposed to discriminate defects from normal texture. Cohen et al. (1991) used the Gauss Markov Random Field (GMRF) to model the texture pattern of the non-defective fabric image and the defect is detected by rejection of the model using hypothesis testing theory. Because of the high degree of periodicity of the fabric texture, spectral approaches are also used for fabric defect detection. Chan and Pang (2000) used the Fourier transform for this task, which turned out to be suitable only for global defect detection because of its poor local resolution in the frequency domain. Defect detection methods with multiresolution decomposition using a bank of Gabor filters were proposed by Bodnarova et al. (2002) and Kumar and Pang (2002). As Gabor filter banks lead to redundant features at different scales, the orthogonal wavelet transform (Nadhim, 2006) is also used for texture characterization (Loum et al., 2007), classification (Raju et al., 2008) and fabric defect detection (Yang et al., 2002; Mingde and Zhigang, 2011).
Defect detection can be considered a one-class classification problem. Ohanian and Dubes (1992) and Randen and Husoy (1999) provide evidence that, from the viewpoint of texture classification, GLCM outperforms other features such as Gabor filter features, fractal features and MRF features. However, GLCM is still hampered by its high computational complexity. In this study, an adaptive quantization method is proposed based on a texture model of a mixture of two Gaussian distributions. The quantization method reduces the 256 gray-levels to several levels, which greatly reduces the computational complexity of the GLCM features. The SVDD classifier is used as a detector for defect detection. Several machine learning based texture classification methods have been proposed using neural networks (Kumar, 2003; Jianli and Baoqi, 2007; Chandra et al., 2010; Nagarajan and Balasubramanie, 2008; Mahi and Izabatene, 2011), the Support Vector Machine (SVM) (Kumar and Shen, 2002; Chu et al., 2011) and SVDD (Bu et al., 2009, 2010). The neural network and support vector based methods consider defect detection as a two-class classification problem which requires both defective and non-defective samples for training. However, the requirement of large quantities of defective samples is usually not practical for online inspection in industrial settings. Similar to SVM, the SVDD classifier is a kernel function based classifier which does not suffer from the problem of local minima. However, it is a one-class classifier which requires only non-defective samples for training; thus it is adopted in our algorithm.
FABRIC TEXTURE MODEL
Figure 1a and b show two samples of non-defective texture of plain and twill fabrics, respectively. It can be seen that the plain and twill textures are made up of one and two gray tones, respectively, which is also indicated in their histograms in Fig. 1c and d by solid lines. The fabric texture model is built on the histogram of the gray-level image of the fabric texture. The histogram of the plain fabric texture tends to obey a Gaussian distribution and the histogram of the twill fabric texture tends to obey a mixture of two Gaussian distributions. So, the Probability Density Function (PDF) of the gray-level in the plain fabric image can be modeled as:
f(g) = 1/(√(2π)σ) · exp(−(g−μ)²/(2σ²)) | (1)
where μ and σ are the mean and standard deviation of the Gaussian distribution, respectively. The PDF of the gray-level in the twill fabric image can be modeled as:
f(g) = Σi=1..2 Pi/(√(2π)σi) · exp(−(g−μi)²/(2σi²)) | (2)
where, P1+P2 = 1 and μ1≤μ2. P1, μ1, σ1 and P2, μ2, σ2 are the proportions, means and standard deviations of two Gaussians in the mixture, respectively.
Fig. 1(a-d): Non-defective samples of (a) plain, (b) twill fabrics and (c, d) the corresponding histograms and fitting results
In order to build a consistent model for both plain and twill fabrics, only Eq. 2 is used and Eq. 1 is considered a particular case of Eq. 2 with P1 = 1, P2 = 0, μ1 = μ2 and σ1 = σ2. The parameters of the mixed Gaussian distribution in Eq. 2 can be estimated by curve fitting. The fitting results for both the plain and twill fabric image histograms are illustrated in Fig. 1c and d with dashed lines, respectively.
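The parameter estimation step can be sketched in code. The snippet below is a minimal illustration that fits the two-Gaussian model of Eq. 2 by expectation-maximization on synthetic gray levels; this stands in for the histogram curve fitting used in the paper, and the function name, initialisation and synthetic tone values (80 and 170) are our assumptions, not values from the study.

```python
import numpy as np

def fit_two_gaussian_mixture(gray, iters=200):
    """Estimate P1, mu1, sigma1 and P2, mu2, sigma2 of Eq. 2 by EM."""
    g = np.asarray(gray, dtype=float)
    # crude initialisation: lower/upper quartiles as the two tone centers
    mu = np.array([np.percentile(g, 25), np.percentile(g, 75)])
    sigma = np.array([g.std(), g.std()]) + 1e-6
    p = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibility of each Gaussian for each pixel
        dens = p / (np.sqrt(2 * np.pi) * sigma) * np.exp(
            -(g[:, None] - mu) ** 2 / (2 * sigma ** 2))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate proportions, means, standard deviations
        n = r.sum(axis=0)
        p = n / len(g)
        mu = (r * g[:, None]).sum(axis=0) / n
        sigma = np.sqrt((r * (g[:, None] - mu) ** 2).sum(axis=0) / n) + 1e-6
    order = np.argsort(mu)                     # enforce mu1 <= mu2
    return p[order], mu[order], sigma[order]

rng = np.random.default_rng(0)
# synthetic "twill" gray levels: two tones around 80 and 170
gray = np.concatenate([rng.normal(80, 10, 6000), rng.normal(170, 12, 4000)])
P, mu, sig = fit_two_gaussian_mixture(gray)
```

For well-separated tones, as in Fig. 1d, either EM or least-squares curve fitting on the histogram recovers essentially the same parameters.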
GRAY-LEVEL CO-OCCURRENCE MATRIX AND ADAPTIVE QUANTIZATION
Spatial gray-level co-occurrence estimates image properties related to second-order statistics. GLCM has become one of the most well-known texture features and is widely used for texture characterization (Shang et al., 2011; Sheng et al., 2010). A co-occurrence matrix is a square matrix whose elements correspond to the relative frequency of occurrence of pairs of gray-level values of pixels separated by a certain distance in a given direction. The GxG gray-level co-occurrence matrix Pd for a displacement vector d = (dx, dy) is defined as follows: the entry (i, j) of Pd is the number of occurrences of the pair of gray levels i and j which are a distance d apart. Formally, it is given as:
Pd(i, j) = |{((x1, y1), (x2, y2)) : (x2, y2) = (x1+dx, y1+dy), I(x1, y1) = i, I(x2, y2) = j}| | (3)
where I denotes an image of size UxV with G gray values, (x1, y1) and (x2, y2) are pixel positions in the image with (x2, y2) = (x1+dx, y1+dy) and |·| is the cardinality of a set. The co-occurrence matrix reveals certain properties about the spatial distribution of the gray levels in the texture image.
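As a concrete illustration of Eq. 3, the sketch below builds a co-occurrence matrix by direct counting on a hypothetical 3x3 toy image (production code would vectorize this loop):

```python
import numpy as np

def glcm(img, d, levels):
    """Co-occurrence matrix P_d of Eq. 3 for displacement d = (dx, dy)."""
    dx, dy = d
    U, V = img.shape
    P = np.zeros((levels, levels), dtype=np.int64)
    for x in range(U):
        for y in range(V):
            x2, y2 = x + dx, y + dy
            # count the pair only if the displaced pixel is inside the image
            if 0 <= x2 < U and 0 <= y2 < V:
                P[img[x, y], img[x2, y2]] += 1
    return P

img = np.array([[0, 0, 1],
                [1, 2, 2],
                [0, 1, 1]])
P = glcm(img, (0, 1), 3)   # horizontal neighbours, 3 gray levels
```

Here `P[i, j]` counts how often level j appears immediately to the right of level i; the four directions 0, 45, 90 and 135° used later correspond to d = (0, 1), (1, 1), (1, 0) and (1, -1).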
Generally, for a gray-level image whose pixels are represented by 8-bit integers (i.e., G = 256), the GLCM is a matrix of size 256x256. Extracting features from such a large matrix is computationally expensive. An intuitive way to reduce the size of the GLCM is to reduce the gray levels G of the original image by equal quantization (Latif-Amet et al., 2000). However, during equal quantization, if the gray level of the defect is close to that of the normal texture, it is likely quantized to the same level as the normal texture, which makes the defect hard to distinguish. Figure 2a and b illustrate a defective fabric image and its equal quantization result with 16 levels, respectively. The quantized image is histogram equalized for visual convenience. It can be seen that the defect in Fig. 2b becomes ambiguous and hard to detect.
In this study, an adaptive quantization method is proposed based on the fabric texture model elaborated earlier. As shown in Fig. 1, the gray-level histogram of the normal texture obeys a mixture distribution containing two Gaussians. Due to the random variation of texture intensity, which Gaussian a single pixel belongs to and the distance between its gray-level and the center of that Gaussian, termed the Gaussian Central Distance (GCD), are of more importance than the exact gray-level of the pixel. A pixel in the image is quantized by examining its GCD. Prior to the calculation of the GCD, we must decide which of the two Gaussians the pixel belongs to. We consider that if the gray-level g<μ1, then it belongs to the first Gaussian and if g>μ2 then it belongs to the second Gaussian. For gray-levels μ1<g<μ2, the following probability criterion is used to determine the Gaussian it belongs to:
P(ωi|g) = p(g|ωi)P(ωi) / [p(g|ω1)P(ω1) + p(g|ω2)P(ω2)], i = 1, 2, where p(g|ωi) is the i-th Gaussian density in Eq. 2 | (4)
where ωi, i = 1, 2, denotes the first and second Gaussian, respectively, and P(ωi) is the prior probability of each Gaussian, i.e., P(ω1) = P1 and P(ω2) = P2 in Eq. 2.
Fig. 2(a-c): Comparison of quantization methods
If P(ω1|g)>P(ω2|g), then g is considered to belong to the first Gaussian, otherwise it belongs to the second Gaussian. This classification method is known as Bayes classification, which minimizes the misclassification rate. Formally, the adaptive quantization method consists of the following steps:
Step 1: | Divide both Gaussians into several non-overlapping intervals separately, formulated as [0, μi−(0.5+N)λσi), [μi+(0.5+n)λσi, μi+(0.5+n+1)λσi) and [μi+(0.5+N)λσi, 255], where n = −(N+1), −N, …, N−1. λ is a constant determining the width of the interval and each Gaussian is divided into 2N+3 intervals
Step 2: | For each gray-level g from 0 to 255, determine which Gaussian it belongs to and which interval of that Gaussian it falls in, then vote for that interval. The intervals which are never voted for are discarded and the completely voted intervals are preserved. Here, a completely voted interval refers to an interval for which all the gray-levels within it vote only for itself
Step 3: | For the remaining intervals which are not completely voted: if any of them overlaps with another non-completely voted interval, merge them together to form a new interval. If the merged interval overlaps with any completely voted interval, truncate the overlapping areas of the merged interval to make sure that all intervals are non-overlapping. Finally, index all the intervals with zero-based numbers such that larger gray-levels correspond to larger indices. Each interval forms a new quantized level
Figure 3 gives an example of the adaptive quantization procedure. Figure 3a and b illustrate the division results of the two Gaussians of a mixture distribution with λ = 1, N = 2. Each Gaussian is divided into 7 intervals. The final quantization result is illustrated in Fig. 3c. The intervals [0, 54), [54, 64), [64, 74) and [74, 84) of the first Gaussian and [99, 125), [126, 151), [151, 176) and [176, 255] of the second Gaussian are completely voted intervals, so each of them directly forms a quantized level in Fig. 3c. The intervals [84, 94) of the first Gaussian and [74, 99) of the second Gaussian are non-completely voted intervals and they overlap with each other, so they are merged into [74, 99]. As [74, 84) is a completely voted interval, the merged interval is truncated into [84, 94), which forms a new quantized level in Fig. 3c. Finally, the 256 gray-levels are quantized to 9 levels.
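The steps above can be sketched as a lookup table construction. The following is a simplified sketch, not the paper's exact procedure: each gray level is labelled by its Gaussian (via the Bayes rule of Eq. 4) and its clipped interval index, and distinct labels, taken in increasing gray-level order, become the quantized levels; the explicit vote/merge/truncate bookkeeping of Steps 2-3 is left out, and the texture-model parameters used below are hypothetical.

```python
import numpy as np

def adaptive_quantize_lut(P, mu, sigma, lam=1.0, N=2):
    """Build a 256-entry lookup table: gray level -> quantized level."""
    labels = []
    for g in range(256):
        # which Gaussian does g belong to (Bayes rule of Eq. 4)
        if g < mu[0]:
            i = 0
        elif g > mu[1]:
            i = 1
        else:
            post = [P[k] / sigma[k] * np.exp(
                -(g - mu[k]) ** 2 / (2 * sigma[k] ** 2)) for k in (0, 1)]
            i = int(post[1] > post[0])
        # interval index n: boundaries at mu_i + (0.5 + n) * lam * sigma_i
        n = int(np.floor((g - mu[i]) / (lam * sigma[i]) - 0.5))
        n = max(-(N + 2), min(N, n))      # clip tails into the two end bins
        labels.append((i, n))
    level_of, lut = {}, np.empty(256, dtype=int)
    for g, lab in enumerate(labels):
        if lab not in level_of:
            level_of[lab] = len(level_of)  # new level, increasing index
        lut[g] = level_of[lab]
    return lut

# hypothetical texture model: two tones at 80 and 170 (cf. Fig. 1)
lut = adaptive_quantize_lut((0.6, 0.4), (80.0, 170.0), (10.0, 12.0))
```

Applying `lut` to every pixel quantizes the whole image in one vectorized indexing operation, with at most 2(2N+3) output levels.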
Fig. 3(a-c): Example of adaptive quantization procedure
A pixel is quantized according to its GCD, which is measured in units of the standard deviation of the Gaussian it belongs to rather than by its raw gray-level. Pixels belonging to the same Gaussian with similar GCDs, rather than similar gray-levels, tend to be quantized to the same level. In particular, quantized levels 0 and L-1 are two special levels, where L denotes the total number of quantized levels. Quantized level 0 lies outside the confidence interval of the first Gaussian, which means, in the sense of hypothesis testing, that the pixels within this quantized level do not belong to the normal texture. As the pixels in this level are much darker than the normal texture, they are called darkness exceptional pixels. Similarly, the pixels in quantized level L-1 are called lightness exceptional pixels. The advantage of the adaptive quantization method is that it preserves most of the useful information of the original image while reducing the 256 gray-levels to several quantized levels. Besides, as the quantization is based on the GCD under the texture model, a defective region does not obey the texture model and thus contains more lightness or darkness exceptional pixels, so the quantization method also emphasizes the presence of defects. Figure 2c shows the result of adaptive quantization of Fig. 2a with only 9 quantized levels, which is even better than Fig. 2b with 16 levels.
SUPPORT VECTOR DATA DESCRIPTION (SVDD)
Support vector data description is a powerful kernel method that has been commonly used for novelty detection (Tax and Duin, 2004). It provides a solution to the one-class classification problem without any negative samples in training. By mapping the data into a higher dimensional space, the objective of SVDD is to find, in this space, a spherically shaped boundary around the training dataset such that the sphere encloses as many samples as possible while having minimum volume. The sphere is characterized by its center c and radius R>0. Let {xi}, i = 1, 2, …, M, be a set of training examples, xi ∈ Rd, where d is the dimension of the input space and M is the number of training samples. The minimization of the sphere volume is achieved by minimizing its squared radius R2. To allow for the presence of outliers, slack variables ξi are introduced so that the problem of constructing an optimal separating hypersphere is converted into the following optimization problem:
min R² + (1/(vM)) Σi=1..M ξi s.t. ||Φ(xi) − c||² ≤ R² + ξi, ξi ≥ 0, i = 1, …, M | (5)
where, ξi accounts for possible errors, v is a user-provided parameter specifying an upper bound on the fraction of outliers and controls the trade-off between the hypersphere volume and the errors and Φ is a map function which maps input data into higher dimensional space. The corresponding dual problem is:
max Σi=1..M αi K(xi, xi) − Σi=1..M Σj=1..M αi αj K(xi, xj) s.t. 0 ≤ αi ≤ 1/(vM), Σi=1..M αi = 1 | (6)
where α = {α1, α2, …, αM} is the Lagrange multiplier vector, K(·,·) is a kernel function such that K(xi, xj) = Φ(xi)·Φ(xj) and the most widely used kernel function is the Radial Basis Function (RBF):
K(xi, xj) = exp(−||xi − xj||²/σ²) | (7)
The optimization problem in Eq. 6 can be solved using standard quadratic programming methods to obtain an optimal solution for α. All xi corresponding to non-zero αi are called Support Vectors (SV) and usually constitute a small fraction of the training data. The optimal solution for c is then:
c = Σi=1..M αi Φ(xi)
On testing, a new sample x is subjected to the map function Φ; if the distance from the mapped data to the center of the optimal hypersphere is smaller than or equal to the radius of the hypersphere, then it is accepted, otherwise it is rejected, which can be formulated as:
f(x) = sgn(R² − ||Φ(x) − c||²) = sgn(R² − K(x, x) + 2 Σi=1..M αi K(xi, x) − Σi=1..M Σj=1..M αi αj K(xi, xj)) | (8)
Because the optimal hypersphere is found by solving a convex quadratic programming problem, SVDD does not suffer from the problem of local minima, which means that SVDD training always finds a global minimum and thus has good generalization capability.
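To make the optimization concrete, here is a toy sketch that approximates the dual of Eq. 6 with a simple Frank-Wolfe iteration instead of the standard QP solver a real system would use; all function names, the synthetic data and the solver choice are our assumptions, not the paper's implementation.

```python
import numpy as np

def rbf(X, Y, s2):
    """RBF kernel of Eq. 7: K(x, y) = exp(-||x - y||^2 / s2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / s2)

def svdd_fit(X, v=0.05, s2=1.0, iters=500):
    """Maximise sum_i a_i K_ii - a'Ka subject to 0 <= a_i <= 1/(vM),
    sum_i a_i = 1 (Eq. 6), via Frank-Wolfe (toy stand-in for a QP)."""
    M = len(X)
    cap = 1.0 / (v * M)
    K = rbf(X, X, s2)
    a = np.full(M, 1.0 / M)
    for t in range(iters):
        grad = np.diag(K) - 2.0 * K @ a
        # linear subproblem over the capped simplex: put weight cap on
        # the coordinates with the largest gradient until mass 1 is used
        s, left = np.zeros(M), 1.0
        for i in np.argsort(-grad):
            s[i] = min(cap, left)
            left -= s[i]
            if left <= 1e-12:
                break
        a += 2.0 / (t + 2.0) * (s - a)
    return a

def svdd_dist2(x, X, a, s2):
    """||Phi(x) - c||^2 up to a constant (enough for comparisons in
    Eq. 8), using K(x, x) = 1 for the RBF kernel."""
    k = rbf(x[None, :], X, s2)[0]
    return 1.0 - 2.0 * (a * k).sum()

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, (60, 2))   # stand-in "non-defective" features
a = svdd_fit(X, v=0.1, s2=4.0)
```

A point near the training cloud maps closer to the hypersphere center than a distant one, which is exactly the comparison the decision rule of Eq. 8 performs against R².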
DEFECT DETECTION ALGORITHM
Detection of defects is considered a one-class classification problem. The defect detection algorithm is divided into two phases: a learning phase and a classification phase. In the learning phase the fabric image is quantized and a set of GLCM features, as well as some new features, are extracted from the quantized image to characterize the fabric texture. The extracted features are used as training data for SVDD training to generate an SVDD classifier. In the classification phase the same features are extracted from the quantized testing image and subjected to the SVDD classifier to determine whether it is defective or not.
Feature extraction: Several features extracted from the GLCM are used in the proposed detection algorithm. For the GLCM, the displacement parameter (dx, dy), by which two highly related pixels are separated, must be chosen well to characterize the fabric texture. A natural choice of displacement parameters is (0, 1), (1, 1), (1, 0) and (1, -1), also used for fabric texture characterization by Latif-Amet et al. (2000) and Lin (2010), since neighboring pixels are considered highly related. These four displacement parameters correspond to 0, 45, 90 and 135°, respectively, where most defects are present. Chan and Pang (2000) found from the frequency spectrum of the fabric texture image that texture periodicities exist in the warp (0°) and fill (90°) directions; they can be calculated as the reciprocals of the first harmonic frequencies f0 and f90 along the warp and fill directions, respectively. Because of the high periodicity of the fabric texture, two pixels separated by one texture period are also considered highly related. Therefore, two additional displacement parameters are used in our algorithm: (0, T0) and (T90, 0), where T0 and T90 denote the texture periodicities at 0 and 90°, respectively, that is T0 = 1/f0 and T90 = 1/f90. f0 and f90 can be obtained from the 1-D Fourier spectrum of the fabric texture at 0 and 90°, respectively. Figure 4 shows the Fourier spectrum of a real fabric texture at 0°; the fabric image is made zero-mean in advance to suppress the Direct Current (DC) component. Because T0 and T90 are usually floating-point values while displacement parameters must be integers for the computation of the GLCM, first-order linear interpolation is used to estimate the values of I(x+dx, y+dy) when dx, dy are not integers. Haralick et al. (1973) proposed 14 features from the GLCM for texture classification; in this study only two of them, namely contrast (CON) and inverse difference moment (IDM), are used:
CON = Σi=0..L−1 Σj=0..L−1 (i−j)² p(i, j) | (9)
IDM = Σi=0..L−1 Σj=0..L−1 p(i, j)/(1+(i−j)²) | (10)
where L is the column (or row) dimension of the GLCM (i.e., the total number of quantized levels) and p(i, j) is the normalized entry of the co-occurrence matrix, that is p(i, j) = Pd(i, j)/R, where Pd(i, j) is the GLCM with displacement parameter d and R is the total number of pixel pairs.
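Both features can be computed directly from a normalized GLCM; a brief sketch (the function name and the toy matrix are ours):

```python
import numpy as np

def con_idm(P):
    """Contrast (Eq. 9) and inverse difference moment (Eq. 10)
    from a (not necessarily normalised) GLCM P."""
    p = P / P.sum()                          # p(i, j) = P_d(i, j) / R
    i, j = np.indices(p.shape)
    con = ((i - j) ** 2 * p).sum()
    idm = (p / (1.0 + (i - j) ** 2)).sum()
    return con, idm

# hypothetical GLCM: mass concentrated near the diagonal (smooth texture)
con, idm = con_idm(np.array([[4.0, 1.0], [1.0, 4.0]]))
```

CON grows with off-diagonal mass (large gray-level jumps between neighbours), while IDM approaches 1 for a purely diagonal GLCM, so the two features respond in opposite directions to local texture disruption.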
In addition, we propose two extra features for each displacement parameter (dx, dy) based on the following considerations. As discussed earlier, a pixel in the original image is quantized by its GCD; the larger its GCD, the less likely the pixel belongs to the normal texture and the more its intensity tends to be much darker or lighter than the normal texture. It is found that defects, within their boundaries, tend to have more dark pixels (e.g., oil-stain, dirty-yarn, etc.) or light pixels (e.g., miss-pick, thin-place, etc.) than the normal texture. In turn, large quantities of dark or light pixels within a local region may indicate a defect in that region. Figure 5 shows a data fragment of the quantized image of Fig. 2a with L = 8. All the lightness exceptional pixels, whose quantized level is larger than a threshold UL corresponding to the upper limit of the 70% confidence interval of the second Gaussian, are marked by rectangles. There are mainly three reasons for the appearance of lightness exceptional pixels: variation of the normal texture, noise and real defects.
Fig. 4: Magnitude of frequency spectrum of fabric image along horizontal direction
Fig. 5: Data fragment of quantized image of Fig. 2a
Because of the random nature of normal texture variation and noise, the lightness and darkness exceptional pixels they cause tend to be scattered within the fabric image. However, real defects tend to be continuous and constitute a small portion of the field. Thus it is obvious that the connected lightness exceptional pixels in the center of Fig. 5 indicate a defect corresponding to the miss-pick in Fig. 2a, while the other scattered lightness exceptional pixels are caused by the variation of the normal texture or by noise. A feature is proposed to characterize this property of defects.
Given a displacement parameter (dx, dy), let Q be the quantized image and uk = x+k·dx, vk = y+k·dy. If for all k = 0, 1, …, C−1, Q(uk, vk) is larger than UL and for k = −1 and C, Q(uk, vk) is not larger than UL, then the points at positions (uk, vk), k = 0, 1, …, C−1, are considered a Lightness Exceptional Run (LER). C denotes the length of the LER. The feature Long Lightness Exceptional Run Emphasis (LLERE) is defined as:
![]() | (11) |
where Sl denotes the set containing all the LERs whose length is larger than l in the quantized image matrix. Cs and s(k) denote the length and the k-th element of the LER s, respectively. The feature LLERE is similar to the long run emphasis feature of the Gray Level Run Length Matrix (GLRLM) (Galloway, 1975) which has been widely used for texture characterization and classification. Both are built on statistics of consecutive pixels with the same attribute. However, in Eq. 11 the LERs whose length is smaller than or equal to l, which are probably caused by the variation of the normal texture or by noise, are not involved in the computation, so that LLERE can emphasize the presence of real defects. l is set to the 95th percentile of the order statistics obtained from non-defective image samples. Similarly, another feature called Long Darkness Exceptional Run Emphasis (LDERE), the counterpart of LLERE, is also used:
![]() | (12) |
where Zl denotes the set containing all the Darkness Exceptional Runs (DER) whose length is larger than l in the quantized image matrix. The definition of a DER is similar to that of an LER except that the values of its elements are smaller than a threshold DL which corresponds to the lower limit of the 70% confidence interval of the first Gaussian. Cz and z(k) denote the length and the k-th element of the DER z, respectively. Compared to the features extracted from the GLCM, the features LLERE and LDERE also characterize the relationship of pixels separated by a certain distance in a given direction but they put more emphasis on the continuousness of exceptional pixels along that direction, which makes them suitable for detecting tiny directional defects. In summary, 6 displacement parameters are used and 4 features are extracted for each displacement parameter, so in all 24 features are extracted to form a feature vector.
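The run extraction underlying both features can be sketched as below; since the exact normalization in Eqs. 11-12 is not reproduced here, the sketch only collects run lengths, from which an LLERE/LDERE-style statistic over runs longer than l can be accumulated. The toy image and function name are ours.

```python
import numpy as np

def exceptional_runs(Q, d, thr, light=True):
    """Lengths of lightness (Q > thr) or darkness (Q < thr)
    exceptional runs along displacement direction d = (dx, dy)."""
    mask = (Q > thr) if light else (Q < thr)
    dx, dy = d
    U, V = mask.shape
    runs = []
    for x in range(U):
        for y in range(V):
            # a run starts where the predecessor along -d is absent
            px, py = x - dx, y - dy
            if mask[x, y] and not (0 <= px < U and 0 <= py < V
                                   and mask[px, py]):
                c, u, v = 0, x, y
                while 0 <= u < U and 0 <= v < V and mask[u, v]:
                    c, u, v = c + 1, u + dx, v + dy
                runs.append(c)
    return runs

Q = np.array([[5, 5, 5, 0],
              [0, 5, 0, 0]])
runs = exceptional_runs(Q, (0, 1), 4)   # horizontal lightness runs
```

Scattered noise pixels produce many length-1 runs, while a directional defect produces a few long runs, which is exactly the separation the length threshold l exploits.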
Learning phase: Images of non-defective texture are used in the learning phase. All these images are divided into non-overlapping subregions. The feature vectors Vm, m = 1, 2, …, M are extracted from the subregions using the feature extraction method proposed earlier, where M denotes the total number of non-defective subregions. Then the feature vectors are normalized as:
NVm(r) = (Vm(r) − ηr)/βr | (13)
Where:
ηr = min over m of Vm(r) | (14)
βr = max over m of Vm(r) − min over m of Vm(r) | (15)
where r = 0, 1, 2, … is the feature index of the feature vector, NVm denotes the normalized feature vector and ηr and βr are called the offset coefficient and scale factor of the normalization, respectively. The objective of the normalization is to set the values of all the features in the feature vector within the interval [0, 1] and make sure that all the features have nearly the same weight. These feature vectors are then subjected to SVDD training. During the training, two parameters should be decided: the trade-off parameter v in Eq. 6 and the RBF width parameter σ in Eq. 7. A large v allows more outliers of the hypersphere in the training dataset, which corresponds to a larger rejection rate and a larger fraction of support vectors. Tax and Duin (2004) found that the fraction of support vectors relates to the false alarm rate, so the parameter v can be decided by the expected false alarm rate. To choose the optimal value of the parameter σ, cross validation is used in the SVDD training.
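Reading Eqs. 13-15 as min-max scaling fitted on the training set (an assumption consistent with the stated [0, 1] objective), the normalization can be sketched as:

```python
import numpy as np

def fit_normalizer(V):
    """eta_r (offset) and beta_r (scale) per feature from training
    vectors V of shape (M, num_features), cf. Eqs. 14-15."""
    eta = V.min(axis=0)
    beta = V.max(axis=0) - V.min(axis=0)
    return eta, beta

def normalize(v, eta, beta):
    """Eq. 13 (and Eq. 16 at test time, where the result may fall
    outside [0, 1] for defective samples)."""
    return (v - eta) / beta

# toy training feature vectors (values are illustrative only)
V = np.array([[0.0, 10.0],
              [5.0, 20.0],
              [2.5, 15.0]])
eta, beta = fit_normalizer(V)
NV = normalize(V, eta, beta)
```

Reusing the same eta and beta in the classification phase is what lets out-of-range feature values flag defective subregions.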
Classification phase: All the fabric images under inspection are divided into non-overlapping subregions of the same size as in the learning phase. From each subregion, a feature vector E is extracted using the feature extraction method proposed earlier. Then the feature vector E is normalized using the ηr and βr computed in the learning phase:
NE(r) = (E(r) − ηr)/βr | (16)
Different from the normalization in the learning stage, the values of the normalized features NE(r) can be less than zero or greater than one. The normalized feature vector NE is then subjected to the SVDD classifier and the final classification result is acquired by Eq. 8. If its output is 1, the subregion is considered non-defective; otherwise it is considered defective.
RESULTS AND DISCUSSION
Three datasets containing one plain and two twill fabrics of different texture backgrounds are used to evaluate the performance of the fabric defect detection algorithm. All three fabrics and the defects on them were produced in factory practice. All of the images are acquired by a line-scan CCD camera with a spatial resolution of 0.2 mm/pixel against backlighting illumination and digitized into 256x256 pixels with 256 gray levels. Detailed information on the three datasets is presented in Table 1. Each image is divided into non-overlapping subregions of size 64x64 pixels and each subregion is considered one sample for defect detection.
In the training phase, for each dataset, 1000 non-defective samples are used for SVDD training. The training parameter v is set to 0.01 and σ is decided by 10-fold cross validation. Figure 6a and b illustrate the cross validation accuracy and the proportion of support vectors for various values of σ2, where σ2 = 2-3, 2-2, …, 215 are selected as candidates, as suggested by Hsu et al. (2003). It can be seen from Fig. 6a that both too small and too large values of σ2 result in low cross validation accuracy. A large value of σ2 creates a simple decision boundary with a small proportion of support vectors, which is unable to separate the defective and non-defective samples and leads to a high miss rate, while a small value of σ2 creates an excessively complex decision boundary with a large proportion of support vectors, which leads to over-fitting and a high false alarm rate. σ2 = 27, 26 and 24, giving the highest cross validation accuracy, are selected for the three datasets, respectively; their corresponding support vector proportions are 1.3, 2.3 and 1.3%, respectively, which occupy only a very small portion of the total training vectors. It can also be seen from Fig. 6a that the values of cross validation accuracy near the optimal value of σ2 are nearly stationary, so the cross validation method for finding the optimal σ2 is robust.
Figure 7 illustrates the adaptive quantization and detection results of several typical defective samples of three datasets with quantization parameter λ = 1, N = 2. Dataset 1, 2 and 3 are quantized to 8, 9 and 10 levels, respectively.
Table 1: Information of experimental datasets
We can see that after the quantization all the defects are still clear and intact while the number of gray-levels is decreased from 256 to 8, 9 and 10, respectively, which greatly reduces the computational load of the GLCM and its features.
Quantization parameter selection: As mentioned earlier, there are two important parameters affecting the quantization procedure. One is the interval width λ, the other is N, which relates to the total number of quantized levels. The interval [μi−(0.5+N)λσi, μi+(0.5+N)λσi] constitutes a confidence interval of the i-th Gaussian. Gray-levels outside this interval are considered either darkness exceptional or lightness exceptional. A large value of N brings more computational load, while a small value of N may lose some detailed information of the defects and make them hard to detect. In order to find the optimal parameters, several pairs of (λ, N) are applied to the three datasets. The detection results of the three datasets for different parameter pairs (λ, N) are presented in Tables 2-4, respectively. λ is set inversely proportional to N so that the confidence intervals of all parameter pairs are nearly the same. As the false alarm rate relates to the trade-off parameter v in Eq. 6, it does not vary greatly for different parameter pairs. For N = 0 to 2, the miss detection rate decreases sharply, which means the detection performance gets much better, while for N = 3 to 6 the miss detection rate shows no great improvement and varies inversely with the false alarm rate, which means the detection performance is not greatly improved while more computational load is added. As N increases, more detailed information of the fabric texture is preserved. For the defects miss-yarn of Dataset 1 and dirty-yarn of Dataset 2, only a small value of N is sufficient to detect most of them, because they contain many dark or light exceptional pixels whose gray-levels are outside the confidence interval.
Fig. 6(a-b): The cross validation accuracy and the proportion of support vectors with various values of σ2
Fig. 7(a-l): (a, b) Miss-pick and miss-yarn of Dataset 1, (c, d) dirty-yarn and hole of Dataset 2, (e, f) miss-pick and thin-place of Dataset 3 and (g-l) the respective corresponding quantized images and detection results
Table 2: Detection results of Dataset 1 with different parameter pairs (λ, N); values in brackets are percentages
Table 3: Detection results of Dataset 2 with different parameter pairs (λ, N); values in brackets are percentages
Generally, N = 2 and 3 are optimal parameters which can be used to detect most of the defects.
Characteristic of features: The proposed algorithm uses all 24 features described above to detect different kinds of defects. In order to understand the specific characteristic of each feature, we also investigate the discriminative features for each kind of defect.
Table 4: Detection results of Dataset 3 with different parameter pairs (λ, N); values in brackets are percentages
Table 5: Discriminative features of each kind of defect in Fig. 7
Here, the discriminative features refer to the normalized features, among all 24, which have large distances between the defective samples and non-defective samples. A criterion function is used to characterize this distance, formulated as:
J = (Ud − Un)/σn | (17)
where Un and σn are the mean value and standard deviation of the normalized feature extracted from the non-defective subregions in the learning phase (Eq. 13) and Ud is the mean value of the normalized feature extracted from defective subregions in the classification phase (Eq. 16). A higher magnitude of J indicates a larger distance between the feature of the defect and the normal texture, which means better detection performance. Table 5 shows the discriminative features and their values of J for each kind of defect in Fig. 7; only the displacement parameters and features with large J are presented, others are omitted. Generally, the values of J in Table 5 are consistent with the detection rates in Tables 2-4, that is, a larger value of J corresponds to a higher detection rate. It also suggests that the features LLERE and LDERE are more effective than CON and IDM for detecting tiny directional defects such as miss-pick of Dataset 1, dirty-yarn of Dataset 2 and thin-place of Dataset 3, because CON and IDM characterize the global texture pattern and are not very sensitive to the local textural change caused by tiny defects. However, LLERE and LDERE emphasize the continuousness of lightness and darkness exceptional pixels, respectively, so they are suitable for characterizing tiny directional defects. All of the features LLERE, LDERE, CON and IDM with displacement parameter (0, 1) have large values of J for the defect miss-yarn, because the texture pattern of miss-yarn is quite different from the normal texture and contains many dark and light exceptional pixels in the horizontal orientation. Particularly, for the defect thin-place in Fig. 7e, which is not even obvious to a human inspector and thus very difficult to detect, the defective pixels (lightness exceptional pixels) are arranged periodically in the horizontal direction, so the feature LLERE with displacement parameter (0, T0) can characterize it.
Real-time performance: In order to evaluate the real-time performance of the detection algorithm, a prototype defect detection system was built in our laboratory. The architecture of the detection system is illustrated in Fig. 8. A line-scan camera (Dalsa SP-14) with a resolution of 2048 pixels is used to capture the image of fabrics moving on a conveyor belt. The localization resolution is 0.2 mm/pixel, so each camera covers about 0.4 m in the transverse direction and four cameras are used to cover 1.6 m. An encoder synchronizes the scan rate of the cameras with the movement speed of the fabrics. The image data of the cameras are transferred to the image acquisition and processing card via a Camera Link interface. The processing unit is a Digital Signal Processor (DSP), a TI TMS320C6713 operating at 300 MHz, on which the proposed detection algorithm is implemented.
Fig. 8: The architecture of the prototype defect detection system
Fig. 9: Real-time performance of the detection algorithm
All the detection results are uploaded to a host computer for display via PCI bus.
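The camera coverage and the encoder-driven scan rate follow directly from the stated figures (0.2 mm/pixel, 2048-pixel sensor, 40 m min-1 fabric speed). A back-of-envelope check:

```python
# Geometry and timing of the line-scan setup, using the figures from
# the text; no camera-specific limits are modeled here.
PIXEL_MM = 0.2        # resolution, mm per pixel
SENSOR_PIXELS = 2048  # pixels per scan line
SPEED_M_MIN = 40.0    # fabric speed, m per minute

coverage_m = SENSOR_PIXELS * PIXEL_MM / 1000.0   # per-camera width
speed_mm_s = SPEED_M_MIN * 1000.0 / 60.0
line_rate_hz = speed_mm_s / PIXEL_MM             # scan rate the encoder must drive

print(f"per-camera coverage: {coverage_m:.2f} m")     # ~0.41 m
print(f"required scan rate:  {line_rate_hz:.0f} lines/s")
```

At 40 m min-1 each camera must deliver roughly 3300 lines per second, so each 64-line band of a camera's output must be processed within about 19 ms.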
Compared with the generic CPU in the host computer, the DSP has better real-time performance, benefiting from its specialized architecture, such as the hardware multiplier and instruction pipeline. Here, we only focus on the computational time of the classification phase; the time-consuming procedures in the learning phase, such as parameter estimation of the texture model, SVDD training and cross validation, are not taken into consideration, because the learning phase is finished before real-time inspection and is thus usually not time-constrained. The classification phase consists of two parts: feature extraction and decision. The computational time of feature extraction relates to the total number of quantized levels L. Feature extraction times for a subregion of size 64x64 with different values of L are presented in Fig. 9. For comparison, the computational times on both the DSP and a generic CPU (Pentium 4, 3.0 GHz) are presented. The computational time of non-quantized feature extraction with 256 gray-levels is 45844 μs on the DSP and 29635 μs on the generic CPU. We find that the adaptive quantization greatly reduces the computational complexity of feature extraction and improves the real-time performance of the algorithm. The decision involves classification by the SVDD classifier, which is implemented with the software LIBSVM (Chang and Lin, 2001). According to Eq. 8, only the support vectors, which usually constitute a small fraction of the training data (Fig. 6b), are involved in the computation of the classification. Moreover, LIBSVM uses a look-up table cache for the computation of the kernel function in Eq. 7, so the decision time is greatly reduced and only occupies a small portion of the detection algorithm. It can be seen from Fig. 9 that as L increases, the computational time increases nonlinearly and, according to Tables 2-4, L increases with N.
However, when N is greater than 3, the detection rate does not increase noticeably, so N = 2 or 3 is a good compromise between the detection rate and the real-time performance. The detection speed reaches 40 m min-1.
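The decision step's low cost comes from the structure of the SVDD test: only the support vectors enter the kernel sums, and all sample-independent terms can be folded into a single precomputed threshold. Since Eqs. 7-8 are not reproduced in this section, the sketch below uses the standard SVDD decision with a Gaussian kernel (Tax and Duin, 2004); the support vectors, coefficients and threshold are hypothetical toy values, not trained ones.

```python
import numpy as np

def rbf(x, y, gamma):
    """Gaussian kernel, the usual choice for SVDD (cf. Eq. 7)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def svdd_decide(z, sv, alpha, gamma, threshold):
    """Standard SVDD test (cf. Eq. 8): z is accepted as normal texture if
    its kernel distance to the sphere centre is within the learned radius.
    sv holds the support vectors only; alpha their coefficients. The
    constant sv-sv double sum and R^2 are absorbed into `threshold`,
    which is computed once offline.
    """
    k = np.array([rbf(z, s, gamma) for s in sv])
    dist2 = rbf(z, z, gamma) - 2.0 * alpha @ k  # + const, folded into threshold
    return dist2 <= threshold

# Toy model: two support vectors near the origin represent normal texture.
sv = np.array([[0.0, 0.0], [1.0, 0.0]])
alpha = np.array([0.5, 0.5])
inside = svdd_decide(np.array([0.5, 0.1]), sv, alpha, gamma=1.0, threshold=0.5)
outlier = svdd_decide(np.array([5.0, 5.0]), sv, alpha, gamma=1.0, threshold=0.5)
print(inside, outlier)
```

With the kernel distance reduced to one sum over the support vectors, the per-subregion decision cost scales with the number of support vectors, not the training set size, which is why the decision contributes only a small portion of the total detection time.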
CONCLUSIONS
A new textural fabric defect detection algorithm using SVDD has been demonstrated. A fabric texture model of a mixed Gaussian distribution was built on the gray-level histogram of the fabric image. Two GLCM features and two novel features were used to characterize the fabric texture pattern and emphasize the presence of defects. An adaptive quantization method based on the texture model was proposed to reduce the size of the GLCM, so that the computational complexity of feature extraction was greatly reduced. The specific property of each feature was also discussed; users can remove unnecessary ones to further improve the real-time performance. The SVDD classifier was used as a detector and achieved good detection results on three datasets. A prototype defect detection system, with a high-performance DSP as its processing unit, was built to evaluate the real-time performance of the proposed algorithm. The experiment indicated that the detection speed can reach 40 m min-1, so the proposed algorithm is suitable for on-line inspection in industrial settings.
ACKNOWLEDGMENT
This study was supported by the Open Fund of the Image Processing and Intelligent Control Key Laboratory of the Education Ministry of China.
REFERENCES
- Conci, A. and C.B. Proenca, 1998. A fractal image analysis system for fabric inspection based on box-counting method. Comput. Netw. ISDN Syst., 30: 1887-1895.
- Bu, H.G., J. Wang and X.B. Huang, 2009. Fabric defects detection based on multiple fractal features and support vector data description. Eng. Applied Artif. Intell., 22: 224-235.
- Chandra, J.K., P.K. Banerjee and A.K. Datta, 2010. Neural network trained morphological processing for the detection of defects in woven fabric. J. Text. Inst., 101: 699-706.
- Mallick-Goswami, B. and A.K. Datta, 2000. Detecting defects in fabric with laser-based morphological image processing. Text. Res. J., 70: 758-762.
- Latif-Amet, L., A. Ertuzun and A. Ercil, 2000. An efficient method for texture defect detection: Subband domain co-occurrence matrices. Image Vision Comput., 18: 543-553.
- Lin, J.J., 2010. Pattern recognition of fabric defects using case-based reasoning. Text. Res. J., 80: 794-802.
- Cohen, F.S., Z. Fan and S. Attali, 1991. Automated inspection of textile fabrics using textural models. IEEE Trans. Pattern Anal. Mach. Intell., 13: 803-808.
- Chan, C.H. and G.K.H. Pang, 2000. Fabric defect detection by Fourier analysis. IEEE Trans. Ind. Appl., 36: 1267-1276.
- Bodnarova, A., M. Bennamoun and S. Latham, 2002. Optimal Gabor filters for textile flaw detection. Pattern Recog., 35: 2973-2991.
- Kumar, A. and G.K.H. Pang, 2002. Defect detection in textured materials using Gabor filters. IEEE Trans. Ind. Appl., 38: 425-440.
- Nadhim, A.M.M., 2006. Introduction to multitone multiresolution wavelet analysis. Inform. Technol. J., 5: 494-502.
- Mingde, B. and S. Zhigang, 2011. Fabric defect detection using undecimated wavelet transform. Inform. Technol. J., 10: 1701-1708.
- Yang, X.Z., G.K.H. Pang and N.H.C. Yung, 2002. Discriminative fabric defect detection using adaptive wavelets. Opt. Eng., 41: 3116-3126.
- Ohanian, P.P. and R.C. Dubes, 1992. Performance evaluation for four classes of textural features. Pattern Recog., 25: 819-833.
- Randen, T. and J.H. Husoy, 1999. Filtering for texture classification: A comparative study. IEEE Trans. Pattern Anal. Mach. Intell., 21: 291-310.
- Jianli, L. and Z. Baoqi, 2007. Identification of fabric defects based on discrete wavelet transform and back-propagation neural network. J. Text. Inst., 98: 355-362.
- Kumar, A., 2003. Neural network based detection of local textile defects. Pattern Recognit., 36: 1645-1659.
- Kumar, A. and H.C. Shen, 2002. Texture inspection for defects using neural networks and support vector machines. Proc. ICIP, 3: 353-356.
- Nagarajan, B. and P. Balasubramanie, 2008. Neural classifier for object classification with cluttered background using spectral texture based features. J. Artif. Intell., 1: 61-69.
- Mahi, H. and H.F. Izabatene, 2011. Segmentation of satellite imagery using RBF neural network and genetic algorithm. Asian J. Applied Sci., 4: 186-194.
- Bu, H.G., X.B. Huang, J. Wang and X. Chen, 2010. Detection of fabric defects by auto-regressive spectral analysis and support vector data description. Text. Res. J., 80: 579-589.
- Tax, D.M.J. and R.P.W. Duin, 2004. Support vector data description. Mach. Learn., 54: 45-66.
- Shang, S., X. Kong and X. You, 2011. Copy paper brand source identification using commodity scanners. Inform. Technol. J., 10: 2112-2118.
- Sheng, X., P. Qi-Cong and H. Shan, 2010. An invariant descriptor design method based on MSER. Inform. Technol. J., 9: 1345-1352.
- Haralick, R.M., K. Shanmugam and I.H. Dinstein, 1973. Textural features for image classification. IEEE Trans. Syst. Man Cybern., SMC-3: 610-621.
- Galloway, M.M., 1975. Texture analysis using gray level run lengths. Comput. Graph. Image Process., 4: 172-179.
- Loum, G., C.T. Haba, J. Lemoine and P. Provent, 2007. Texture characterisation and classification using full wavelet decomposition. J. Applied Sci., 7: 1566-1573.
- Raju, U.S.N., B.E. Reddy, V.V. Kumar and B. Sujatha, 2008. Texture classification based on extraction of skeleton primitives using wavelets. Inform. Technol. J., 7: 883-889.
- Chu, H., Z. Xie, X. Xu, L. Zhou and Q. Liu, 2011. Inspection and recognition of generalized surface defect for precise optical elements. Inform. Technol. J., 10: 1395-1401.
- Sari-Sarraf, H. and J.S. Goddard Jr., 1999. Vision system for on-loom fabric inspection. IEEE Trans. Ind. Appl., 35: 1252-1259.