Research Article
 

Classification of a Class of Agricultural Images Using Multi Guided Multicolor Coherence Feature



P. Balamurugan and R. Rajesh
 
ABSTRACT

Classifying particular groups of agricultural images into semantically meaningful categories is a challenging task. Recently, the color coherence vector has become popular for image mining. This study makes use of a multicolor coherence feature with multiple guide images (MGMCF) for the classification of agricultural images such as coconut and palm trees. The classification results using a neural network are promising. Hence, image mining/image retrieval tasks can be performed at good precision/recall using MGMCF features.


 
  How to cite this article:

P. Balamurugan and R. Rajesh, 2012. Classification of a Class of Agricultural Images Using Multi Guided Multicolor Coherence Feature. Journal of Applied Sciences, 12: 1925-1931.

DOI: 10.3923/jas.2012.1925.1931

URL: https://scialert.net/abstract/?doi=jas.2012.1925.1931
 
Received: March 06, 2012; Accepted: July 30, 2012; Published: September 08, 2012



INTRODUCTION

With the rapid development of the Internet and the World Wide Web, the amount of digital image data accessible to users has grown enormously. The size of image repositories is growing exponentially, and they are accessed by a huge number of applications and users. There is therefore a growing need for image retrieval systems that perform well in terms of both retrieval time and accuracy (Flickner et al., 1995; Huang et al., 1997; Ogle and Stonebraker, 1995; Pass et al., 1996; Smith and Chang, 1996; Pentland et al., 1996; Gudivada and Raghavan, 1995).

The global color distribution in an image can be obtained using color histograms, which are a popular means of retrieving an image from a large database (Swain and Ballard, 1991; Ogle and Stonebraker, 1995). Color histograms are insensitive to small changes in camera position but liable to false positives, since they do not include any spatial information. Several schemes that use spatial information about colors to improve upon the histogram method have been proposed. Smith and Chang (1996) partition an image into binary color sets using histogram back-projection (Swain and Ballard, 1991); the binary color sets, along with their location information, constitute the feature. Stricker et al. (1996) divide an image into five fixed overlapping regions and extract the first three color moments of each region to form a feature vector for the image.

Huang et al. (1997) propose another color feature for image indexing/retrieval, called the color correlogram, which takes into account the local spatial correlation of colors as well as the global distribution of this spatial correlation. A revised version of the color correlogram, called the autocorrelogram (Huang et al., 1997), considers spatial correlation between identical colors only and requires less space than the correlogram. Pass and Zabih (1996) partition histogram bins by the spatial coherence of pixels, a feature called the Color Coherence Vector (CCV).

In general, natural images are complex, and classifying them into particular groups is a challenging task in image processing (Szummer and Picard, 1998); good features are therefore needed for image retrieval systems. Moreover, classification of agricultural images is the first step in any automatic disease identification system. Hence, this paper introduces the Multi Guided Multicolor Coherence Feature (MGMCF) for a class of agricultural images, namely coconut and palm trees.

COLOR COHERENCE VECTOR

Color coherence (Pass et al., 1996) measures the number of connected pixels of similar color. If the size of a similarly connected region is greater than or equal to a threshold (τ), the region is called coherent; otherwise it is incoherent. These coherence features are of significant importance in classifying natural images.

Computing a CCV is simple. First, blur the image slightly to eliminate small variations between neighboring pixels. Then quantize the color space into n (bucket size) distinct colors. Finally, find the similarly connected regions and classify their pixels as coherent or incoherent to form the CCV:

CCV = ⟨(α1, β1), (α2, β2), …, (αn, βn)⟩

where αj and βj are the numbers of coherent and incoherent pixels of the jth discretized color, respectively.
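The blur-quantize-label procedure above can be sketched in Python. This is a minimal illustration of the CCV idea, assuming NumPy and SciPy are available; the function name and the cube-root per-channel quantization are my own choices, not taken from the paper:

```python
import numpy as np
from scipy import ndimage

def color_coherence_vector(image, n_colors=64, tau=50):
    """Compute a Color Coherence Vector in the style of Pass et al.

    image: HxWx3 uint8 RGB array. Returns a list of (alpha_j, beta_j)
    pairs, one per discretized color, where alpha_j counts coherent
    pixels and beta_j incoherent ones.
    """
    # 1. Blur lightly to suppress small variations between neighbors.
    blurred = ndimage.uniform_filter(image.astype(float), size=(3, 3, 1))

    # 2. Quantize the color space into roughly n_colors buckets
    #    (here: equal levels per channel, an illustrative choice).
    levels = int(round(n_colors ** (1 / 3)))
    q = np.minimum((blurred / 256.0 * levels).astype(int), levels - 1)
    color_index = q[..., 0] * levels * levels + q[..., 1] * levels + q[..., 2]

    # 3. For each color, label 8-connected regions and split pixel counts
    #    into coherent (region size >= tau) and incoherent (< tau).
    ccv = []
    structure = np.ones((3, 3), dtype=int)   # 8-connectivity
    for c in range(levels ** 3):
        labels, n_regions = ndimage.label(color_index == c,
                                          structure=structure)
        alpha = beta = 0
        for r in range(1, n_regions + 1):
            size = int(np.sum(labels == r))
            if size >= tau:
                alpha += size
            else:
                beta += size
        ccv.append((alpha, beta))
    return ccv
```

For a 100×100 image that is entirely one color, the single 10,000-pixel region exceeds any reasonable τ, so all pixels land in one coherent bucket.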

Color image retrieval based on two indexes, namely ΔH and ΔG, has been proposed by Pass et al. (1996), with the constraint that:

ΔH≤ΔG

Where:

ΔH = Σj |(α1j + β1j) − (α2j + β2j)| (1)

ΔG = Σj (|α1j − α2j| + |β1j − β2j|) (2)

where, (α1j, β1j) and (α2j, β2j) correspond to coherent and incoherent pixels of the first image (I1) and second image (I2), respectively.
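With the CCV pairs defined this way, Eq. 1 and 2 are straightforward to compute; a minimal sketch (the function names are mine):

```python
def delta_h(ccv1, ccv2):
    # Eq. 1: compare total pixel counts (alpha + beta) per color,
    # i.e., the ordinary L1 histogram distance.
    return sum(abs((a1 + b1) - (a2 + b2))
               for (a1, b1), (a2, b2) in zip(ccv1, ccv2))

def delta_g(ccv1, ccv2):
    # Eq. 2: compare coherent and incoherent counts separately.
    # By the triangle inequality this is always >= delta_h.
    return sum(abs(a1 - a2) + abs(b1 - b2)
               for (a1, b1), (a2, b2) in zip(ccv1, ccv2))
```

Two images with identical histograms but differently distributed coherent regions give ΔH = 0 yet a large ΔG, which is what makes the CCV more discriminative than a plain histogram.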

Later, Balamurugan and Rajesh (2007, 2008a, b) proposed ΔI for greenery and non-greenery image classification, with the constraint that:

ΔI≤ΔH≤ΔG

Where:

(3)

GUIDED MULTI-COLOR COHERENCE FEATURES

The multi guided multicolor coherence features correspond to the color coherence vectors of images represented in multiple color models, computed with reference to multiple guide images, for the classification of images. For example, ΔI1 is derived from Eq. 3 using guide image g1:

(4)

where (αg1,j, βg1,j) and (αk,j, βk,j) correspond to the coherent and incoherent pixels of the guide image (g1) and the kth image (Ik) in the database, respectively. Similarly, ΔH1 and ΔG1 can be calculated for guide image g1. The color coherence indexes (ΔI, ΔH and ΔG) are thus calculated for three guide images g1, g2 and g3, forming the multi-guided color features (MGCF) (ΔI1, ΔH1, ΔG1), (ΔI2, ΔH2, ΔG2) and (ΔI3, ΔH3, ΔG3), respectively. These nine MGCF values are calculated for the different channels of the RGB, HSI and indexed color spaces, forming the Multi-guided Multicolor Coherence Feature (MGMCF). The MGMCFs are computed for two threshold values (τ = 50, 100) with two bucket sizes (b = 26, 64) for the RGB and HSI color spaces, and with four bucket sizes (b = 26, 64, 128, 256) for the indexed color space, giving (9×6×2×2) + (9×1×4×2) = 288 features.
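The feature-count bookkeeping above can be checked programmatically; the enumeration below sketches one possible layout of the 288-dimensional vector (labels and ordering are illustrative, not the paper's):

```python
from itertools import product

INDEXES = ["dI", "dH", "dG"]   # the three coherence indexes
GUIDES = ["g1", "g2", "g3"]    # the three guide images

def mgmcf_layout():
    """Enumerate one possible ordering of the 288 MGMCF components."""
    features = []
    # RGB and HSI: 3 channels each (6 color planes), two thresholds
    # and two bucket sizes per plane -> 9 * 6 * 2 * 2 = 216 features.
    for space, channel in product(["RGB", "HSI"], range(3)):
        for tau, b in product([50, 100], [26, 64]):
            for idx, g in product(INDEXES, GUIDES):
                features.append((space, channel, tau, b, idx, g))
    # Indexed color: one plane, two thresholds, four bucket sizes
    # -> 9 * 1 * 4 * 2 = 72 features.
    for tau, b in product([50, 100], [26, 64, 128, 256]):
        for idx, g in product(INDEXES, GUIDES):
            features.append(("Indexed", 0, tau, b, idx, g))
    return features
```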

EXPERIMENTS AND RESULTS

The data set consists of 900 images: 300 greenery, 300 coconut and 300 palm images of 100×100 pixels. 50% of the images are used for training. The experiment is repeated with the training and testing sets interchanged.
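This split-and-swap protocol amounts to two-fold cross-validation; a minimal sketch assuming NumPy, with the classifier routine left as a placeholder supplied by the caller:

```python
import numpy as np

def two_fold_accuracy(features, labels, train_and_score, seed=0):
    """Split 50/50, train on one half, test on the other, then swap.

    `train_and_score(X_tr, y_tr, X_te, y_te)` is any classifier routine
    returning test accuracy; the mean of the two runs is reported.
    """
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(labels))
    half = len(labels) // 2
    a, b = order[:half], order[half:]
    acc1 = train_and_score(features[a], labels[a], features[b], labels[b])
    acc2 = train_and_score(features[b], labels[b], features[a], labels[a])
    return (acc1 + acc2) / 2.0
```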

Mining of coconut images: This section focuses on mining coconut tree images using an Adaptive Neuro-Fuzzy Inference System (ANFIS) (Gonzalez et al., 2004; Jahne, 2002; Jang, 1993). Four sets of features, namely (ΔH, ΔG), (ΔH, ΔI), (ΔG, ΔI) and (ΔH, ΔG, ΔI), in various color spaces and with various bucket sizes, threshold values and guide images, are given as input to the classification system to assess mining performance. ANFIS uses a hybrid learning algorithm to identify the membership function parameters of single-output, Sugeno-type Fuzzy Inference Systems (FIS). A combination of least-squares and backpropagation gradient descent methods is used to train the FIS membership function parameters to model a given set of input/output data. Each input variable is mapped to two membership functions.

Experiment 1: Understanding the classification performance due to the combination of features in RGB color space.

Classification results for greenery and coconut images using the guided color coherence vector in the RGB color space with one, the average of two, and the average of three guide images are shown in Table 1. The highest classification rate, 89%, is obtained for the combination (ΔH, ΔI) with bucket size (n) = 25, threshold (τ) = 50 and three guide images.

Experiment 2: Understanding the classification performance due to the combination of features in the HSI color space. Classification results for greenery and coconut images using the guided color coherence vector in the HSI color space with one, the average of two, and the average of three guide images are shown in Table 2. The highest classification rate, 93%, is obtained for the combination (ΔH, ΔG, ΔI) with bucket size (n) = 63, threshold (τ) = 100 and three guide images.

Experiment 3: Understanding the classification performance due to the combination of features in the indexed color space. Classification results for greenery and coconut images using the guided color coherence vector in the indexed color space with one, the average of two, and the average of three guide images are shown in Table 3. The highest classification rate, 93%, is obtained for the combination (ΔH, ΔG) with bucket size (n) = 256, threshold (τ) = 100 and two guide images.

CLASSIFICATION OF COCONUT AND PALM IMAGES

Experiment 1: Understanding the classification performance due to the combination of features in the RGB color space. Classification results for coconut and palm images using the guided color coherence vector in the RGB color space with one, the average of two, and the average of three guide images are shown in Table 4. The highest classification rate, 88%, is obtained for the combination (ΔH, ΔI) with bucket size (n) = 63, threshold (τ) = 50 and two guide images.

Experiment 2: Understanding the classification performance due to the combination of features in the HSI color space. Classification results for coconut and palm images using the guided color coherence vector in the HSI color space with one, the average of two, and the average of three guide images are shown in Table 5. The highest classification rate, 90%, is obtained for the combination (ΔH, ΔG, ΔI) with bucket size (n) = 63, threshold (τ) = 100 and three guide images.

Experiment 3: Understanding the classification performance due to the combination of features in Indexed color space.

Table 1: Classification results of greenery and coconut images using color coherence vector in RGB model with three guide images (GI)

Table 2: Classification results of greenery and coconut images using color coherence vector in HSI model with three guide images (GI)

Table 3: Classification results of greenery and coconut images using color coherence vector in indexed model with three guide images (GI)

Table 4: Classification results of coconut and palm images using color coherence vector in RGB model with three guide images (GI)

Classification results for coconut and palm images using the guided color coherence vector in the indexed color space with one, the average of two, and the average of three guide images are shown in Table 6. The highest classification rate, 87%, is obtained for the combination (ΔH, ΔG) with bucket size (n) = 25, threshold (τ) = 100 and three guide images.

NEURAL NETWORK BASED SYSTEM FOR CLASSIFICATION OF IMAGES

This section describes mining tree images using a neural network (Park et al., 2004; Gonzalez et al., 2004; Jahne, 2002). In total, 288 guided color coherence features over the different color spaces, bucket sizes, threshold values and guide images are given as inputs. A network with 20 hidden neurons is trained with the scaled conjugate gradient algorithm for 1000 epochs.
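As a rough illustration of this classifier stage, the sketch below trains a single-hidden-layer network with 20 neurons on binary labels. Note that plain batch gradient descent is substituted for the scaled conjugate gradient algorithm used in the paper, so this is a simplified stand-in rather than the authors' setup:

```python
import numpy as np

def train_mlp(X, y, hidden=20, epochs=1000, lr=0.1, seed=0):
    """Single-hidden-layer network: tanh hidden units, sigmoid output,
    binary cross-entropy loss, plain batch gradient descent."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    y = y.reshape(-1, 1).astype(float)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
        d2 = (p - y) / len(X)                      # cross-entropy gradient
        d1 = (d2 @ W2.T) * (1 - h ** 2)            # backprop through tanh
        W2 -= lr * h.T @ d2
        b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1
        b1 -= lr * d1.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    return (p > 0.5).astype(int).ravel()
```

In this setting, the 288-dimensional MGMCF vectors would form the rows of X, with binary labels distinguishing the two classes.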

Experiment 1: Classification of greenery and coconut images: A success rate of 98.3% is obtained for the classification of greenery and coconut images, as shown in Table 7. The confusion matrix is given in Table 8.

Table 5: Classification results of coconut and palm images using color coherence vector in HSI model with three guide images (GI)

Table 6: Classification results of coconut and palm images using color coherence vector in indexed model with three guide images (GI)

Table 7: Classification results for mining coconut tree images using neural network classifier based on 288 multi guided multicolor coherence feature (MGMCF)

Table 8: Confusion matrix for mining coconut tree images in neural network classifier using multi guided multicolor coherence feature (MGMCF)

It shows that, out of 150 coconut trees, one is misclassified as a greenery image. Likewise, out of 150 greenery images, two are misclassified as coconut trees.

Experiment 2: Classification of coconut and palm images: A success rate of 96.7% is obtained for the classification of coconut and palm trees, as shown in Table 7, and the corresponding confusion matrix is given in Table 8. Out of 150 coconut trees, all are correctly classified, while out of 150 palm trees, three are misclassified as coconut trees.
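For reference, the overall success rate follows from any confusion matrix as the diagonal total over the grand total; a minimal generic sketch (the matrix passed in would be the per-class counts, rows indexed by true class):

```python
def accuracy_from_confusion(cm):
    """Overall accuracy from a square confusion matrix (rows = true
    class, columns = predicted class): trace divided by total count."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total
```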

INTERPRETATION

The above experiments show that, in most cases: (1) the guided color feature ΔI gives better results in combination with ΔH, ΔG or both; (2) classification with the average of more than one guide image performs better than with a single guide; and (3) the neural network classifier gives prominent results for mining coconut images. Hence, MGMCF can be considered a good feature for image mining/image retrieval with the help of more than one guide/similar image.

CONCLUSION

Multi Guided Multicolor Coherence Features (MGMCF) achieve higher classification rates due to the presence of guide images chosen from the target class. This feature was used for the classification of (1) coconut trees vs. greenery images and (2) coconut trees vs. palm trees. The classification results using a neural network are promising. Hence, the MGMCF feature can be used for guided image retrieval/image mining.

REFERENCES
Balamurugan, P. and R. Rajesh, 2007. Greenery and non-greenery image classification using adaptive neuro-fuzzy inference systems. Comput. Intell. Multimedia Appl., 3: 431-435.

Balamurugan, P. and R. Rajesh, 2008. Guided Classification using CCV and Wavelet features for greenery and non-greenery images. Proceedings of the IEEE 6th International Conference on Computational Cybernetics, November 27-29, 2008, Stara Lesna, pp: 163-168.

Balamurugan, P. and R. Rajesh, 2008. Fuzzy logic approach using guided gray level coherence features for greenery and non-greenery image classification. Far East J. Exp. Theoretical Artif. Intell., 2: 47-58.

Flickner, M., H. Sawhney, W. Niblack, J. Ashley and Q. Huang et al., 1995. Query by image and video content: The QBIC system. Computer, 28: 23-32.

Gonzalez, R.C., E. Woods and L. Eddins, 2004. Digital Image Processing Using MATLAB. 2nd Edn., Prentice Hall, India, ISBN:81-7758-898-2.

Gudivada, V.N. and V.V. Raghavan, 1995. Content based image retrieval systems. Computer, 28: 18-22.

Huang, J., S. Kumar, M. Mitra, W.J. Zhu and R. Zabih, 1997. Image indexing using color correlograms. Proceedings of the Conference on Computer Vision Pattern Recognition, June 17-19, 1997, IEEE Computer Society, Washington DC., USA., pp: 762-768.

Jahne, B., 2002. Digital Image Processing. 5th Edn., Springer, Berlin.

Jang, J.S.R., 1993. ANFIS: Adaptive-network-based fuzzy inference system. IEEE Trans. Syst. Man Cybern., 23: 665-685.

Ogle, V.E. and M. Stonebraker, 1995. Chabot: Retrieval from a relational database of images. IEEE Comput., 28: 40-48.

Park, S.B., J.W. Lee and S.K. Kim, 2004. Content based image classification using neural network. Pattern Recogn. Lett., 25: 287-300.

Pass, G. and R. Zabih, 1996. Histogram refinement for content based image retrieval. Proceedings of the IEEE Workshop on Applications of Computer Vision, December 2, 1996, Sarasota, FL., USA., pp: 96-102.

Pass, G., R. Zabih and J. Miller, 1996. Comparing images using color Coherence vectors. Proceedings of the 4th International ACM Multimedia Conference, November 18-22, 1996, Boston, pp: 65-74.

Pentland, A., R. Picard and S. Sclaroff, 1996. Photobook: Content-based manipulation of image databases. Int. J. Comput. Vision, 18: 233-254.

Smith, J.R. and S.F. Chang, 1996. Tools and techniques for color image retrieval. SPIE Proc., 2670: 426-437.

Stricker, M., A. Dimai and E. Dimai, 1996. Color Indexing with weak spatial constraints. SPIE Proc., 2670: 29-40.

Swain, M.J. and D.H. Ballard, 1991. Color indexing. Int. J. Comput. Vision, 7: 11-32.

Szummer, M. and R.W. Picard, 1998. Indoor-outdoor image classification. Proceedings of the International Workshop on Content-Based Access of Image and Video Databases, January 3, 1998, Bombay, India, pp: 42-51.
