
Asian Journal of Applied Sciences

Year: 2015 | Volume: 8 | Issue: 1 | Page No.: 16-26
DOI: 10.3923/ajaps.2015.16.26
Improving the Annotation Accuracy of Medical Images in ImageCLEFmed2005 Using K-Nearest Neighbor (kNN) Classifier
M.M. Abdulrazzaq and Shahrul Azman Noah

Abstract: Content-based image retrieval (CBIR) systems offer a solution for storing and searching the ever-increasing amount of digital images currently in existence. These systems retrieve images based on low-level features, such as color, texture and shape. However, these visual features do not enable users to request images based on semantic meaning. Semantic retrieval is highly important in various domains, particularly the medical domain, which contains images from various medical devices such as MRI and X-ray. Image annotation or classification systems can be considered a solution to the limitations of existing CBIR systems. In this study, we propose a new approach for image classification using multi-level features and machine learning techniques, particularly the K-Nearest Neighbor (kNN) classifier. We evaluated the proposed approach on the 9000 images available in the ImageCLEFmed2005 dataset. Principal Component Analysis (PCA) was performed to reduce the feature vectors. The accuracy achieved was 89.32% and 92.99% for 80 and 90% of training images, respectively. The results show an improvement over previous studies on the same dataset.


How to cite this article
M.M. Abdulrazzaq and Shahrul Azman Noah, 2015. Improving the Annotation Accuracy of Medical Images in ImageCLEFmed2005 Using K-Nearest Neighbor (kNN) Classifier. Asian Journal of Applied Sciences, 8: 16-26.

Keywords: image retrieval for medical application, principal component analysis, ImageCLEFmed2005, feature extraction, k-nearest neighbor and machine learning

INTRODUCTION

The significant growth of multimedia data has resulted from developments and advancements in technology (De Azevedo-Marques and Rangayyan, 2013). In 2003, the amount of digital information produced was approximately 5×10^18 bytes (Lyman and Varian, 2003). The medical field has also seen an increase in the amount of stored digital information resulting from different imaging modalities, such as Computed Tomography (CT) scans, Magnetic Resonance Imaging (MRI) scans and X-rays, which are produced in enormous numbers daily. For instance, hospitals in Europe have been reported to produce approximately 12,000-15,000 images daily and this number is increasing continuously (Muller et al., 2004). Accordingly, developing techniques that facilitate precise and fast processing of digital medical records proves to be challenging.

The progress in computing and multimedia technologies has facilitated the structuring and archiving of images at low cost. As a result, there has been an urgent call to develop systems that can store and manage such images. Techniques in Content-Based Image Retrieval (CBIR) systems enable users to retrieve relevant images in an efficient and effective manner (Lande et al., 2014). In CBIR, images are usually represented by low-level, or visual, features, which are extracted automatically using image processing techniques. CBIR systems therefore typically require users to express their information needs in a very unnatural way, in terms of low-level features such as colors, textures or example images (Noah and Ali, 2010). Such methods of expressing information needs are unsuitable because humans recognize images based on high-level concepts. This is called the semantic gap problem: the gap between low-level features and high-level concepts. Users are familiar with natural language, such as queries, texts and typical inquiries of images by semantics (Tsai and Hung, 2008).

Machine learning methods have been used to improve image classification systems that map visual (low-level) features to high-level concepts or semantics (Devillers et al., 2005). Mapping images to high-level concepts is frequently referred to as annotation (Noah et al., 2010). The classification of medical images is important for the diagnosis of diseases, surgical planning and medical reference, as well as for teaching, training and research. The classification, or automatic annotation, of images can be considered an essential step for searching images in huge databases. Work on classifying and annotating medical images has been reported in various literatures. Rajini and Bhavani (2011) worked on MRI data for automatic diagnosis; feature extraction and classification are the two main stages of their proposed method. Dimitrovski et al. (2011) adopted a classifier based on an extension of Predictive Clustering Trees (PCTs) for classifying images according to the IRMA code. Isola et al. (2011) proposed a technique that aids medical diagnosis over an entire centralized database by calculating the occurrence probability of a specific illness from the medical data. The RWTH-i6 team, from the Department of Computer Science at Rheinisch-Westfaelische Technische Hochschule (RWTH) Aachen, Germany, was the winner of the ImageCLEFmed2005 task among 12 teams, with only 12.6% misclassified images (Clough et al., 2006). Mueen et al. (2007) proposed a method for medical image classification using kNN and SVM classifiers; the evaluation of their method achieved an 82% accuracy rate for kNN. Charde and Lokhande (2013) proposed an algorithm to retrieve the images most visually relevant to a query image; their results show that texture feature extraction improved recall, precision and classification accuracy. Li and Cheng (2009) proposed an improved kNN classification algorithm based on object-oriented classification for high-resolution remote sensing images. Inthajak et al. (2011) proposed the use of the kNN algorithm with feature stability along with object detection methods.

Various medical image retrieval systems have clearly been developed and further enhancements are actively being implemented. However, both the accuracy and the speed of these systems still need to be improved by selecting appropriate feature representations and developing efficient indexing methods for these features. Therefore, this study focuses on automatic feature extraction and classification for ImageCLEFmed2005 (Lehmann, 2005). Supervised classification for non-parametric data was performed using the kNN classifier. The work presented in this study proposes a combination of features, namely local, global, pixel and Speeded-Up Robust Features (SURF) (Bay et al., 2008), to improve classification accuracy. Furthermore, we apply the Principal Component Analysis (PCA) procedure to reduce the number of features in order to lower the computational processing cost. The success, or accuracy, of feature classification is evaluated using the correctness rate formula. First, the data were separated into training and testing sets. Second, classification was applied to the training set. Finally, the classification results were validated using the testing set.

VISUAL FEATURES

Image classification starts with extracting the appropriate features from the images. Feature extraction is of numerous types, with local and global extraction being the most commonly used (Lisin et al., 2005). Features can be extracted in different ways: global features are extracted from the whole image in an averaged fashion, while local features are extracted from small sub-images taken from the original image. The low-level (visual) features, which include color, texture and shape, are extracted from the images (Mueen et al., 2007).

Color features: Color is a straightforward, simple and widely used feature in CBIR (Chary et al., 2012). The human eye is more sensitive to the amount of color than to the amount of gray scale in an image. The RGB model, also known as the true color model, is the most commonly used model to measure the color similarity between two images (Kerminen and Gabbouj, 1999). In medical images the contribution of color is extremely limited, as numerous medical images are in gray scale. Numerous methods are used to represent color in CBIR, including the color coherence vector, the color correlogram and the color histogram, which is the most popular technique (Yue et al., 2011).
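Since most images in this collection are gray scale, a color-style histogram effectively reduces to an intensity histogram. The following is a minimal sketch of such a histogram feature in Python; the 32-bin setting is an illustrative assumption rather than a value from this study:

    import numpy as np

    def gray_histogram(image, bins=32):
        # Count pixel intensities into equal-width bins over [0, 256)
        hist, _ = np.histogram(image, bins=bins, range=(0, 256))
        # Normalize so that images of different sizes are comparable
        return hist / hist.sum()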

Texture features: No formal definition of texture exists. There are numerous perceptions of texture, but its main characteristic is the repetition of a pattern in a region of the image. Such patterns are considered a natural attribute of all surfaces, such as clouds, wood, skin and bricks (Kulkarni and Kulkarni, 2012).

Texture is extracted from groups of pixels or from regions, in contrast to color, which is extracted from every pixel. Methods for texture analysis can be divided into two approaches: statistical and spectral. Texture features are extracted using filter-based methods. The statistical approach describes texture based on the statistical properties of the texture image. In the spectral approach, the texture description is obtained by taking the Fourier transform of the image and then recombining the resultant data to generate the required measurement (Mueen et al., 2008).

Shape features: Shape is considered a powerful and essential feature for the classification of an image. Shape information is extracted from the histogram of edge detection, where the edge information of an image is obtained using Canny edge detection. Other methods can also be used to extract shape features: the Fourier descriptor, elementary descriptors and template matching. These features capture spatial information that cannot be captured by texture and color histograms. Shape contains all the information on the geometry of an object in the image, which generally remains unchanged despite changes in the scale, location and orientation of the object. The process of extracting shape features is similar to how humans understand the world (Rui et al., 1999).
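To make the edge-histogram idea concrete, the sketch below pairs Canny edge detection with a histogram of gradient orientations at the detected edge pixels, using scikit-image. The input is assumed to be a 2D grayscale array and the 8-bin setting is an illustrative choice, not a parameter from this study:

    import numpy as np
    from skimage import feature, filters

    def edge_orientation_histogram(image, bins=8):
        edges = feature.canny(image)        # boolean edge map from Canny detection
        gy = filters.sobel_h(image)         # Sobel response to horizontal edges
        gx = filters.sobel_v(image)         # Sobel response to vertical edges
        angles = np.arctan2(gy, gx)[edges]  # gradient orientation at edge pixels
        hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi))
        return hist / max(hist.sum(), 1)    # normalized orientation histogram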

To extract shape features, image segmentation should first be performed to determine objects or regions. Shape representations can be divided into two classes: region-based and boundary-based. The boundary-based approach uses information on the boundary of an object (also known as edge detection), whereas the region-based approach uses the entire shape region.

METHODOLOGY

The multi-level learning procedure proposed in this study uses an image processing module that is mainly composed of a group of image segmentation, feature extraction and image enhancement approaches. These functions are considered very important for image classification and retrieval systems because of their considerable effect on classification and retrieval results. Nine thousand images from 57 classes were used in the evaluative experiments. The ImageCLEFmed2005 database from the Department of Medical Informatics, RWTH Aachen, Germany was used in this study (Lehmann, 2005). The images in the database cover different genders, ages, view positions and pathologies. All the images were scaled to a 512x512 bounding box while maintaining the original aspect ratio. The 57 categories were defined for the 9000 images based on the IRMA code. Figure 1 presents a thumbnail sample for each of the 57 categories, together with the sequence number of each category and its number of images.

In this study, kNN is used to improve the classification accuracy on ImageCLEFmed2005. The pixel, global, local and SURF features are initially extracted from each image in the database. Subsequently, the extracted features are combined to improve the image classification rate. Due to the large number of extracted features, the combined feature vectors are then reduced three times using the PCA procedure in order to select the best accuracy result. The kNN classifier is then used to identify certain objects from images and classify them into their respective classes. The following sub-sections discuss the method in detail.

Feature extraction: Feature extraction is one of the most critical steps in image indexing. The purpose of feature extraction is to reduce the original data set by measuring certain properties of features that distinguish one input pattern from another. As mentioned earlier, feature extraction is based on the pixel, global, local and SURF features. The low-level features (texture and shape) are extracted from the whole image using the global feature method. For the local feature method, low-level feature extraction is performed on small segmented groups of pixels taken from the original image, once for each segment.

The Gray Level Co-occurrence Matrix (GLCM) was used for texture feature extraction. A co-occurrence matrix is a matrix defined over an image to represent the distribution of co-occurring intensity values at a given offset. Formally, a co-occurrence matrix C is defined over an n×m image I, parameterized by an offset (Δx, Δy), as shown in Eq. 1, where i and j are intensity values of the image, p and q are spatial positions in the image I and the offset (Δx, Δy) depends on the direction θ and the distance d at which the matrix is computed. The 'value' of the image originally referred to the grayscale value of the specified pixel. Haralick et al. (1973) defined the statistical texture features of the GLCM.

C_{\Delta x,\Delta y}(i,j) = \sum_{p=1}^{n} \sum_{q=1}^{m} \begin{cases} 1, & \text{if } I(p,q) = i \text{ and } I(p+\Delta x,\ q+\Delta y) = j \\ 0, & \text{otherwise} \end{cases}    (1)
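A hedged sketch of this texture step using scikit-image (version 0.19 or later, where the functions are named graycomatrix and graycoprops). The offsets mirror the horizontal, vertical and diagonal directions used in this study; the exact set of Haralick statistics is not specified here, so the four properties below are illustrative choices:

    import numpy as np
    from skimage.feature import graycomatrix, graycoprops

    def glcm_features(image):
        # image: 2D uint8 array; distance d = 1 with four directions theta
        glcm = graycomatrix(image, distances=[1],
                            angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                            levels=256, symmetric=True, normed=True)
        # Four of Haralick's GLCM statistics (illustrative selection)
        props = ['contrast', 'correlation', 'energy', 'homogeneity']
        return np.hstack([graycoprops(glcm, p).ravel() for p in props])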

The Histogram of Oriented Gradients (HOG) was used for shape feature extraction. HOG is a feature descriptor used for object detection in an image. SURF was also used in our study, as the fourth level of feature extraction; SURF is a robust feature detector and a scale- and rotation-invariant interest point detector and descriptor.
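A minimal HOG sketch with scikit-image follows; the parameter values are generic defaults rather than the two-stage 50 + 80 feature configuration described in the next section. SURF descriptors are available separately in OpenCV's contrib module, although the 150-feature encoding used in this study is not detailed here:

    from skimage.feature import hog
    from skimage.transform import resize

    def hog_features(image, size=(256, 256)):
        # Resize to 256x256, as done for shape extraction in this study
        resized = resize(image, size)
        # Orientation/cell/block settings are illustrative assumptions
        return hog(resized, orientations=9, pixels_per_cell=(32, 32),
                   cells_per_block=(2, 2), feature_vector=True)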

The four types of extracted features, each describing a different aspect of an image, were combined in this study. Combining extracted features is commonly done to improve image recognition rates. Generally, global image features tend to be used in most content-based medical image retrieval systems because such features capture the overall structure of an image, while local features describe more of its details.

Feature combination: Nine thousand images are used in this study. The numbers of features extracted at the pixel, global, local and SURF levels are 225, 282, 1128 and 150, respectively. Therefore, a total of 1785 features have been extracted, as follows (a sketch of the local-level split and the final concatenation is given after the list):

A feature set of size 225 is produced from the pixel-level information by resizing the image to 15x15
At the global level, the total number of features is 282, obtained from three operations. The gray level co-occurrence matrix is applied in the horizontal, vertical and diagonal directions after resizing the image to 256x256, producing 88 features. The wavelet principle is applied after resizing the image to 256x256, producing 64 features
The shape principle is also applied to the image, resizing it to 256x256, followed by the two stages of HOG: the gradient histogram, producing 50 features and the edge orientation histogram, producing 80 features, so that the total number of features at this stage is 130
Local features are extracted by applying the image segmentation principle to each image: every image is divided into four blocks (50x50) and the same techniques used at the global level are applied to each block, producing 1128 features
SURF is applied to the images as the fourth level of feature extraction, producing 150 features
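The sketch below illustrates the local-level split and the final concatenation under stated assumptions: a 2x2 quadrant layout for the four blocks (the exact block geometry is not fully specified above) and an extract callable standing in for the global-level extractor:

    import numpy as np

    def local_features(image, extract):
        # Four blocks in a 2x2 quadrant layout (assumed geometry); with a
        # 282-feature global extractor this yields 4 x 282 = 1128 features
        h, w = image.shape
        blocks = [image[:h//2, :w//2], image[:h//2, w//2:],
                  image[h//2:, :w//2], image[h//2:, w//2:]]
        return np.hstack([extract(b) for b in blocks])

    def combined_vector(pixel_vec, global_vec, local_vec, surf_vec):
        # Concatenate the four levels: 225 + 282 + 1128 + 150 = 1785 features
        return np.hstack([pixel_vec, global_vec, local_vec, surf_vec])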

Feature selection: Dimensionality reduction techniques have to be applied because the dimensionality of the feature vector is high. One of the best methods for reduction is PCA, which is considered a competent and very effective approach to dimensionality reduction. The feature vectors were reduced from 1785 features to 25, 50 and 100 features, separately, in order to experiment and select the best accuracy result.
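A minimal sketch of this reduction with scikit-learn's PCA; fitting the components on the training split only (and reusing them on the test split) is standard practice and an assumption here, and n_components = 50 corresponds to PCA2, the best-performing setting reported below:

    from sklearn.decomposition import PCA

    def reduce_features(X_train, X_test, n_components=50):
        # Learn the principal components on the training vectors only
        pca = PCA(n_components=n_components)
        # Project both splits onto the same reduced space
        return pca.fit_transform(X_train), pca.transform(X_test)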

Classification: Classifying images according to a set of concepts constitutes the image classification process. The main goal of image classification is to overcome the semantic gap problem. Image pattern recognition identifies objects in images and then uses classification methods to categorize these objects into a number of classes or categories. Multiple features were extracted from the images to provide more detailed information about the original images. The feature vectors containing these features help the classifier learn its model during the training stage, increasing the accuracy results.

The segmentation and feature extraction components are applied to the test images to create the feature vector used as input during the testing stage. The classifier used in this study is kNN. Based on the model learned from the training set, kNN determines the class in which the feature vector of each test image fits.

The kNN classifier is a conventional non-parametric supervised classifier that is said to yield good performance for optimal values of k. Similar to other supervised learning algorithms, the kNN algorithm consists of a training phase and a testing phase. In the training phase, data points are given in an n-dimensional space, each labeled with its respective class.

The kNN algorithm comprises the following stages (Charde and Lokhande, 2013); a minimal sketch is given after the list:

Determine a suitable distance metric
During the training phase: Store the entire training data set P in pairs (according to the selected features), P = (yi, ci), i = 1 to n, where yi is a training pattern in the training data set, ci is its corresponding class and n is the number of training patterns
During the test phase: Compute the distances between the new feature vector and all the stored features (training data)
Pick the k nearest neighbors and let them predict the class of the new example. The correct classification given in the test phase is used to assess the correctness of the algorithm. If this is not satisfactory, the k value can be tuned until a reasonable level of correctness is achieved
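A minimal sketch of these stages using scikit-learn's KNeighborsClassifier; k = 1 matches the setting used in the experiments below, while Euclidean distance is an assumed choice for the "suitable distance metric":

    from sklearn.neighbors import KNeighborsClassifier

    def knn_classify(X_train, y_train, X_test, k=1):
        # Training phase: store the labeled feature vectors
        knn = KNeighborsClassifier(n_neighbors=k, metric='euclidean')
        knn.fit(X_train, y_train)
        # Test phase: each test vector takes the majority class of its
        # k nearest stored neighbors (with k = 1, its single nearest one)
        return knn.predict(X_test)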

RESULTS

During the evaluation, the data set was randomly partitioned. In this experiment, we used two partition schemes: 80% for training and 20% for testing (80/20) and 90% for training and 10% for testing (90/10). We performed random sampling on the dataset ten times in order to produce reliable results, using the kNN classifier with k = 1. These stages were applied both with a threshold of 200 images as the maximum per category and without a threshold, using the pixel, global, local, SURF and combined features separately. The threshold of 200 images was applied because of the unbalanced number of images across the 57 categories. An example of this imbalance can be seen in Fig. 1, whereby category 12 has 2563 images but category 34 has only 880 images. Accuracy was evaluated using the correctness rate formula, which computes the number of correctly classified images divided by the total number of testing images. Multiple trials were used to improve the accuracy result. PCA was applied to reduce the number of features from 1785 to 25, 50 and 100, called PCA1, PCA2 and PCA3, respectively. Accuracy was calculated using PCA1, PCA2 and PCA3, both with and without the threshold. The best accuracy was achieved by PCA2, as shown in Tables 1 and 2.
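The protocol just described can be sketched as follows, under the assumption that X holds the PCA-reduced feature vectors and y the category labels; the ten seeded splits stand in for the ten random samplings:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.metrics import accuracy_score

    def mean_correctness(X, y, test_size=0.2, trials=10):
        scores = []
        for seed in range(trials):
            # Random partition (80/20 by default), repeated ten times
            X_tr, X_te, y_tr, y_te = train_test_split(
                X, y, test_size=test_size, random_state=seed)
            knn = KNeighborsClassifier(n_neighbors=1).fit(X_tr, y_tr)
            # Correctness rate: correctly classified / total test images
            scores.append(accuracy_score(y_te, knn.predict(X_te)))
        return float(np.mean(scores))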

The accuracy result from using the feature combination is better than that from using each of the features separately, as shown in Tables 3 and 4.

Table 1: Accuracy for the reduced features with 80% training, 20% testing

Table 2: Accuracy for the reduced features with 90% training, 10% testing

Fig. 1: 57 Categories for ImageCLEFmed2005

Figures 2 and 3 show the accuracy results for each of the 57 classes without using the threshold, for the 80/20 and 90/10 partitions of the training and testing data set, respectively.

Fig. 2: Number of categorized images from each of the 57 classes without using a threshold, with 80% training, 20% testing

Fig. 3: Number of categorized images from each of the 57 classes without using a threshold, with 90% training, 10% testing

Table 3: Accuracy results with 80% training, 20% testing

Table 4: Accuracy results with 90% training, 10% testing

It is evident from the figures that accuracy is high for the categories containing a large number of images; we can therefore conclude that as the number of images per category increases, the accuracy increases too.

DISCUSSION

The results show an improvement compared with previous studies on the same dataset. The proposed method achieved better accuracy than the winner of the ImageCLEFmed2005 task, the RWTH-i6 team, as well as the UTD and MIRACLE teams. The RWTH-i6 team used the Sobel filter and Tamura texture for feature extraction and a kNN classifier, achieving an 87.4% accuracy rate (Clough et al., 2006). The UTD team from the University of Texas at Dallas, TX, USA achieved a 31.7% error rate; they scaled all images to 16x16 thumbnails, applied PCA to the pixel gray values and used different weighted kNN variants for classification (Muller et al., 2007). The MIRACLE team achieved fourth place in the ImageCLEF2008 task (Deselaers and Deserno, 2009). That team used gray histogram features, Gabor features (4 scales, 8 orientations), co-occurrence matrix statistics, DCT coefficients, Tamura features and Discrete Wavelet Transform (DWT) coefficients from image pixels resized to 256x256; all images were also resized to 64x64 and the same features extracted. They used kNN for classification, obtaining the best results at k = 3. Additionally, we achieved better results than Mueen et al. (2007), who used pixel, global and local feature extraction to create a 490-feature vector, applied PCA to reduce it to 30 features and applied a kNN classifier with k = 1, achieving an 82% accuracy rate. Moreover, the misclassification problem found in categories 37, 39 and 46 in their study has been solved in our study. The accuracy over all categories in our study has been improved to 89.32% for the 80% training, 20% testing case and 92.712% for the 90% training, 10% testing case without using the threshold. Using a threshold for the folders that contain more than 200 images was found to yield poor accuracy, as shown in Tables 3 and 4. Thus, removing this threshold while applying the same conditions to all folders enhances the results.

As presented, four types of features were extracted from each image in the database: pixel, global, local and SURF features. Based on the obtained results, it can be concluded that using combined feature vectors offers better performance, accuracy and recognition rates than using a single feature. Furthermore, the pixel features outperform both the local and global features, while the local features offer better accuracy than the global ones.

CONCLUSION

In this study, ImageCLEFmed2005 was used for evaluation. We examined the advantages that can be achieved by extracting multi-level features, namely pixel, global, local and SURF. All of these features were reduced multiple times using the PCA algorithm and then the kNN classifier was used for image classification. We conducted experiments with and without a threshold of 200 images as the maximum per category and found that the accuracy achieved without the threshold is better than that achieved with it. Our study shows that the combination of the extracted features reaches the highest level of accuracy. Moreover, the accuracy was improved compared with previous studies. Future studies include analyzing other types of classifiers with this feature extraction for a new accuracy evaluation.

ACKNOWLEDGMENTS

The author would like to thank the Department of Medical Informatics, RWTH Aachen, Germany, for providing the image data and Prof. Dr. Shahrul Azman bin Mohd Noah for his guidance and supervision during the writing of this study. The author also wants to thank everyone who has supported the completion of this study.

REFERENCES

  • De Azevedo-Marques, P.M. and R.M. Rangayyan, 2013. Content-Based Retrieval of Medical Images: Landmarking, Indexing and Relevance Feedback. 1st Edn., Morgan and Claypool Publishers, San Rafael, CA., USA., ISBN-13: 978-1627051415, Pages: 144


  • Bay, H., A. Ess, T. Tuytelaars and L. van Gool, 2008. Speeded-Up Robust Features (SURF). Comput. Vision Image Understand., 110: 346-359.


  • Charde, P.A. and S.D. Lokhande, 2013. Classification using K nearest neighbor for brain image retrieval. Int. J. Scient. Eng. Res., 4: 760-765.


  • Chary, R.V.R., D.R. Lakshmi and K.V.N. Sunitha, 2012. Feature extraction methods for color image similarity. Adv. Comput.: Int. J., 3: 147-157.


  • Clough, P., H. Muller, T. Deselaers, M. Grubinger, T.M. Lehmann, J. Jensen and W. Hersh, 2006. The CLEF 2005 cross-language image retrieval track. Proceedings of the 6th Workshop of the Cross-Language Evalution Forum, September 21-23, 2005, Vienna, Austria, pp: 535-557.


  • Deselaers, T. and T.M. Deserno, 2009. Medical image annotation in ImageCLEF 2008. Proceedings of the 9th Workshop of the Cross-Language Evaluation Forum, September 17-19, 2008, Aarhus, Denmark, pp: 523-530.


  • Devillers, L., L. Vidrascu and L. Lamel, 2005. Challenges in real-life emotion annotation and machine learning based detection. Neural Networks, 18: 407-422.


  • Dimitrovski, I., D. Kocev, S. Loskovska and S. Dzeroski, 2011. Hierarchical annotation of medical images. Pattern Recogn., 44: 2436-2449.


  • Haralick, R.M., K. Shanmugam and I.H. Dinstein, 1973. Textural features for image classification. IEEE Trans. Syst. Man Cybern., SMC-3: 610-621.


  • Inthajak, K., C. Duanggate, B. Uyyanonvara, S.S. Makhanov and S. Barman, 2011. Medical image blob detection with feature stability and KNN classification. Proceedings of the 8th International Joint Conference on Computer Science and Software Engineering, May 11-13, 2011, Nakhon Pathom, Thailand, pp: 128-131.


  • Isola, R., R. Carvalho, M. Iyer and A.K. Tripathy, 2011. Automated differential diagnosis in medical systems using neural networks, kNN and SOM. Proceedings of the 4th International Conference on Developments in e-Systems Engineering, December 6-8, 2011, Dubai, pp: 62-67.


  • Kerminen, P. and M. Gabbouj, 1999. Image retrieval based on color matching. Proceedings of the Finnish Signal Processing Symposium, May 31, 1999, University of Oulu, Oulu, Finland, pp: 89-93.


  • Kulkarni, S. and P. Kulkarni, 2012. Texture Feature Extraction and Classification by Combining Statistical and Neural Based Technique for Efficient CBIR. In: Computer Applications for Bio-Technology, Multimedia and Ubiquitous City, Kim, T.H., J.J. Kang, W.I. Grosky, T. Arslan and N. Pissinou (Eds.). Springer Berlin Heidelberg, New York, USA., ISBN-13: 978-3-642-35521-9, pp: 106-113


  • Lande, M.V., P. Bhanodiya and P. Jain, 2014. An effective content-based image retrieval using color, texture and shape feature. Proceedings of the International Conference on Advanced Computing, Networking and Informatics, June 12-14, 2013, Raipur, India, pp: 1163-1170.


  • Lehmann, T., 2005. IRMA x-ray library. http://irma-project.org.


  • Li, Y. and B. Cheng, 2009. An improved k-nearest neighbor algorithm and its application to high resolution remote sensing image classification. Proceedings of the 17th International Conference on Geoinformatics, August 12-14, 2009, Fairfax, VA., USA., pp: 1-4.


  • Lisin, D.A., M.A. Mattar, M.B. Blaschko, E.G. Learned-Miller and M.C. Benfield, 2005. Combining local and global image features for object class recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 25, 2005, San Diego, CA., USA., pp: 47.


  • Lyman, P. and H.R. Varian, 2003. How much information 2003? http://www2.sims.berkeley.edu/research/projects/how-much-info-2003/.


  • Mueen, A., M.S. Baba and R. Zainuddin, 2007. Multilevel feature extraction and X-ray image classification. J. Applied Sci., 7: 1224-1229.


  • Mueen, A., R. Zainuddin and M.S. Baba, 2008. Automatic multilevel medical image annotation and retrieval. J. Digit. Imaging, 21: 290-295.


  • Muller, H., T. Deselaers, T. Deserno, P. Clough, E. Kim and W. Hersh, 2007. Overview of the imageCLEFmed 2006 medical retrieval and medical annotation tasks. Proceedings of the 7th Workshop of the Cross-Language Evaluation Forum, September 20-22, 2006, Alicante, Spain, pp: 595-608.


  • Muller, H., N. Michoux, D. Bandon and A. Geissbuhler, 2004. A review of content-based image retrieval systems in medical applications-clinical benefits and future directions. Int. J. Med. Inform., 73: 1-23.


  • Noah, S.A. and D.A. Ali, 2010. The role of lexical ontology in expanding the semantic textual content of on-line news images. Proceedings of the 6th Asia Information Retrieval Societies Conference, December 1-3, 2010, Taipei, Taiwan, pp: 193-202.


  • Noah, S.A., D.A. Ali, A.C. Alhadi and J.M. Kassim, 2010. Going beyond the surrounding text to semantically annotate and search digital images. Proceedings of the 2nd Asian Conference on Intelligent Information and Database Systems, March 24-26, 2010, Hue City, Vietnam, pp: 169-179.


  • Rajini, N.H. and R. Bhavani, 2011. Classification of MRI brain images using k-nearest neighbor and artificial neural network. Proceedings of the International Conference on Recent Trends in Information Technology, June 3-5, 2011, Chennai, Tamil Nadu, pp: 563-568.


  • Rui, Y., T.S. Huang and S.F. Chang, 1999. Image retrieval: Current techniques, promising directions and open issues. J. Vis. Commun. Image Represent., 10: 39-62.


  • Tsai, C.F. and C. Hung, 2008. Automatically annotating images with keywords: A review of image annotation systems. Recent Patents Comput. Sci., 1: 55-68.


  • Yue, J., Z. Li, L. Liu and Z. Fu, 2011. Content-based image retrieval using color and texture fused features. Math. Comput. Mod., 54: 1121-1127.
