Research Article
Feature Extraction and Classification of Objects in the Rosette Pattern Using Component Analysis and Neural Network

Hadi Soltanizadeh and Shahriar Baradaran Shokouhi

In this study, a new method for clustering and classification in the rosette pattern is introduced. In the IR (infrared) rosette pattern seeker, the received IR signal is sampled and the samples are reconstructed in the rosette pattern space and mapped into a new space. The samples in the new space are then clustered and classified and the target class is detected. Finally, the target position is determined and the target center of gravity is computed in order to track it. The advantage of mapping to this new space, which we have named BRMS (Binary Rosette Mapping Space), is that data in BRMS can be clustered, classified and their center of gravity computed easily. Processing time, an important parameter in an RSIS (Rosette Scanning Infrared Seeker), is improved compared with previous methods and the effect of the rosette pattern nonlinearity is decreased. In the next step, features of the clusters are extracted by PCA (Principal Component Analysis), ICA (Independent Component Analysis) or LDA (Linear Discriminant Analysis), instead of using cluster size or intensity in the rosette pattern; the features are then classified by MLP (Multi-Layer Perceptron), RBF (Radial Basis Function) and ART (Adaptive Resonance Theory) neural networks and the results are compared.


  How to cite this article:

Hadi Soltanizadeh and Shahriar Baradaran Shokouhi, 2008. Feature Extraction and Classification of Objects in the Rosette Pattern Using Component Analysis and Neural Network. Journal of Applied Sciences, 8: 4088-4096.

DOI: 10.3923/jas.2008.4088.4096

INTRODUCTION
In 1950, infrared detectors were used practically for the first time. Initially, a rotating scanning reticle was used in IR seekers. The IR seeker, which is placed at the missile's head, detects the target radiation and provides the required field of view (FOV) information for the servomotors (Jahang et al., 1999a).

Subsequently, spin scanning seekers with FM modulation were substituted by rotating scanning seekers, which have improved detectors with better SNR. Other seekers, such as the improved rotating line-scan seeker, the crossed spinning-scan seeker and the rosette scan seeker, were created over time (Pooly et al., 2001).

The rosette scanning infrared seeker is a single-band detector installed on heat-tracking missiles; the target data and position are extracted by scanning the total field of view and are sent to the missile control system (Jahang et al., 1999b).

Aircraft release flares to protect themselves from heat-tracking missiles, so classifying the detected samples in 2D space is important for distinguishing the target from flares.

The rosette pattern is created by rotating two optical elements, such as prisms, tilted mirrors or lenses, in opposite directions. If the rotational frequencies of the two optical elements are f1 and f2, respectively, the loci of the rosette pattern at an arbitrary time t in Cartesian coordinates can be expressed by Eq. 1 (Jahang et al., 2000).

x(t) = (δ/2)[cos(2πf1t) + cos(2πf2t)]
y(t) = (δ/2)[sin(2πf1t) − sin(2πf2t)]        (1)
The equations of the rosette pattern in polar coordinates are as follows:

ρ(t) = δ cos[π(f1 + f2)t], θ(t) = π(f1 − f2)t        (2)
where δ is the prism deviation angle, which determines the size of the rosette pattern petals. f1 and f2 have the greatest common divisor F, as shown below:

f1 = N1F        (3)
f2 = N2F        (4)

where N1 and N2 are positive integers. By dividing Eq. 3 by Eq. 4 we obtain:

f1/f2 = N1/N2        (5)
The rosette pattern period is obtained from Eq. 6:

T = 1/F        (6)
And the number of petals in the rosette pattern is computed by:

N = N1 + N2        (7)
Overlap in the rosette pattern depends on ΔN = N1 − N2 and increases as ΔN increases. If ΔN < 3, there is no overlap in the rosette pattern.
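The pattern geometry above can be sketched numerically. The following Python snippet is an illustration (not the authors' code): it samples one period of the parametric loci in Eq. 1 and counts the petals via N = N1 + N2.

```python
import numpy as np

def rosette(n1, n2, delta=1.0, big_f=1.0, n_samples=2000):
    """Sample one period of the rosette pattern (Eq. 1), with
    f1 = n1*F, f2 = n2*F (Eq. 3-4) and period T = 1/F (Eq. 6)."""
    f1, f2 = n1 * big_f, n2 * big_f
    t = np.linspace(0.0, 1.0 / big_f, n_samples, endpoint=False)
    x = (delta / 2) * (np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t))
    y = (delta / 2) * (np.sin(2 * np.pi * f1 * t) - np.sin(2 * np.pi * f2 * t))
    return x, y

# The pattern has N = N1 + N2 petals; e.g. N1 = 11, N2 = 4 gives 15 petals,
# the petal tips reach the pattern radius delta and the scan repeatedly
# passes near the centre (which is why the centre is scanned densely).
x, y = rosette(11, 4)
```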

Target tracking consists of three steps: (1) extraction of the received data from the IR sensor, (2) data clustering and classification to detect the target and flares and (3) computing the target center of gravity and directing the missile toward it.

In the rosette scanning infrared seeker, while the TFOV (Total Field Of View) is scanned by the IFOV (Instantaneous Field Of View), pulses proportional to the target's radiation intensity are created in the time domain wherever objects are in the TFOV. These pulses are reconstructed through the rosette pattern with the same rosette scanning parameters. Figure 1a shows a target in the rosette pattern, Fig. 1b shows the pulses generated from the target in the pre-amplifier unit and Fig. 1c shows the image reconstructed from the generated pulses.

Since the scanning line density is not the same in different pattern locations, the reconstructed image of a target depends on the target location in the rosette pattern TFOV. After reconstructing the data in the rosette pattern, the data should be clustered to separate the objects. So far, general methods such as Moment, K-Means and ISODATA, each with its own properties, have been used for clustering data in the rosette pattern (Jahang et al., 2000).

After data clustering, the clusters should be classified to detect the target and track it. Using classification methods and target features, the target is distinguished from flares and the missile tracks the target. Up to now, target features such as target size or received signal intensity have been used for classification and target detection (Shokouhi et al., 2005). However, if there are many flares in the TFOV, these methods cannot detect the target correctly.

Fig. 1: (a) A target in rosette pattern TFOV (Total Field Of View), (b) Generated pulses from the target and (c) Reconstructing target from the pulses

In the proposed method, the features of the reconstructed image in the rosette pattern are first extracted by PCA, ICA or LDA. The extracted features are then classified by neural networks such as MLP, RBF or ART. In this approach, all samples are mapped into a new space named BRMS, where the data can be clustered easily.

MATERIALS AND METHODS
Processing time in seekers is limited: all processing should finish within one pattern period, during which the target should be detected. If the seeker cannot detect and track the target in time, it will miss it. Processing time is therefore an important parameter in the proposed algorithm.

BRMS is a 2D space into which the rosette pattern data are mapped. In this space, the image size can be decreased and clustering can be done easily. This space was proposed earlier for a simple rosette pattern without overlapping, under the name ALCA (Array Linkage Clustering Algorithm) (Jahang et al., 2002). We use this idea for the overlapped rosette pattern and rename it BRMS, because it creates a binary image of the target or flares. For simplicity, consider a rosette pattern without overlapping. To make a rosette pattern without overlapping, the N1 and N2 parameters should be selected such that ΔN = N1 − N2 ≤ 2. Figure 2a shows a rosette pattern without overlapping. For mapping to BRMS, each petal of the rosette pattern is first divided into two half petals by a line passing through the rosette pattern center. The half petals are numbered in counterclockwise order. The half-petal number i forms the vertical axis in BRMS and varies from 0 to 2N−1, where N is the number of petals in the rosette pattern; i is computed by Eq. 8.

i = ⌊Nθ(t)/π⌋ mod 2N        (8)
In Eq. 8, θ(t) is obtained from Eq. 2 and indicates the phase of each point in the rosette pattern. Figure 2a shows the non-overlapping rosette pattern with divided petals and numbered half petals.

Samples in each half petal are numbered {0, …, α−1}, where α is the number of samples in each half petal. In BRMS the horizontal axis is the sample number, normalized to [0, 1].

The resolution of the generated BRMS images depends on the IR signal sampling frequency. Each row in BRMS thus corresponds to one half petal of the rosette pattern.

If NT denotes the total number of samples in the rosette pattern, then:

NT = 2Nα

Fig. 2: (a) Rosette pattern with N1 = 15 and N2 = 13 and (b) BRMS and a mapped target

Figure 2b shows BRMS mapping and a mapped target into the BRMS.

In the case of an overlapping rosette pattern, mapping to the new space is similar to the non-overlapping case. The rosette pattern is divided into 2N = 2(N1+N2) sectors by lines passing through the crossing points, as shown in Fig. 3a, and the samples in each half petal are numbered (0, 1, …, α−1).

Therefore the whole rosette pattern is divided into sectors i = 0, …, 2N−1, which form the vertical axis in BRMS. As before, the samples of each half petal are numbered j = 0, …, α−1 and form the horizontal axis in BRMS.

Figure 3 shows a target in the overlapping rosette pattern with N1 = 11, N2 = 4 and target mapped into the BRMS.

Fig. 3: (a) A target in the overlapped rosette pattern and (b) Mapped target in the BRMS

Since the sample density along the curvature in the outer part of the rosette pattern is higher than in the inner parts, the samples in the outer part are more compressed than those in the inner part. This can cause errors in object size in BRMS: an object placed in the outer part of the rosette pattern appears larger in BRMS than the same object placed in the inner part. To compensate for this error, in the improved BRMS model the horizontal axis is the Euclidean distance between each sample and the rosette pattern center, instead of the sample number. The nonlinearity effect of the rosette pattern is thereby decreased.
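A minimal sketch of the improved-BRMS column rule follows. The bin count and bin edges are assumptions: the text only states that the sample number is replaced by the distance to the pattern centre.

```python
import numpy as np

def improved_brms_col(rho, delta, n_cols):
    """Column index in improved BRMS: quantize the Euclidean distance
    rho of each sample from the pattern centre (0..delta) into n_cols
    bins, so an object's apparent width no longer depends on where the
    scan lines are densest."""
    j = (np.asarray(rho) / delta * n_cols).astype(int)
    return np.clip(j, 0, n_cols - 1)
```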

Figure 4a shows a target and a flare in the rosette pattern. Figure 4b and c show mapping of these objects into the BRMS and Improved BRMS respectively.

Fig. 4: (a) A target and flare in the rosette pattern, (b) Mapping target and flare into BRMS and (c) Mapping target and flare into improved BRMS

It is obvious that the target size is 3 times larger than the flare's (Fig. 4a), but in BRMS the flare size is equal to the target's, because the flare is in the outer part of the rosette pattern and the target is located in the inner part (Fig. 4b). In the improved BRMS, however, the target and flare sizes are correct (Fig. 4c).


Classification and target detection can be done easily by considering the target size in the pattern. Since the target image is usually larger than the flare's in the rosette pattern, and in BRMS as well, size can be used as a feature for classifying target and flare: the biggest class in BRMS is identified as the target. But in cases where flares are shot as a cloud, the flares form a bigger class than the target in BRMS and may improperly be recognized as the target. Therefore we have to find appropriate features for separating the target from a cloud of flares in BRMS. Component analysis methods are used for feature extraction and the extracted features are compared in the implemented neural networks for classification.

Feature extraction using PCA: Principal Component Analysis is a classic method in statistical data analysis, feature extraction and data mining (Mahyabadi et al., 2006; Oravec et al., 2004). In this method, new axes are defined so that the data mapped onto them have maximum variance.

The transformation matrix V is computed by the PCA algorithm. A BRMS image is already much smaller than the original rosette pattern image: 80×30 pixels, reshaped to a 2400×1 vector. Using PCA we reduced this further to 100×1, which retains 97.56% of the information of the original image. Therefore the 100 eigenvectors corresponding to the 100 largest eigenvalues, normalized to unit length, are stored as V. In the following equation, A is the matrix of the 250 training images and R is the matrix of training image features, with dimension 250×100.

R(PCA) = AV
For example, the 80 images of the first testing set form the rows of matrix T and their features are extracted by:

Rtest(PCA) = TV
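This PCA step can be sketched in Python as follows. Centring the data and computing the eigenvectors via SVD are implementation choices of this sketch, not details stated in the paper.

```python
import numpy as np

def pca_fit(A, k=100):
    """Return the k leading eigenvectors V of the training images A
    (one image per row) and the training features R = A_c @ V,
    matching R(PCA) = A*V in the text, plus the variance fraction kept."""
    mean = A.mean(axis=0)
    Ac = A - mean
    # SVD of the centred data <=> eigendecomposition of the covariance
    _, s, Vt = np.linalg.svd(Ac, full_matrices=False)
    V = Vt[:k].T                                  # unit-norm eigenvectors
    kept = (s[:k] ** 2).sum() / (s ** 2).sum()    # e.g. ~97.56% in the paper
    return V, Ac @ V, mean, kept

def pca_features(T, V, mean):
    """Rtest(PCA) = T*V for a matrix T of test images."""
    return (T - mean) @ V
```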

Feature extraction using ICA: Independent Component Analysis is a method for finding the underlying components of data (Chengjun, 2004; Hyvarinen and Oja, 2000). Unlike PCA, ICA finds components that are statistically independent and non-Gaussian.

Consider two source signals S1 and S2 and two mixed signals X1 and X2, as in Eq. 13:

X1 = V1S1+V2S2
X2 = U1S1+U2S2

Using the ICA algorithm, the S1 and S2 signals are estimated; by computing the V and U vectors, the features of the X1 and X2 signals are obtained.

To run ICA, we used the fast fixed-point algorithm discussed in Bartlett et al. (2002). First, the 80×30 pixel images are reshaped to 2400×1 vectors. Then, as preprocessing, PCA and whitening are applied to the training and testing images; using PCA, the image dimension is reduced from 2400×1 to 100×1. The separating matrix W is then computed by the fast fixed-point algorithm for the 250 training images. So we have:

R(ICA) = W·R(PCA)
In this equation, W is a 250×250 separating matrix, R(PCA) is the matrix of PCA features and R(ICA) is a 250×100 matrix whose rows contain the ICA features of the images.

And for the testing images we have:

Rtest(ICA) = W.R(test)
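A compact fixed-point (FastICA-style) sketch is shown below, applied to whitened data. The tanh nonlinearity and symmetric decorrelation are assumptions of this sketch; the exact update in Bartlett et al. (2002) may differ.

```python
import numpy as np

def fast_fixed_point_ica(Z, n_iter=200, seed=0):
    """Estimate a separating matrix W for whitened data Z
    (components x samples), so that the recovered components are W @ Z,
    mirroring R(ICA) = W * R(PCA) in the text."""
    rng = np.random.default_rng(seed)
    n = Z.shape[0]
    W = rng.normal(size=(n, n))
    for _ in range(n_iter):
        Y = W @ Z
        G, Gp = np.tanh(Y), 1.0 - np.tanh(Y) ** 2
        # fixed-point update: E[z g(w.z)] - E[g'(w.z)] w, row by row
        W = (G @ Z.T) / Z.shape[1] - np.diag(Gp.mean(axis=1)) @ W
        U, _, Vt = np.linalg.svd(W)   # symmetric decorrelation:
        W = U @ Vt                    # W <- (W W^T)^(-1/2) W
    return W
```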

Feature extraction using LDA: Linear Discriminant Analysis is applicable only under certain conditions: M, the minimum number of training samples, should be greater than N+c, where N is the number of elements in the data vectors and c is the number of classes (M > N+c) (Ren et al., 2005).

In this approach there are two classes, target and flare (c = 2), and N = 2400, while the number of training samples is M = 250 and 250 < 2402. Therefore we first apply PCA to reduce the data vectors to N = 100; then, since 250 > 102, LDA can be run.

After running LDA as described in Ren et al. (2005), LDA returns one real value for each input image. The mean LDA values in the training phase were 0.028 for targets and 0.0088 for flares; in the testing phase they were 0.0308 and 0.0172, respectively.

For image classification using LDA, the LDA output is compared with these mean values and the object is recognized as a target or a flare.
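A two-class Fisher LDA sketch on the PCA-reduced features is given below. The midpoint decision threshold is an assumption of this sketch; the paper compares the LDA output with the class mean values.

```python
import numpy as np

def fisher_lda(X0, X1):
    """Fisher discriminant for two classes (rows = samples):
    w = Sw^{-1}(m1 - m0) maximizes between-class over within-class
    scatter; objects are scored by x . w."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)
    thr = 0.5 * ((X0 @ w).mean() + (X1 @ w).mean())
    return w, thr

def is_target(x, w, thr):
    """Assumes class 1 (target) has the larger projected mean."""
    return x @ w > thr
```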


For image classification using the features extracted by PCA or ICA, we selected three neural networks: MLP for its simplicity, RBF for its high precision and ART as an unsupervised neural network.

First, the networks are trained using the extracted features of the 250 training-phase images. Second, the networks are tested on images whose targets and flares are the same as those of the training phase. Third, the networks are evaluated on the second testing set, whose targets and flares have different sizes and features.

MLP neural network: An MLP network, like any back-propagation network, consists of many neurons ordered in layers. The neurons in the hidden layers do the actual processing, while the neurons in the input and output layers merely distribute and collect the signals, respectively.

The MLP network is trained by adapting the weights. During training, the network output is compared with a desired output and the error between these two signals is used to adapt the weights. The rate of adaptation is controlled by the learning rate: a high learning rate makes the network adapt its weights quickly but potentially makes it unstable, while setting the learning rate to zero keeps the weights constant.
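The training loop above can be sketched as a minimal one-hidden-layer back-propagation network. This is an illustrative sketch (layer sizes, activation and initialization are assumptions, not the paper's configuration), demonstrated on XOR, a problem a single layer cannot separate.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_mlp(X, y, hidden=8, lr=0.5, epochs=5000, seed=0):
    """One-hidden-layer MLP trained by back-propagation. lr is the
    learning rate from the text: high values adapt quickly but can be
    unstable; lr = 0 keeps the weights constant."""
    rng = np.random.default_rng(seed)
    W1, b1 = rng.normal(0, 1, (X.shape[1], hidden)), np.zeros(hidden)
    W2, b2 = rng.normal(0, 1, (hidden, 1)), np.zeros(1)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)                # hidden activations
        out = sigmoid(H @ W2 + b2)              # network output
        d2 = (out - y) * out * (1 - out)        # output-layer delta
        d1 = (d2 @ W2.T) * H * (1 - H)          # hidden-layer delta
        W2 -= lr * H.T @ d2;  b2 -= lr * d2.sum(axis=0)
        W1 -= lr * X.T @ d1;  b1 -= lr * d1.sum(axis=0)
    return lambda Xq: sigmoid(sigmoid(Xq @ W1 + b1) @ W2 + b2)

# Toy check on XOR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)
net = train_mlp(X, y)
```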

RBF neural network: In this approach, we use this network to classify the extracted image features. Figure 5 shows the overall structure of the network; X is the input layer.

In this network, G is the Green's function, chosen as a Gaussian function and expressed by:

G(X, Xi) = exp(−‖X − Xi‖²/2σ²)
where Xi is the Gaussian function center and σ² is the Gaussian function variance.

The algorithm of this network consists of three parts: (1) initialization, (2) learning and (3) testing.

In the training phase there are two sets of parameters to train: (1) the weights and (2) the Gaussian function centers. Different inputs are applied to the network and the network outputs are computed. The outputs are then compared with the ideal outputs and the errors are computed; the weights and Gaussian function centers are adapted using these errors in order to decrease them.
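An RBF network with the Gaussian units above can be sketched as follows. In this simplified sketch the centres are fixed to the training points and only the output weights are fitted by least squares; the paper also adapts the centres from the error, which is omitted here.

```python
import numpy as np

def train_rbf(X, y, sigma=1.0):
    """RBF network sketch with Gaussian units
    G(x, x_i) = exp(-||x - x_i||^2 / (2 sigma^2)), centres = training
    points, output weights fitted by least squares."""
    def gram(Xq):
        d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(gram(X), y, rcond=None)
    return lambda Xq: gram(Xq) @ w
```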

ART neural network: Before applying input data to an ART network, some preprocessing, such as normalization and coding, should be done (Mahyabadi et al., 2006; Frank et al., 1998). Let the number of input elements be m and the number of classes be n. In the first layer, the input data are compared with the saved patterns; if the similarity between the input data and a saved pattern is greater than ρ, that pattern and its output cell win.

Fig. 5: RBF neural network

Fig. 6:
(a) A competitive learning network. Input layer F1 adopts the values of input pattern I; a winner-take-all output layer F2 indicates the cluster for I by the position of its single activated neuron J and (b) A simplified representation of the network in (a): all inputs and outputs of F1 and F2 are united in one arrow per input or output vector and the adaptive weight matrix Wij of the connections between F1 and F2 is drawn as a single symbol

On the other hand, if the similarity is less than ρ, a new class is created and the input data are stored as its class pattern. If one of the classes has sufficient similarity with the input data, its class pattern is adapted to the input (Jahang et al., 1999a, b; Pooly et al., 2001; Joo et al., 2002). The ART network has a competitive learning structure (Fig. 6): m input neurons in F1 register the input data X = (x1, …, xm), each neuron in output layer F2 has an activity tj and the activity vector T = (t1, …, tn) is computed by comparing the input data with the saved patterns W1 = (w11, …, w1m), …, Wn = (wn1, …, wnm).

These patterns are the weighted links between F1 and F2. In this competitive process, only one neuron (the Jth) wins and the others lose. Therefore the network output is:

yj = 1 for j = J; yj = 0 otherwise
And tj is computed as:

tj = Wj·X = Σi wjixi
So the output neuron with the greatest tj is the winner and WJ is adapted to the input data. So we have:

WJ(new) = WJ(old) + η(X − WJ(old)), where η is a small learning rate
Weights are initialized with random values. Competitive networks are essentially unstable if the input data are widely separated; in that case there is no control over the number of generated output classes.

There are returning links from F2 to F1 with weights W. Since all the network outputs are zero except that of the single winning neuron, only WJ is returned to F1. So we have:

Σj yjWj = WJ
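The winner-take-all and vigilance logic above can be sketched as a heavily reduced ART-style clusterer for binary BRMS vectors. This is an illustration, not full ART-1/ART-2: the match rule and the fast-learning AND adaptation are simplifying assumptions.

```python
import numpy as np

class SimpleART:
    """Reduced ART-style clusterer. The winner J maximizes the
    activation t_j = W_j . x; if its match ratio against the input
    falls below the vigilance rho, a new class is created, otherwise
    the winning prototype is adapted to the input."""
    def __init__(self, rho=0.7):
        self.rho = rho
        self.W = []                                   # stored class patterns
    def present(self, x):
        x = np.asarray(x, float)
        if self.W:
            J = int(np.argmax([w @ x for w in self.W]))
            match = np.minimum(self.W[J], x).sum() / max(x.sum(), 1e-9)
            if match >= self.rho:                     # resonance:
                self.W[J] = np.minimum(self.W[J], x)  # adapt winner (AND rule)
                return J
        self.W.append(x.copy())                       # mismatch: new class
        return len(self.W) - 1
```

Raising the vigilance rho produces more, tighter clusters; a fixed rho is exactly why performance degrades when the input structure changes, as discussed in the results.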
RESULTS AND DISCUSSION
The database was a set of 410 BRMS images of targets and flares, each 80×30 pixels. There were 250 images for the training phase (125 target images and 125 flare images), 80 images for the first testing set (40+40) and 80 images (40+40) for the second testing set.

Circular shapes were used for the target images in the simulations, with six different radii: 0.1, 0.2, 0.3, 0.4, 0.6 and 0.8 (times the TFOV radius). The different target radii represent different distances between target and seeker. In each case, the target and flares are located at different positions in the TFOV. Flares are simulated by a sequence of circles with radius 0.1 of the target size, placed at random positions in the TFOV.
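Generating one such simulated object is straightforward; the helper below is a hypothetical sketch of how a filled-circle target or flare could be rasterized into a BRMS-sized frame.

```python
import numpy as np

def disc_image(shape, center, radius):
    """Binary image of a filled circle, mimicking the simulated
    targets (radius 0.1-0.8 of the TFOV) and flares (sequences of
    small circles at random positions)."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return (((yy - center[0]) ** 2 + (xx - center[1]) ** 2)
            <= radius ** 2).astype(np.uint8)

# a 30 x 80 BRMS-sized frame with one target disc
frame = disc_image((30, 80), (15, 40), 5)
```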

Table 1: Classification results using MLP

Table 2: Classification results using RBF

Table 3: Classification results using ART

Table 4: Classification results using LDA

There are two types of testing images. The first testing set consists of 80 images of targets and flares that are the same as those in the training set. The images in the second testing set have targets and flares of different sizes. We used a rosette pattern with N1 = 11, N2 = 4 and T = 40 msec; the IFOV of the pattern was 0.25 (TFOV). The tables below show the results of the different networks.

The results obtained with MLP, RBF, ART and LDA are shown in Tables 1-4. Both the RBF and ART networks perform better than the others on the first testing database. On the other hand, when the testing database structure differs from the training database structure, RBF gives the best performance.

ART uses a minimum required similarity between patterns grouped within one cluster, the vigilance parameter, which must be specified at the beginning. If the input data structure changes, the performance of this network degrades because of the fixed similarity parameter.

In MLP networks, the training process makes the network define its separating lines for classification; if the testing database structure differs from the training database structure, the performance of this network is weak. RBF, on the other hand, creates a nonlinear mapping between the latent space and the data space: the nodes of the RBF network form a feature space and the nonlinear mapping can then be taken as a linear transform of this feature space for classification. Databases with nonlinear structure can therefore be classified by this network, and the RBF network is more suitable for classification in BRMS.

Table 5: Processing time for different target detection methods

The results in Tables 1 and 2 also show that when the features are extracted by PCA, the networks' accuracy is better than with ICA. PCA is an orthogonal linear transformation that maps the data to a new coordinate system such that the greatest variance of any projection of the data lies on the first coordinate, the second greatest variance on the second coordinate and so on; the new components are mutually uncorrelated (though not statistically independent in general). ICA, in contrast, finds independent components by maximizing the statistical independence of the estimated components; it is a computational method for separating a multivariate signal into additive subcomponents, assuming mutual statistical independence of the non-Gaussian source signals. For this data, PCA presented the superior features for classification.

In addition, the processing time of the proposed method is compared with previous methods in Table 5. The processing time was computed for improved ISODATA, which was the best previous target-detection method in the rosette pattern (Shokouhi et al., 2005). Processing time is measured from the generation of the IR signal to target detection and computation of the target center of gravity (Jahang et al., 1999b, 2000). One target and five flares, shot consecutively, were used to compare the processing times of the different methods and algorithms.

The seeker is attached to an object that is expected to reach the target; the object adjusts its direction according to the seeker output. Considering the distance between seeker and target, the system is tested in three states:

At the start of the flight: the target and flares appear as small points in the TFOV.

In the middle of the flight: the target is about 0.1 (TFOV) and the flares are 0.01 (TFOV).

At the end of the flight, near the target: the target is large enough to fill the rosette pattern and the flares are very small.

Table 5 shows the processing times for the ISODATA and BRMS methods. To compute and compare the methods' processing times, we used a simulator written in Delphi. Simulations were run on a Pentium IV processor at 2 GHz; for timings less than 1 msec, the simulation was run 1000 times and the measured time divided by 1000. All other conditions were identical for all methods.
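The timing scheme for sub-millisecond routines can be sketched generically (the original harness was in Delphi; this Python equivalent is only an illustration of the run-1000-times-and-divide approach):

```python
import time

def mean_runtime(fn, repeats=1000):
    """For routines faster than 1 msec, run `repeats` times and divide
    the total elapsed wall-clock time by `repeats`."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - t0) / repeats
```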

CONCLUSION
In this approach a new space named BRMS was introduced. All data in the rosette pattern are mapped into BRMS, where clustering, classification and computing the target center of gravity can be done easily. Since iterative clustering methods are time consuming, processing time in the rosette pattern is greater than in BRMS.

Processing time in the rosette pattern also depends on the target location. When the seeker is near the target, the target image grows until it fills the rosette pattern; the amount of data then increases and more processing time is needed. Processing time in BRMS is independent of the target location, which matters because processing time should stay constant from the start to the end of the tracking path.

For classification, the proposed method uses PCA, ICA, LDA and neural networks instead of signal intensity and cluster size to detect targets and flares. Therefore, when a cloud or mass of flares is shot, the proposed method can still detect the target among the flares.

ACKNOWLEDGMENT
This work was supported by ITRC, Iran Telecommunication Research Center, Ministry of I.C.T.

REFERENCES

Bartlett, M.S., J.R. Movellan and T.J. Sejnowski, 2002. Face recognition by independent component analysis. IEEE Trans. Neural Networks, 13: 1450-1464.

Chengjun, L., 2004. Enhanced independent component analysis and its application to content based face image retrieval. IEEE Trans. Syst. Manuf. Cybernetics, 34: 1117-1127.

Frank, T., K.F. Kraiss and T. Kuhlen, 1998. Comparative analysis of Fuzzy ART and ART-2A network clustering performance. IEEE Trans. Neural Networks, 9: 544-559.

Hyvarinen, A. and E. Oja, 2000. Independent component analysis: Algorithms and applications. Neural Networks, 13: 411-430.

Jahang, G., H.K. Hong and J.S. Choi, 1999. Simulation of rosette infrared seeker and counter-countermeasure using K-means algorithm. IEICE Trans. Fundamentals Elect. Commun. Comput. Sci., E82-A: 987-993.

Jahang, S.G., H.K. Hong, D.S. Seo and J.S. Choi, 2000. New infrared counter-countermeasure technique using an iterative self-organizing data algorithm for the rosette scanning infrared seeker. Optical Eng., 39: 2397-2404.

Jahang, S.G., H.K. Hong, S.H. Han and J.S. Choi, 1999. Dynamic simulation of the rosette scanning infrared seeker and an IRCCM using the moment technique. SPIE Optical Eng., 38: 921-928.

Joo, M., E.S. Wu, J. Lu and H.L. Toh, 2002. Face recognition with Radial Basis Function (RBF) neural networks. IEEE Trans. Neural Networks, 13: 697-710.

Mahyabadi, M.P., H. Soltanizadeh and Sh.B. Shokouhi, 2006. Facial detection based on PCA and adaptive resonance theory 2A neural network. Proceedings of the IJME-INTERTECH International Conference, October 19-21, 2006, Kean University, New York, USA., pp: 501-509.

Oravec, M. and J. Pavlovicova, 2004. Face recognition methods based on principal component analysis and feedforward neural networks. IEEE Proc. Neural Networks, 1: 437-444.

Pooly, J.U. and F.G. Collin, 2001. HILS testing: The use of PC for real time IR reticle simulation. Proceedings of the SPIE Conference, April 16, 2001, Orlando, FL, USA., pp: 417-424.

Ren, H. and Y.L. Chang, 2005. Feature extraction with modified fisher's linear discriminant analysis. Proc. SPIE, 5995: 56-62.

Shokouhi, S.B., A.K. Momtaz and H. Soltanizadeh, 2005. A new weighting and clustering methods for discrimination of objects on the rosette pattern. WSEAS Trans. Inform. Sci. Appl., 2: 1250-1257.
