INTRODUCTION
To manipulate multiple micro objects under microscopic vision, these objects must first be identified. In the field of pattern recognition, the shape of an image is an important target for feature extraction. The moment feature is one of the shape features used in a wide range of applications. The most basic two-dimensional shape features have a direct relationship with moments: the image centroid, the long-axis and short-axis moments of inertia and a number of very useful invariant moments can all be computed directly from the moments. Invariant moments are statistical properties of an image that remain unchanged under translation, scaling and rotation and they have been used widely in the field of image recognition. An automatic method for generating affine moment invariants has also been reported (Liu et al., 2007).
For closed and non-closed structures, the moment feature cannot be calculated directly, so a regional structure must first be constructed (Chen, 1993). Moreover, because the moment calculation involves all of the pixels inside the region and on its border, it can be quite time-consuming. Therefore, we first apply an edge extraction algorithm to the image and then calculate the invariant moments of the edge image to obtain the feature attributes, which solves the problems discussed above.
After feature attribute extraction, a classification algorithm is required for the final target identification. The main classifiers in current use fall into three categories: statistics-based methods, represented by Bayes methods, the KNN method, centre-vector methods and SVM; rule-based methods, represented by decision trees and rough sets; and methods based on artificial neural networks. Because the SVM algorithm is a convex optimization problem (Emanuela et al., 2003), its local optimal solution must also be the global optimal solution, which gives it an advantage over other learning algorithms. For small-sample, nonlinear and high-dimensional pattern recognition, SVM therefore has many good properties and advantages (Jose et al., 2004; Yi and James, 2003), especially in text classification. Accordingly, we employ the SVM classification algorithm to classify the targets. However, the classic SVM algorithm is built on quadratic programming; that is, it cannot distinguish the importance of attributes in the training sample set. Bas presents a method that omits insignificant training data based on pruning-error minimization in least-squares support vector machines. In addition, for large-volume data classification and time-series prediction, it is necessary to improve real-time data processing, shorten the training time and reduce the space occupied by the training sample set.
To address the problems discussed above, this study presents an improved support vector machine classifier, which applies the invariant moments of the edge-extracted image to obtain the object's feature attributes. To enhance computational efficiency and improve classification performance, a feature attribute reduction algorithm based on rough sets has been developed (Leung et al., 2008), which distinguishes the importance of attributes in the training data set well. Experiments on identifying multiple micro objects with the proposed method are given. The results show that the improved SVM classifier can meet the application requirements, with a recognition rate of 95%.
INVARIANT MOMENTS THEORY
In the pattern recognition field, the shape of an image is an important target for feature extraction. Some basic two-dimensional shape features have a direct relationship with moments. Because invariant moments are unchanged under translation, scaling and rotation, we employ them to describe the feature attributes of the image.
Image (p+q) order moments: We presume that f(i, j) represents a two-dimensional continuous function. Its (p+q) order moments can then be written as (1).
For image computation, we generally use the summation form of the (p+q) order moments, shown as (2).
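Equations (1) and (2) are not reproduced in this copy; the standard forms of the raw geometric moments, consistent with the surrounding definitions, are:

```latex
% (1) continuous (p+q) order moments of f(i, j)
M_{pq} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} i^{p}\, j^{q}\, f(i, j)\,\mathrm{d}i\,\mathrm{d}j

% (2) discrete summation form over the image grid
M_{pq} = \sum_{i}\sum_{j} i^{p}\, j^{q}\, f(i, j)
```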
where p and q can take any non-negative integer values, generating an infinite set of moments. According to the Papoulis uniqueness theorem, this infinite set completely determines the two-dimensional image f(i, j). For a binary image whose background value is 0 and whose region value is 1, the zero-order moment represents the area of the shape region. Therefore, dividing the image moments by the zero-order moment yields quantities that are invariant to changes of the shape's scale.
Image (p+q) order central moments: To ensure location invariance of the shape feature, we must compute the image's (p+q) order central moments; that is, the invariant moments are calculated using the centre of the object as the origin of the image. The centre of the object (i', j') can be obtained from the zero-order and first-order moments. The central-moment formula is shown as (3).
If we normalize the central moments by the area of the shape, namely, replace M_{pq} with M_{pq}/M^{r}_{00}, then the resulting invariant moments are independent of changes in the scale of the shape.
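Equation (3) is likewise missing from this copy; a standard reconstruction of the central moments and the scale normalization described above (with the usual convention r = (p+q)/2 + 1) is:

```latex
% object centre from zero- and first-order moments
i' = \frac{M_{10}}{M_{00}}, \qquad j' = \frac{M_{01}}{M_{00}}

% (3) (p+q) order central moments
\mu_{pq} = \sum_{i}\sum_{j} (i - i')^{p}\,(j - j')^{q}\, f(i, j)

% scale-normalized central moments
\eta_{pq} = \frac{\mu_{pq}}{\mu_{00}^{\,r}}, \qquad r = \frac{p+q}{2} + 1
```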
At present, most studies of two-dimensional invariant moments extract the moments from the full image. This increases the amount of computation and affects the real-time performance of the system. Therefore, we propose an invariant moments method based on edge extraction, which first obtains the edge image and then computes the invariant moment feature attributes. The proposed method clearly preserves the region feature of the moments. In addition, owing to the edge detection, the amount of data participating in the calculation declines sharply, greatly reducing the computation.
The invariant moments are the seven moment functions that are invariant under translation, rotation and scaling. The calculation formulas are shown in (4).
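Equation (4) (Hu's seven invariant moments) is not reproduced in this copy. As a minimal sketch of the computation described above, the following NumPy code computes the normalized central moments and the first two Hu invariants, φ1 = η20 + η02 and φ2 = (η20 - η02)² + 4η11²; the function names are illustrative, not from the original paper:

```python
import numpy as np

def raw_moment(img, p, q):
    """(p+q) order raw moment M_pq over the pixel grid."""
    i, j = np.mgrid[:img.shape[0], :img.shape[1]]
    return float(np.sum((i ** p) * (j ** q) * img))

def central_moment(img, p, q):
    """(p+q) order central moment about the object centroid."""
    m00 = raw_moment(img, 0, 0)
    ic = raw_moment(img, 1, 0) / m00
    jc = raw_moment(img, 0, 1) / m00
    i, j = np.mgrid[:img.shape[0], :img.shape[1]]
    return float(np.sum(((i - ic) ** p) * ((j - jc) ** q) * img))

def eta(img, p, q):
    """Scale-normalized central moment, mu_pq / mu_00^((p+q)/2 + 1)."""
    return central_moment(img, p, q) / raw_moment(img, 0, 0) ** ((p + q) / 2 + 1)

def hu_first_two(img):
    """First two of Hu's seven invariants."""
    e20, e02, e11 = eta(img, 2, 0), eta(img, 0, 2), eta(img, 1, 1)
    return e20 + e02, (e20 - e02) ** 2 + 4 * e11 ** 2
```

Applying this to a binary edge image and to its 90-degree rotation yields the same feature values, illustrating the rotation invariance used in the paper.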
IMPROVED SUPPORT VECTOR MACHINE AND TARGET IDENTIFICATION
Support vector machine theory: SVM is a machine learning method from statistical learning theory, built on VC theory and the principle of structural risk minimization (Bas and Theo, 2003; Jing et al., 2003). The method seeks the best trade-off between model complexity and learning ability, aiming to obtain better generalization. The basic idea of SVM is to apply a nonlinear mapping Φ that maps the data from the input space into a higher-dimensional feature space and then to perform linear classification in this high-dimensional space.
Presume that the sample set (x_{i}, y_{i}), (i = 1,..., n), x ∈ R^{d}, is linearly separable, where x is a d-dimensional feature vector and y ∈ {-1, 1} is the class label. The general form of the decision function in the linear space is f(x) = w·x + b and the classification hyperplane equation is shown as (5).
If classes m and n can be separated linearly in the set, there exists (w, b) satisfying (6).
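Equations (5) and (6) are not reproduced in this copy; the standard hyperplane and separability constraints consistent with the discussion are:

```latex
% (5) classification hyperplane
w \cdot x + b = 0

% (6) linear separability constraints
y_{i}\,(w \cdot x_{i} + b) \ge 1, \qquad i = 1, \ldots, n
```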
where w is the weight vector and b is the classification threshold. According to (5), if w and b are scaled by the same factor, the classification hyperplane in (5) remains unchanged. We presume that all sample data satisfy |f(x)| ≥ 1 and that the samples closest to the classification hyperplane satisfy |f(x)| = 1; the classification margin is then 2/||w||. Thus the margin is largest when ||w|| is smallest.
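As an illustration of the margin-maximization idea above, the following is a minimal linear SVM trained by subgradient descent on the regularized hinge loss. This is a simplification of the quadratic-programming formulation, not the authors' implementation; all names and hyperparameters are illustrative:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Subgradient descent on lam*||w||^2/2 + hinge loss."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w + b) < 1:       # margin violated: push
                w = (1 - lr * lam) * w + lr * y[i] * X[i]
                b += lr * y[i]
            else:                               # margin satisfied: shrink w
                w = (1 - lr * lam) * w
    return w, b

def predict(X, w, b):
    return np.sign(X @ w + b)
```

On an easily separable toy set the learned (w, b) classify every training point correctly, since minimizing ||w|| subject to the margin constraints is exactly what the hinge-loss updates approximate.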
Improved support vector machine: To complete sample training, the usual method is to use all of the normalized feature attribute values for modeling, which inevitably increases the computation and, owing to unnecessary feature attributes, may lead the classification system to misjudge. Therefore, a method of judging attribute importance is needed. We employ rough set theory to judge the importance of the samples' attributes and then carry out SVM prediction and classification based on the reduced attributes (Andrew and Srinivas, 2003; Kaibo et al., 2002; Mantero et al., 2005).
We now introduce rough set theory. The decision system is S = (U, A, V, f), where U is the domain, a non-empty finite set, and A = C ∪ D, where C and D represent the condition and decision attribute sets, respectively. V is the range set of the attributes and V_{a} is the range of attribute a. f is the information function (f: U × A → V), so that f(x, a) ∈ V_{a} for all x ∈ U and a ∈ A. For any subset B ⊆ A of the condition attributes, Ind(B) is the indiscernibility relation of S. The formula Ind(B) = {(x, y) ∈ U × U | ∀ a ∈ B, f(x, a) = f(y, a)} expresses that x and y are indistinguishable under subset B. Given X ⊆ U, B(x_{i}) is the equivalence class containing x_{i} with respect to this equivalence relation.
We can define the lower approximation and the upper approximation of a subset X with respect to B. If the two approximations coincide, the set X is definable with respect to B; otherwise, X is a rough set with respect to B. The positive region of X with respect to B is the set of objects that can be determined to belong to X on the basis of the knowledge B, namely POS_{B}(X) = B(X). The dependence of the decision attributes D on the condition attributes C can then be defined,
where card(X) denotes the cardinality of the set X.
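The approximation and dependence formulas are missing from this copy; the standard rough set definitions consistent with the text are:

```latex
% lower and upper approximations of X with respect to B
\underline{B}(X) = \{\, x \in U : B(x) \subseteq X \,\}, \qquad
\overline{B}(X) = \{\, x \in U : B(x) \cap X \neq \emptyset \,\}

% positive region and dependence of D on C
\mathrm{POS}_{C}(D) = \bigcup_{X \in U/D} \underline{C}(X), \qquad
\gamma(C, D) = \frac{\mathrm{card}\left(\mathrm{POS}_{C}(D)\right)}{\mathrm{card}(U)}
```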
Attribute reduction in rough sets deletes the redundant attributes without loss of information (Richard and Shen, 2004, 2007). The formula R = {R | R ⊆ C, γ(R, D) = γ(C, D)} gives the reduced attribute set (Yu et al., 2006). Therefore, we can use equality of the attribute dependences as the condition for terminating the iterative computation.
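The dependence γ(B, D) used in this stopping criterion can be sketched as follows; this toy implementation over a small decision table is an illustration under our own naming, not the authors' code:

```python
from collections import defaultdict

def ind_classes(table, attrs):
    """Partition row indices into equivalence classes of Ind(attrs)."""
    classes = defaultdict(set)
    for i, row in enumerate(table):
        classes[tuple(row[a] for a in attrs)].add(i)
    return list(classes.values())

def gamma(table, cond_attrs, dec_attr):
    """Dependence of the decision on cond_attrs: |POS| / |U|."""
    pos = 0
    for cls in ind_classes(table, cond_attrs):
        # a class lies in the positive region if its decision is unique
        if len({table[i][dec_attr] for i in cls}) == 1:
            pos += len(cls)
    return pos / len(table)
```

A subset R of the condition attributes is a candidate reduct when gamma(table, R, d) equals gamma(table, C, d) for the full condition set C.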
To complete the attribute reduction, we present a heuristic attribute reduction algorithm based on the rough set's discernibility matrix, which applies the frequency with which attributes occur in the matrix as the heuristic rule and obtains the minimal relative reduction of the attributes.
The discernibility matrix was introduced by Skowron and is defined as (7).
According to formula (7), an element takes the value of the differing attribute combination when both the decision attributes and the condition attributes of the two objects differ; it is empty when their decision attributes are the same; and it takes the value 1 when their decision attributes differ but their condition attributes are the same.
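Equation (7) is not reproduced in this copy; one common form of Skowron's discernibility matrix for a decision table, consistent with the three cases just described, is:

```latex
c_{ij} =
\begin{cases}
\{\, a \in C : f(x_i, a) \neq f(x_j, a) \,\}, & f(x_i, D) \neq f(x_j, D)
  \text{ and some condition attribute differs} \\
\emptyset, & f(x_i, D) = f(x_j, D) \\
1, & f(x_i, D) \neq f(x_j, D)
  \text{ and all condition attributes agree}
\end{cases}
```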
If p(a) is the importance of attribute a, we can propose formula (8) according to the frequency with which the attribute occurs,
where γ is a general parameter and c_{ij} are the elements of the discernibility matrix. Clearly, the more frequently an attribute occurs, the greater its importance. Therefore, we can compute the importance of each attribute and, using the heuristic rule in formula (8), eliminate the attribute whose importance is smallest. We then obtain the relative reduction of the attributes.
We now give the heuristic attribute reduction algorithm based on the rough set's discernibility matrix.
Input: The decision table (U, A ∪ D, V, f)
Output: The relative attribute reduction
Algorithm steps:
• Compute the discernibility matrix M
• Determine the core attributes and find the attribute combinations that do not include the core attributes
• Obtain the conjunctive normal form P = ∧(∨ c_{ij}: (i = 1, 2, 3,...,s; j = 1, 2, 3,...,m)) of the attribute combinations from step 2, where c_{ij} are the elements of each attribute combination, and then convert the conjunctive normal form to disjunctive normal form
• Determine the importance of each attribute according to formula (8)
• Find the attribute with the smallest importance from step 4 and eliminate it to obtain the attribute reduction
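The steps above can be sketched as follows. This is a simplified frequency-based greedy cover over the discernibility matrix (it omits the CNF-to-DNF conversion of step 3); all names are illustrative, not the authors' code:

```python
def discernibility_matrix(table, cond_attrs, dec_attr):
    """For each object pair with different decisions, record the set of
    condition attributes on which the pair differs."""
    entries = []
    n = len(table)
    for i in range(n):
        for j in range(i + 1, n):
            if table[i][dec_attr] != table[j][dec_attr]:
                diff = frozenset(a for a in cond_attrs
                                 if table[i][a] != table[j][a])
                if diff:
                    entries.append(diff)
    return entries

def heuristic_reduct(table, cond_attrs, dec_attr):
    """Greedy reduction: start from the core (singleton entries), then
    repeatedly add the attribute occurring most often among the matrix
    entries not yet covered -- the frequency heuristic of formula (8)."""
    entries = discernibility_matrix(table, cond_attrs, dec_attr)
    reduct = set()
    for e in entries:               # core: singletons must be kept
        if len(e) == 1:
            reduct |= e
    remaining = [e for e in entries if not (e & reduct)]
    while remaining:
        freq = {}
        for e in remaining:
            for a in e:
                freq[a] = freq.get(a, 0) + 1
        best = max(freq, key=freq.get)
        reduct.add(best)
        remaining = [e for e in remaining if best not in e]
    return reduct
```

On a table whose decision is determined by a single attribute, the greedy cover keeps that attribute and drops the redundant ones.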
After the attributes have been reduced, the samples' feature attributes are sent to the SVM to establish the model. Finally, we can complete the classification of the final prediction data (Xiu and Yu, 2006; Wang et al., 2007).
MATERIALS AND METHODS
Feature extraction and data preprocessing: The main task of classification is to identify and classify the manipulators (microgripper, vacuum sucker) and the operation targets (cylindrical metal part, glass ball), which facilitates the follow-up visual servoing task. Figure 1 shows the original image of the operation targets and manipulators in the microscopic environment.
Usually, the moment features are computed over a full gray image, so a large number of pixels must be processed, meaning that a mass of data participates in the calculation. Furthermore, the invariant moment feature attributes alone cannot fully describe the shape of the object. Therefore, we preprocess the objects to be classified with an edge extraction algorithm, which sharply reduces the data and greatly reduces the computation. The edge information of the object also correctly expresses its shape. Figure 2 shows the image after edge extraction of the operation targets and manipulators in the microscopic environment.
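As a concrete illustration of this preprocessing step, the following is a minimal Sobel-based edge detector in NumPy. The original paper does not specify which edge operator was used, so the choice of Sobel kernels and the threshold here are assumptions:

```python
import numpy as np

def sobel_edges(img, thresh=1.0):
    """Binary edge map from the Sobel gradient magnitude."""
    h, w = img.shape
    p = np.pad(img.astype(float), 1)                    # zero padding
    kx = np.array([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = kx.T
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for di in range(3):                                 # cross-correlate
        for dj in range(3):
            win = p[di:di + h, dj:dj + w]
            gx += kx[di, dj] * win
            gy += ky[di, dj] * win
    mag = np.hypot(gx, gy)
    return (mag >= thresh).astype(np.uint8)
```

The resulting binary edge image is what the invariant moment computation is then applied to, with far fewer nonzero pixels than the full gray image.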
Table 1 gives the normalized feature attribute values of the four different targets obtained with the invariant moments algorithm. We computed the feature attributes of the objects in all orientations and list only one of them here.
Results of identification and analysis: We first compare the classification effectiveness on a number of micro objects using the traditional support vector machine algorithm and the rough set + SVM algorithm; the results are shown in Table 2.

Fig. 1: The original image of operation targets and manipulator in microscopic environment

Fig. 2: The image after edge extraction of operation targets and manipulator in microscopic environment

Table 1: The normalized feature attribute values of different objects using the invariant moments algorithm

Table 2: The comparison results of the two classification methods

Table 3: The comparison of classification accuracy using SVM and SVM + rough set classification
According to Table 2, the correct classification rate of the proposed SVM classification algorithm is over 95%, higher than that of the single SVM algorithm. We can therefore conclude that the attribute reduction improves the classification ability. Besides, it can be seen clearly from Table 2 that the calculation time of the proposed algorithm is about one-fifth that of the single SVM algorithm, meaning that the system becomes more efficient.
Table 3 then compares the classification accuracy of SVM classification and SVM + rough set classification after adding 25 further feature attributes (gray level, area, perimeter, texture, etc.). In Table 3, the average number of condition attributes finally entering the SVM is 14.25, fewer than the 25 feature attributes, which simplifies the subsequent SVM prediction and classification process.
CONCLUSION
To identify multiple micro objects, an improved support vector machine classification algorithm is presented, which employs invariant moments based on edge extraction to obtain the feature attributes and a heuristic attribute reduction algorithm based on the rough set's discernibility matrix to reduce them. Using these feature attributes, the effectiveness of identifying multiple micro objects with a standard support vector machine is compared with that of the proposed improved support vector machine classification method. The results show that the improved method meets the application requirements, with a recognition rate of 95%, which makes visual servoing possible for operating multiple micro objects in a microscope vision environment.
ACKNOWLEDGMENTS
This study was supported by the National Natural Science Foundation of China under Grant 60275013, the China 863 Program and the Key Laboratory of Image Processing and Intelligent Control.