INTRODUCTION
A Wireless Sensor Network (WSN) is a collection of smart, autonomous sensor nodes
equipped with tightly integrated sensing, processing and communication
capabilities (Akyildiz et al., 2002). These nodes
self-organize into a network operating in multihop mode. WSNs
have important practical value in military applications, environmental monitoring,
industrial control, intelligent homes, urban transport and similar fields and have become
a hot research area in recent years (Chang and Tassiulas,
2004). Sensor nodes are often deployed in areas that are unreachable
by humans, so the lifetime of the network they create is limited
by the battery capacity of the nodes. Because of its constraints on energy, computing power, storage
capacity and communication capacity, a WSN is a complex system. Energy consumption
is the most important factor determining sensor node lifetime (Dai
et al., 2009). Optimizing the lifetime of a wireless sensor network
targets not only the reduction of the energy consumption of a single sensor node
but also the extension of the lifetime of the entire network (Guo
et al., 2010). Since all nodes in a wireless sensor network are energy constrained,
designing energy-efficient routing is very important. It is difficult
to describe a wireless sensor network node with a unified model; a
neural network model, however, can easily describe such a complex system. This study describes
WSN nodes and the network model based on a neural network model.
THE WSN MODEL
WSN node neuron model: The WSN node neuron model is shown in Fig. 1, where q is the node data-fusion function; S_{1}, S_{2},..., S_{n} are the pieces of information collected by the sensor node; ω_{1}, ω_{2},..., ω_{n} are the weight values and θ is the threshold. The relationship between the inputs and the output of a node follows the formula below:
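The formula itself is not reproduced in this copy; with the symbols defined above, the standard weighted-sum form it presumably takes is:

```latex
y = q\!\left(\sum_{i=1}^{n} \omega_i S_i - \theta\right)
```

where y denotes the node output and q acts as the fusion/activation function (our reading, since the original equation image is missing).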
WSN node control model: Similar to a neural network control model, the WSN node control model based on the neuron model is shown in Fig. 2.

Fig. 2: 
WSN Neuron control model 
Weighted adder:
The input of the sensor node is expressed as W_{i}.
In formula (3), A_{i} and B_{i} are the connection matrices; then:
Linear dynamic functions:
where X(s), V(s) and H(s) are the Laplace transforms of x_{i}(t), v_{i}(t) and h(t) and h(t) is the impulse response of the linear dynamic function.
Static nonlinear function:
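The three stages of Fig. 2 presumably follow the usual additive-neuron decomposition; since equations (1)-(5) are not legible here, a hedged sketch consistent with the definitions above is:

```latex
v_i(t) = \sum_j a_{ij}\,x_j(t) + \sum_j b_{ij}\,u_j(t), \qquad
X_i(s) = H(s)\,V_i(s), \qquad
y_i(t) = g\big(x_i(t)\big)
```

i.e., a weighted adder with connection coefficients a_{ij}, b_{ij} (u_j denoting external inputs, our notation), a linear dynamic stage with impulse response h(t) and a static nonlinearity g.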
WSN node connection model (Adjih et al., 2005):
The wireless sensor network can be modeled with a neural network. From Eq.
2 and 3, suppose the neurons are static, that is, H(s) = 1. The
neuron can then be expressed as:
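With H(s) = 1 the dynamic stage drops out, so the static neuron presumably reduces to (a hedged sketch; S denotes the external input vector, our notation):

```latex
X = g\,(A X + B S)
```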
X is an N-dimensional vector, g(.) is a nonlinear function and A, B are the connection matrices. The WSN connectivity for the three neural network layers can be expressed as:
The superscript denotes the layer.
The first layer is the general node layer:
The second layer is the sink node layer:
The third layer is the user node layer:
HOPFIELD NETWORK ENERGY FUNCTION
Sensor network node energy is extremely limited. In order to prolong the life
of the network and the whole system, every information-processing strategy
must reduce node energy consumption as far as possible (Paradis
and Han, 2007). The Hopfield neural network is an artificial neural network
model used for optimized computing, associative memory, pattern recognition, image
restoration, etc. (Chessa and Santi, 2001). The Hopfield
energy function is a scalar function reflecting the overall state of the multidimensional
neurons (Rai et al., 2009). It can be realized as
a simple circuit of artificial neurons that adopts a parallel interconnected
computing mechanism. The Hopfield energy function is built in the same way as a Lyapunov
function. Hopfield considered that the internally stored energy gradually decreases
as time increases during the system's motion. When the system reaches equilibrium,
its energy runs out or becomes minimal (Dai
and Wu, 2004) and the system naturally balances and goes to a stable state.
Therefore, the stability problem can be solved if we can find an energy
function that completely describes the process. For a continuous feedback network
circuit, the state equation is:
When the system reaches steady output, the Hopfield energy function is defined as:
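The state equation and energy function referred to here are not legible in this copy; the standard continuous-Hopfield forms, with which the surrounding discussion is consistent, are:

```latex
C_i \frac{du_i}{dt} = \sum_{j=1}^{n} w_{ij} v_j - \frac{u_i}{R_i} + I_i, \qquad v_i = g(u_i)
```

```latex
E = -\frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} w_{ij} v_i v_j
    - \sum_{i=1}^{n} I_i v_i
    + \sum_{i=1}^{n} \frac{1}{R_i}\int_{0}^{v_i} g^{-1}(v)\,dv
```

Here C_i and R_i are the circuit capacitance and resistance of neuron i and g^{-1} is the inverse of the activation function (a hedged reconstruction, not the paper's own numbering).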
The general method of the energy function: Suppose the optimization objective
function is f(u), where u ∈ R^{n} is the state of the artificial neural network
and also the variable of the objective function (Intanagonwiwat
et al., 2003). The constraint condition is g(u) = 0 (Raei
et al., 2009). The optimization problem is to minimize the objective function
while meeting the constraint conditions. The equivalent minimum-energy function E
is expressed as:
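A hedged reconstruction of the missing expression, using a quadratic penalty (the weights λ_i > 0 are our notation):

```latex
E(u) = f(u) + \sum_i \lambda_i\, g_i(u)^2
```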
where g_{i}(u) is a penalty function. When g_{i}(u) =
0 is not satisfied, the value Σg_{i}(u) is always greater than
zero. According to the Hopfield energy function and gradient descent, E is bounded
and moves in the negative direction: E < E_{max}, dE/dt ≤ 0 (Luo
et al., 2006). The system always reaches the final minimum of E with dE/dt
= 0, which is the stability point du_{i}/dt = 0. When solving optimization
problems, E is often a function of the state u, so the condition dE/dt ≤ 0 is turned
into a condition on the state derivative:
when,
The gradient descent method ensures that E always decreases until it arrives
at a local minimum (Rai et al., 2009). When solving optimization problems with the
Hopfield energy function, the first task
is to translate the problem into an objective function and constraints; then
construct the energy function, calculate the parameters of the energy function by
using the conditional formula and finally read off the result as the
connection weights of the artificial neural network.
The steps of the energy function: When a Hopfield network solves an optimization
calculation, the specific steps of the energy function design are:
• According to the requirements of the objective function, write the first energy function f(u)
• According to the constraint condition g(u) = 0, write down the penalty function that satisfies the constraints at the minimum as the second energy function, so that the optimal result can be achieved in the circuit
• Calculate the equation of state according to the energy function E
• According to the relationship between the conditions and the parameters, calculate w_{ij} and b_{i}, i = 1, 2,..., n
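The four steps above can be illustrated on a toy problem (a minimal sketch; the objective, the penalty weight lam and all numerical values are our own illustrative choices, not from the paper):

```python
import numpy as np

# Toy sketch of the four design steps (hypothetical example, not from the paper):
# minimize f(u) = 0.5*||u - c||^2 subject to g(u) = sum(u) - 1 = 0.
c = np.array([0.8, 0.5, 0.2])
lam = 50.0                      # penalty weight on the constraint term

def energy(u):
    # Steps 1 and 2: E = f(u) + lam * g(u)^2 (objective plus penalty function)
    return 0.5 * np.sum((u - c) ** 2) + lam * (np.sum(u) - 1.0) ** 2

def grad(u):
    # Step 3: equation of state du/dt = -dE/du (gradient-descent dynamics)
    return (u - c) + 2.0 * lam * (np.sum(u) - 1.0) * np.ones_like(u)

u = np.zeros(3)
dt = 0.005                      # small enough for stable Euler integration
prev_E = energy(u)
for _ in range(5000):
    u = u - dt * grad(u)
    E = energy(u)
    assert E <= prev_E + 1e-12  # dE/dt <= 0: energy never increases
    prev_E = E

print(u, np.sum(u))             # sum(u) is close to 1
```

For a quadratic E the gradient is linear in u, so the coefficients of -dE/du can be read off directly as the connection weights w_{ij} and biases b_{i} of the last step.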
Network stability analysis: The stability of the neural network has a reliable
basis when the Hopfield feedback neural network adopts the concept of an energy function.
So-called network stability means that every solution starting in the network
converges to a certain equilibrium point (Wang et al.,
2006). The Hopfield network energy function differs from the usual Lyapunov
function used to determine system stability and the literature contains much research
on how to improve the energy function. Consider the differential equation:
As Ivan et al. (2002) described, in the neural network
x ∈ R^{n} and f is smooth enough to guarantee a unique solution
on [0, +∞). For x_{0} ∈ R^{n}, use x(t, x_{0}) to express
the solution of Eq. 16 starting from x_{0} (Liu
et al., 2010). If the equilibrium set of the network is not empty and all
trajectories of the network converge to the equilibrium point set,
the network Eq. 16 is said to be stable.
Theorem 1: If there is a continuous function:
When,
then a necessary and sufficient condition for the stability of the network
Eq. 16 is that all of its solutions are bounded.
Theorem 2: If the connection weight coefficient matrix T is symmetric, then the network:
From Theorem 1, the equilibrium set of the network Eq. 17
is not empty and all solutions converge to the equilibrium point set,
so the network is stable.
Theorem 3: If there exists a diagonal matrix with positive diagonal elements α = diag (α_{1}, α_{2},..., α_{n}) such that αT is symmetric, then the network Eq. 17 is stable.
The network equilibrium and the energy function minimum point: The following theorem describes the conditions under which a network equilibrium is a local minimum of the energy function.
Theorem 4: If all solutions of the network Eq. 16 are bounded and there exists a continuous function E,
When:
Suppose that x^{*} is an equilibrium state to which the network converges locally,
that is, x^{*} has a sufficiently small neighborhood D(x^{*}) such that
x(t, x_{0}) → x^{*} (t → +∞) for every x_{0} ∈ D(x^{*}).
Then x^{*} is necessarily a local minimum of the energy function E. Otherwise there would exist a sequence x_{k} → x^{*} (k = 1, 2,...) with E(x_{k}) < E(x^{*}) and x(t, x_{k}) → x^{*} (t → +∞). Because the energy is non-increasing along trajectories, E(x(t, x_{k})) ≤ E(x_{k}) < E(x^{*}) for all t ≥ 0, while x(t_{k}, x_{k}) → x^{*} for suitable t_{k} → +∞; by continuity of E this contradicts the energy decreasing strictly along non-constant trajectories. Hence x^{*} is a local minimum of the energy function E. Conversely, for x_{0} ∈ D(x^{*}) the network Eq. 16 is stable by Theorem 1, so x(t, x_{0}) converges to an equilibrium state x'. Since E(x(t, x_{0})) is non-increasing for t ≥ 0, it follows that E(x_{0}) ≥ E(x(t, x_{0})) ≥ E(x') ≥ E(x^{*}), so x^{*} is a minimum point of the energy function E on D(x^{*}).
Corollary 1: Under the conditions of Theorem 4, any locally convergent
equilibrium state of the network Eq. 16 is a local minimum of the energy function.
When a neural network is used for optimized calculation, the most important
point of the global energy function is its minimum point.
Theorem 5: If all solutions of the network Eq. 16 are bounded and there exists a continuous function E,
Suppose x^{*} is an equilibrium state of the network Eq. 16. A necessary and
sufficient condition for the energy function E to attain a global minimum at
x^{*} is that, for any other equilibrium x', E(x') ≥ E(x^{*}).
SIMULATIONS
Experiment environment: We conducted the experiments from March 2010 to January
2011 and completed this project under the guidance of Professor Wang in the control
theory laboratory (Li et al., 2005; Alzoubi
et al., 2002). Several simulations compare the performance
of the traditional and the opportunistic algorithm. According to the simulation
parameter settings, the experiment environment is based on a WSN cluster model.
With the Hopfield method, distributed data fusion is used and the energy
consumption is mainly communication transmission. Suppose the Hopfield neural network
state vector V = [v_{1}, v_{2},..., v_{n}]^{T}
is the output vector and I = [I_{1}, I_{2},..., I_{n}]^{T}
is the network input vector. As time goes on, the solution evolves in state
space in the direction in which the energy E decreases.
The final network output V is the network's stable equilibrium point and minimum
point (Paradis and Han, 2007). Following the references,
a problem in the wireless network is mapped to the
dynamic process of the neural network: an N×N matrix represents the order in which
the nodes attached to the cluster-head node are visited and the fused data
is not sent back to the cluster-head node until all
N nodes have been traversed. The Hopfield energy function is defined as follows:
The dynamic equation of the Hopfield network is:
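The energy function and dynamic equation are not reproduced in this copy; given the visit-matrix formulation and the coefficients A, B, D used below, they are presumably of the Hopfield-Tank tour form (a hedged reconstruction):

```latex
E = \frac{A}{2}\sum_{x}\sum_{i}\sum_{j\neq i} V_{xi}V_{xj}
  + \frac{B}{2}\sum_{i}\sum_{x}\sum_{y\neq x} V_{xi}V_{yi}
  + \frac{D}{2}\sum_{x}\sum_{y\neq x}\sum_{i} d_{xy}\,V_{xi}\left(V_{y,i+1}+V_{y,i-1}\right)
```

```latex
\frac{dU_{xi}}{dt} = -\frac{\partial E}{\partial V_{xi}}
 = -A\sum_{j\neq i}V_{xj} - B\sum_{y\neq x}V_{yi}
   - D\sum_{y\neq x} d_{xy}\left(V_{y,i+1}+V_{y,i-1}\right)
```

Here V_{xi} indicates that node x is visited at position i of the tour and d_{xy} is the inter-node distance.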
Experimental steps: The Hopfield network solves this problem using the algorithm
described below. To avoid local minima and instability, the coefficients
A, B and D should be chosen large enough to ensure the effectiveness of the
solution.
Step 1: Set the initial values and weights: t = 0, A = B = 1500, D = 1000, U_{0} = 0.02
Step 2: Read d_{xy}, the distance between cluster nodes x and y, for the N cluster nodes
Step 3: Initialize the neural network input U_{xi}(t) = U'_{0} + δ_{xi}, where U'_{0} is determined by ln(N-1), N is the neuron number and δ_{xi} is a random value in (-1, +1)
Step 4: Use the dynamic Eq. 19 to calculate dU_{xi}/dt
Step 5: Update U_{xi} according to the first-order Euler method
Step 6: Use the sigmoid function to calculate V_{xi}(t)
Step 7: Compute the energy function E, check the legality of the path and judge whether the iteration limit has been reached; if so, the program ends
Step 8: Otherwise, return to Step 4
Step 9: Output the optimal path, the energy function, the path length and the change of the energy function with time
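The steps above can be sketched as follows (a minimal sketch, not the paper's code: the cluster size is reduced, the random node layout and iteration count are our own choices and the energy gradient assumes the standard Hopfield-Tank tour formulation):

```python
import numpy as np

# Minimal sketch of Steps 1-9 for a small cluster. Parameter names (A, B, D, U0)
# follow the paper; the gradient below is the assumed Hopfield-Tank form.
rng = np.random.default_rng(0)
N = 6                                   # cluster size (the paper uses 20)
A, B, D = 1500.0, 1500.0, 1000.0        # Step 1: coefficients
U0, dt = 0.02, 0.0001                   # sigmoid gain and Euler step

pts = rng.random((N, 2))                # Step 2: node positions -> distances d_xy
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

def sigmoid(U):
    return 0.5 * (1.0 + np.tanh(U / U0))

# Step 3: initial input involving ln(N-1) plus a small random perturbation
U = -U0 * np.log(N - 1) + 0.001 * rng.uniform(-1.0, 1.0, (N, N))

for _ in range(3000):
    V = sigmoid(U)                           # Step 6: output via sigmoid
    row = V.sum(axis=1, keepdims=True) - V   # same node, other positions
    col = V.sum(axis=0, keepdims=True) - V   # same position, other nodes
    nxt = np.roll(V, -1, axis=1) + np.roll(V, 1, axis=1)  # V_{y,i+1} + V_{y,i-1}
    dU = -A * row - B * col - D * (d @ nxt)  # Step 4: dU/dt
    U = U + dt * dU                          # Step 5: first-order Euler update

V = sigmoid(U)
tour = np.argmax(V, axis=0)             # node chosen at each tour position
print(tour)
```

For some random seeds the converged matrix is not a valid permutation; this is the local-minimum problem the paper notes for larger node counts, which the large coefficients A, B, D are meant to suppress.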
Experimental results: For the Hopfield network energy function, take A = B = 1500 and D = 1000, with 20 nodes in a cluster and sampling time Step = 0.004. After 1000 iterations, the minimum stable energy value is E_{min} = 3.4788 and the change of the energy with the iteration number is shown. The Hopfield network converges quickly to the target solution. As the number of nodes increases, local-minimum and instability problems appear that need further improvement. Compared with the genetic algorithm and the ant colony algorithm, the Hopfield network has many advantages in handling large-scale network optimization: it converges quickly with a small processing delay, as Table 1 and 2 show.
The algorithm is simulated with the network simulation tool NS2 and the mathematical
model is set up with MATLAB; the simulation parameters are given in
Table 3. When the wireless sensor network is large-scale, the
algorithm can efficiently obtain the optimal-level solution and the optimal
data path, which meets network quality-of-service requirements such as fewer
hops, lower delay and a low packet loss rate.
Table 1: 
The result of 1000 iterations

Table 2: 
The result of 500 iterations

Table 3: 
Simulation parameters

The experiment uses the wireless sensor network topology shown in
Fig. 1. The network nodes with the same data are repeatedly run
through the basic ant colony algorithm and the adaptive ant colony algorithm for
the optimal-path program, which yields a comparison chart of the number of
iterations each of the two algorithms needs to reach the optimal solution.
Network lifetime analysis: Network lifetime is an important index for
analyzing the energy efficiency of the algorithm. Network lifetime in the stable
clustering stage is mainly the number of rounds sustained from the start of network
operation to the first node death. In order to identify the status of the sensor network,
it has to be checked whether each sensor is fully sponsored by its neighbors. In this
study we consider only the sensing range. The work described in Fig.
3 presents a simple method to determine a candidate by calculating the number
of neighbors. The longer the cluster stays stable, the better the energy efficiency of the network.
Under NS2, the experiment gives a lifecycle comparison chart
of the three algorithms. From Fig. 3, the cluster stability of
Hopfield is about 30% longer than that of the Ant Colony algorithm, indicating
that Hopfield's energy efficiency is best under the same conditions and has
more advantages than the Ant Colony and Genetic algorithms.
As shown in Fig. 3 and 4, the Hopfield
cluster under stable conditions is more stable than the cluster under heterogeneous conditions,
indicating that the algorithm has a great influence on energy efficiency when
the initial energy of the network nodes differs. In Fig. 3,
the ant colony algorithm needs several iterations to obtain the optimal-level solution
and the fluctuation in the number of iterations needs to become gentler each time.
For a network of the same size, the improved ant colony algorithm greatly
shortens the time needed to search for the optimal path; the optimal path consumes
fewer network resources, which helps ensure the data transmission process, lower
network delay and a better packet reception rate.

Fig. 3: 
The Simulation of the Network Lifetime 

Fig. 4: 
The Simulation of the Sensor Energy Consumption 
Energy efficiency analysis: Balanced network energy consumption is an
important index of the algorithm. Its specific measures are the energy consumption
of each round and the network death period, which is the number of rounds from the
first node death to the death of all nodes. Obviously, the smaller and more uniform
the energy consumption of each network round and the shorter the network death period,
the better the balanced energy consumption of the network. According to the consumption of each algorithm,
the comparison chart of each round is shown in Fig. 4,
followed by the algorithm comparison chart of the network death period. From
Fig. 4, each round of Hopfield's energy consumption is
better than that of the Ant Colony and Genetic algorithms; under any condition, Hopfield
is significantly better than the other algorithms. The network death periods of Ant Colony
and Hopfield differ, indicating that the algorithms thin out
very differently under the heterogeneous condition, while Hopfield maintains
consistency well. The death stage of Hopfield's network is short, which indicates
that almost all nodes in the network die at nearly the same time; this reflects that
Hopfield has very well-balanced energy. The results also show that the energy consumption
of the Genetic algorithm is greater than that of Ant Colony and Hopfield. In Ant Colony,
the other candidate nodes set up a timer that enables them to send broadcast packets
in order; through these broadcast packets, cluster establishment and
inter-cluster communication are completed. In the Genetic algorithm, however,
this step requires many additional control messages and consumes
more energy, whereas the cost of this process in the Hopfield protocol is low. Compared with the
Genetic protocol, Hopfield can effectively extend the network lifetime
through lower control overhead and multihop communication between clusters.
The algorithm in this paper has better path-searching efficiency and thus
reduces the possibility of local optima in the path-search process. The experiments
on the wireless sensor network show that this algorithm is highly efficient in
the route optimization process: it reduces the search time for the data transmission
path and has a better ability to search for optimal solutions.
CONCLUSION
In this study, a neuron describes a WSN node and the wireless sensor network is expressed by a neural model. The design and realization methods of the neuron model and the neural network model are introduced. Aiming at the difficulty of constructing the energy function in a neural network, this study puts forward a general approach, gives a strict criterion for discriminating the neural network energy function and discusses the problem of network energy equilibrium around the minimum point of the function. The results provide a theoretical basis for the design and application of neural networks.
ACKNOWLEDGMENTS
This work is supported by two grants from the National Natural Science Foundation of China (No. 60773190, No. 60802002).