Research Article
 

Energy Function Analysis and Optimized Computation Based on Hopfield Neural Network for Wireless Sensor Network



Lejiang Guo, Bingwen Wang, Wei Wang, Zhuo Liu and Chao Gao
 
ABSTRACT

A Wireless Sensor Network (WSN) is a complex wireless network with a large number of nodes. Based on neural network theory and methods, this study uses the neuron to describe the WSN node and constructs a neural network model for the WSN. The model covers three aspects: the WSN node neuron model, the WSN node control model and the WSN node connection model. In order to maximize the network life-cycle, this study analyzes the Hopfield method and proposes a general design method and procedure for the energy function. It discusses the relationship between the network equilibrium and the minimum point of the energy function. The result shows that the neural model of wireless sensor networks brings convenience for WSN analysis and provides a theoretical foundation for applications of neural networks.


 
  How to cite this article:

Lejiang Guo, Bingwen Wang, Wei Wang, Zhuo Liu and Chao Gao, 2011. Energy Function Analysis and Optimized Computation Based on Hopfield Neural Network for Wireless Sensor Network. Information Technology Journal, 10: 1208-1214.

DOI: 10.3923/itj.2011.1208.1214

URL: https://scialert.net/abstract/?doi=itj.2011.1208.1214
 
Received: February 20, 2011; Accepted: April 07, 2011; Published: May 13, 2011



INTRODUCTION

A Wireless Sensor Network (WSN) is a collection of smart, autonomous sensor nodes equipped with tightly integrated sensing, processing and communication capabilities (Akyildiz et al., 2002). These nodes self-organize into a network in multi-hop mode. WSNs have important practical value in military applications, environmental monitoring, industrial control, intelligent homes, urban transport, etc. and have become a hot research area in recent years (Chang and Tassiulas, 2004). Sensor nodes are often deployed in areas that are unreachable by humans, so the lifetime of the network they create is limited by their batteries. Because of constraints on energy, computing power, storage capacity and communication capacity, a WSN is a complex system. Energy consumption is the most important factor that determines sensor node lifetime (Dai et al., 2009). Optimizing the wireless sensor network lifetime targets not only the reduction of the energy consumption of a single sensor node but also the extension of the entire network lifetime (Guo et al., 2010). Since all nodes in a wireless sensor network are energy constrained, designing energy-efficient routing is very important. It is difficult to describe the wireless sensor network node with a unified model; however, a neural network model can easily describe such a complex system. This study therefore describes WSN nodes and the network model based on a neural network model.

THE WSN MODEL

WSN node neuron model: The WSN node neuron model is shown in Fig. 1, where q is the fused output of the node; S1, S2, ..., Sn are the pieces of information collected by the sensor node; ω1, ω2, ..., ωn are the weight values and θ is the threshold. The relationship between the node's input and output follows the formula:

q = f (ω1S1 + ω2S2 + ... + ωnSn - θ)    (1)
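The weighted-sum neuron of Eq. 1 is straightforward to sketch in code. The paper does not fix a particular activation f, so the sigmoid and the sample readings below are illustrative assumptions:

```python
import math

def wsn_neuron(inputs, weights, theta):
    """Fused output q of a WSN node modelled as a neuron: a weighted
    sum of the sensor readings minus the threshold theta, passed
    through an activation f (here a sigmoid, chosen for illustration)."""
    s = sum(w * x for w, x in zip(weights, inputs)) - theta
    return 1.0 / (1.0 + math.exp(-s))

# Hypothetical readings, weights and threshold for three sensors
q = wsn_neuron([0.5, 0.8, 0.3], [0.2, 0.5, 0.3], 0.1)
```

With zero net input the sigmoid returns 0.5, i.e., a neutral fused output.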

WSN node control model: Similar to the neural network control model, the WSN node control model based on the neuron model is shown in Fig. 2.

Fig. 1: WSN neuron model

Fig. 2: WSN Neuron control model

Weighted adder:

(2)

The input of the sensor nodes is expressed as Wi:

(3)

In Eq. 3, Ai and Bi are connection matrices; then:

(4)

Linear dynamic functions:

(5)

where X (s), V (s) and H (s) are the Laplace transforms of xi (t), vi (t) and h (t), respectively, and h (t) is the impulse response of the linear dynamic function.
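In the time domain, the linear dynamic block corresponds to a convolution of the input with the impulse response h (t), since a product of Laplace transforms is a time-domain convolution. A minimal discrete-time sketch (the truncated first-order impulse response is an illustrative assumption):

```python
def convolve(h, x):
    """Discrete convolution y[t] = sum_k h[k] * x[t-k]: the
    time-domain counterpart of multiplying transforms H(s)X(s)."""
    y = []
    for t in range(len(x)):
        y.append(sum(h[k] * x[t - k] for k in range(min(len(h), t + 1))))
    return y

# Illustrative first-order impulse response h[k] = 0.5**k (truncated)
h = [0.5 ** k for k in range(8)]
y = convolve(h, [1.0, 0.0, 0.0, 0.0])  # a unit impulse input recovers h
```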

Static nonlinear function:

(6)

WSN node connection model (Adjih et al., 2005): The wireless sensor network can be modelled as an interconnection of neurons. From Eq. 2 and 3, suppose the neurons are static, i.e., H (s) = 1. Each neuron can then be expressed as:

(7)

X is an N-dimensional vector, g(.) is a nonlinear function and A, B are connection matrices. The three-level WSN connectivity can be expressed as:

(8)

The superscript expresses the level.

First level is general node level:

(9)

Second level is the sink node layer:

(10)

Third level is the user node layer:

(11)

HOPFIELD NETWORK ENERGY FUNCTION

Sensor network node energy is extremely limited. In order to prolong the life of the network and of the whole system, every information processing strategy must reduce node energy consumption as far as possible (Paradis and Han, 2007). The Hopfield neural network is an artificial neural network model used for optimized computation, associative memory, pattern recognition, image restoration, etc. (Chessa and Santi, 2001). The Hopfield energy function is a scalar function that reflects the overall state of the multidimensional neurons (Rai et al., 2009). It can be realized as a simple circuit of artificial neural networks that adopts a parallel, interconnected computing mechanism. The Hopfield energy function is constructed in the same way as a Lyapunov function. Hopfield considers that the energy stored inside the system decreases gradually as time increases during the system's motion; when the motion reaches equilibrium, the energy of the system runs out or reaches a minimum (Dai and Wu, 2004) and the system naturally settles into a stable state. Therefore, the stability problem of the system can be solved if an energy function that completely describes the process can be found. For a continuous feedback network circuit, the state equation is:

Ci dui/dt = Σj wijvj - ui/Ri + Ii,  vi = g (ui)    (12)

When the system reaches steady output, Hopfield energy function is defined as:

E = -(1/2) Σi Σj wijvivj - Σi Iivi + Σi (1/Ri) ∫0^vi g^-1 (v) dv    (13)
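In the high-gain limit the integral term of the continuous energy function vanishes, and the energy can be evaluated directly from the weights, outputs and inputs. A small sketch (the example weight matrix and state vectors are illustrative, not taken from the paper):

```python
def hopfield_energy(W, v, I):
    """Hopfield energy in the high-gain limit:
    E = -1/2 * sum_ij W[i][j]*v[i]*v[j] - sum_i I[i]*v[i]
    (the integral term of the continuous model is omitted here)."""
    n = len(v)
    quad = sum(W[i][j] * v[i] * v[j] for i in range(n) for j in range(n))
    return -0.5 * quad - sum(I[i] * v[i] for i in range(n))

# Two mutually excitatory neurons: the aligned state has lower energy
W = [[0.0, 1.0], [1.0, 0.0]]
E_aligned = hopfield_energy(W, [1.0, 1.0], [0.0, 0.0])
E_opposed = hopfield_energy(W, [1.0, -1.0], [0.0, 0.0])
```

The aligned state gives E = -1 and the opposed state E = +1, so network dynamics that decrease E favour the aligned configuration.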

The general method of energy function: Suppose the optimization objective function is f (u), u ∈ Rn, where u is the state of the artificial neural network and also the variable of the objective function (Intanagonwiwat et al., 2003). The constraint condition is g (u) = 0 (Raei et al., 2009). The optimization problem is to minimize the objective function while meeting the constraint conditions. The equivalent minimum energy function E is expressed as:

E = f (u) + Σi |gi (u)|    (14)

where |gi (u)| is a penalty function. When gi (u) = 0 is not satisfied, the value Σ|gi (u)| is always greater than zero. According to the Hopfield energy function and gradient descent, E is bounded, |E| < Emax, and dE/dt ≤ 0 (Luo et al., 2006). The system always reaches the final minimum of E, where dE/dt = 0, which is the stability point dui/dt = 0. When solving optimization problems, E is often a function of the state u, so the condition dE/dt ≤ 0 is turned into conditions on the state derivative:

(15)

when,

The gradient descent method ensures that E always decreases until a local minimum is reached (Rai et al., 2009). To solve an optimization problem with the Hopfield energy function, the first task is to recast the problem as an objective function with constraints; then construct the energy function, calculate its parameters using the conditional formula and, finally, read off the artificial neural network connection weights from the result.

The steps of the energy function: When the Hopfield network is used for optimized computation, the energy function is designed in the following steps:

Write the first energy function f (u) according to the objective function
According to the constrained condition |g (u)| = 0, write down a penalty function that satisfies the constraints at the minimum as the second energy function. The optimal results can be achieved in the circuit with
Calculate the equation of state according to the energy function E
According to the relationship between the conditions and the parameters, calculate wij and bi, i = 1, 2,... n
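The steps above can be sketched generically: build E as the objective plus penalty terms and descend its gradient until a minimum is reached. The finite-difference gradient, the penalty weight lam and the toy objective below are illustrative assumptions, not the circuit realization described in the study:

```python
def minimize_energy(f, penalties, u0, lr=0.01, steps=2000, lam=10.0):
    """Gradient descent on E(u) = f(u) + lam * sum_k |g_k(u)|,
    using numerical (finite-difference) gradients."""
    eps = 1e-6
    def E(u):
        return f(u) + lam * sum(abs(g(u)) for g in penalties)
    u = list(u0)
    for _ in range(steps):
        grad = []
        for i in range(len(u)):
            up = list(u)
            up[i] += eps
            grad.append((E(up) - E(u)) / eps)
        u = [ui - lr * gi for ui, gi in zip(u, grad)]
    return u

# Hypothetical example: minimize (u0-1)^2 + (u1-2)^2 subject to u0 + u1 = 3
sol = minimize_energy(lambda u: (u[0] - 1) ** 2 + (u[1] - 2) ** 2,
                      [lambda u: u[0] + u[1] - 3],
                      [0.0, 0.0])
```

Here the unconstrained minimum (1, 2) already satisfies the constraint, so the descent settles near it; with an active constraint, the |g| penalty pulls the state onto the constraint surface.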

Network stability analysis: The stability of neural networks has a reliable basis when the Hopfield feedback neural network adopts the concept of the energy function. So-called network stability means that every solution started in the network converges to a certain equilibrium point (Wang et al., 2006). The Hopfield network energy function differs from the usual Lyapunov function used to determine system stability, and the literature has done much research on how to improve the energy function. Consider the differential equation:

dx/dt = f (x)    (16)

In the neural network described by Stojmenovic et al. (2002), x ∈ Rn and f is smooth enough to guarantee a unique solution on [0, +∞). Suppose x0 ∈ Rn and use x (t, x0) to express the solution of Eq. 16 starting from x0 (Liu et al., 2010). If the equilibrium set of the network is non-empty and all trajectories of the network converge to the set of equilibrium points, the network Eq. 16 is said to be stable.

Theorem 1: If there is a continuous function:

then a necessary and sufficient condition for the stability of the network Eq. 16 is that all of its solutions are bounded.

Theorem 2: If the connection weight coefficient matrix T is symmetric, the network:

(17)

From Theorem 1, the equilibrium set of the network Eq. 17 is non-empty and all solutions converge to the set of equilibrium points, so the network is stable.

Theorem 3: If there is a diagonal matrix α = diag (α1, α2,... αn) with positive diagonal elements such that αT is symmetric, then the network Eq. 17 is stable.

The network equilibrium and the energy function minimum point: The following theorem describes the conditions under which a network equilibrium is a local minimum of the energy function.

Theorem 4: Suppose all solutions of the network Eq. 16 are bounded and there is a continuous function E,

When:

Suppose that x* is an equilibrium state. For x* to be a local minimum of the energy function E, it is necessary that x* has a sufficiently small neighborhood D (x*) such that for every x0 ∈ D (x*), x (t, x0) → x* (t → +∞) and E (x0) ≥ E (x*).

Proof: Suppose the equilibrium state x* of the network Eq. 16 is not a local minimum of the energy function E. Then there exists a sequence xk → x* (k = 1, 2,... ) with E (xk) < E (x*) and x (t, xk) → x* (t → +∞). Because the energy decreases along trajectories, E (x (t, xk)) ≤ E (xk) < E (x*); on the other hand, there exist tk (k → +∞) with x (tk, xk) → x*, which forces E (x (tk, xk)) ≥ E (x*), a contradiction, so x* cannot fail to be a local minimum of E. Conversely, for x0 ∈ D (x*) the network is stable, so x (t, x0) converges to an equilibrium state x'. Since E (x (t, x0)) is non-increasing for t ≥ 0, E (x0) ≥ E (x (t, x0)) ≥ E (x') ≥ E (x*). From x0 ∈ D (x*), it is concluded that x* is a minimum point of the energy function E.

Corollary 1: Under the conditions of Theorem 4, any locally convergent equilibrium state of the network Eq. 16 is a local minimum of the energy function. When the neural network is used for optimized computation, the most important point is the global minimum of the energy function.

Theorem 5: Suppose all solutions of the network Eq. 16 are bounded and there exists a continuous function E such that

Suppose x* is an equilibrium state of the network Eq. 16. A necessary and sufficient condition for x* to be a global minimum of the energy function E is that every other equilibrium x' satisfies E (x') ≥ E (x*).

SIMULATIONS

Experiment environment: The experiments were carried out from March 2010 to January 2011 under the guidance of Professor Wang in the control theory laboratory (Li et al., 2005; Alzoubi et al., 2002). Simulations compare the performance of the traditional and the opportunistic algorithms. According to the simulation parameter settings, the experiment environment is based on a WSN cluster model. With the Hopfield method, distributed data fusion is used and the energy consumption is mainly communication transmission. Suppose the Hopfield neural network state vector is V = [v1, v2,..., vn]T and the network input vector is I = [I1, I2,..., In]T. As time goes on, the solution evolves in state space in the direction in which the energy E decreases. The final network output V is the network's stable equilibrium point and minimum point (Paradis and Han, 2007). Following the references, a problem in the wireless network is mapped to the dynamic process of the neural network. Hopfield uses an N*N matrix to represent the order in which the nodes are accessed from the cluster-head node; the fused data is not sent back to the cluster-head node until all N nodes have been traversed. The Hopfield energy function is defined as follows:

(18)

The dynamic equation of Hopfield network is:

(19)

Experimental steps: The Hopfield network solves this problem with the algorithm described below. To avoid local minima and instability, the coefficients A, B and D should be chosen large enough to ensure an effective solution.

Step 1 : Set the initial values and weights: t = 0, A = B = 1500, D = 1000 and U0 = 0.02
Step 2 : Read the distances dxy (x, y) among the N cluster nodes
Step 3 : Initialize the neural network inputs Uxi (t) = U'0 + δxi with U'0 = (U0/2) ln (N-1), where N is the neuron number and δxi is a random value in (-1, +1)
Step 4 : Calculate dUxi/dt with the dynamic Eq. 19
Step 5 : Update Uxi according to the first-order Euler method

Step 6 : Use the sigmoid function to calculate Vxi (t)

Step 7 : Compute the energy function E, check the legality of the path and judge whether the iteration limit has been reached; if so, the program terminates
Step 8 : Otherwise, return to step 4
Step 9 : Output the optimal path, the path length and the change of the energy function over time
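The loop in the steps above can be sketched with the classical Hopfield-Tank formulation, where V[x][i] near 1 means node x is visited at position i, A and B weight the row/column permutation constraints and D weights the path length. Since Eq. 18 itself is not reproduced in the text, the energy terms below follow the standard formulation, and the coefficients and step size typically need per-instance tuning for good convergence:

```python
import math
import random

def hopfield_path(dist, A=1500.0, B=1500.0, D=1000.0, u0=0.02,
                  step=0.004, iters=1000, seed=1):
    """Hopfield-Tank style path optimization: Euler-integrate the
    neuron inputs U (step 5) under row/column constraint and
    path-length terms (step 4), with a tanh-sigmoid output (step 6)."""
    random.seed(seed)
    n = len(dist)
    # Step 3: inputs initialized near the bias level with small noise
    u = [[0.5 * u0 * math.log(n - 1) + 0.001 * random.uniform(-1, 1)
          for _ in range(n)] for _ in range(n)]
    v = [[0.5 * (1.0 + math.tanh(u[x][i] / u0)) for i in range(n)]
         for x in range(n)]
    for _ in range(iters):
        for x in range(n):
            for i in range(n):
                # Step 4: dU/dt from constraint and distance terms
                row = sum(v[x][j] for j in range(n)) - 1.0
                col = sum(v[y][i] for y in range(n)) - 1.0
                path = sum(dist[x][y] * v[y][(i + 1) % n] for y in range(n))
                du = -u[x][i] - A * row - B * col - D * path
                # Step 5: first-order Euler update
                u[x][i] += step * du
        # Step 6: sigmoid output V = (1 + tanh(U/u0)) / 2
        v = [[0.5 * (1.0 + math.tanh(u[x][i] / u0)) for i in range(n)]
             for x in range(n)]
    # Decode: position of node x = column with the largest output
    return [max(range(n), key=lambda i: v[x][i]) for x in range(n)]

# Hypothetical symmetric 4-node distance matrix
dist = [[0.0, 1.0, 2.0, 1.0],
        [1.0, 0.0, 1.0, 2.0],
        [2.0, 1.0, 0.0, 1.0],
        [1.0, 2.0, 1.0, 0.0]]
tour = hopfield_path(dist)
```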

Experimental results: With the Hopfield network energy function Eq. 18, taking A = B = 1500, D = 1000, 20 nodes in a cluster and sampling time Step = 0.004, the minimum stable energy value after 1000 iterations is Emin = 3.4788; the change of energy with the number of iterations is shown. The Hopfield network converges quickly to the target solution. As the number of nodes increases, local minima and instability appear, which remain to be improved. Compared with the genetic algorithm and the ant colony algorithm, the Hopfield network has many advantages in handling large-scale network optimization: as Table 1 and 2 show, it converges quickly with a small processing delay.

The algorithm was simulated with the network simulation tool NS2 and the mathematical model was set up with MATLAB. The simulation parameters are listed in Table 3. When the wireless sensor network is large-scale, the algorithm can efficiently obtain the optimal solution and the optimal data path, which meets network quality-of-service requirements such as fewer hops, low delay and a good packet reception rate.

Table 1: The result of 1000 Iteration

Table 2: The result of 500 Iteration

Table 3: Simulation parameter

The experiment uses the wireless sensor network topology shown in Fig. 1. The same network data is run repeatedly through the basic ant colony algorithm and the adaptive ant colony algorithm to find the optimal path, yielding a comparison chart of the number of iterations the two algorithms need to reach the optimal solution.

Network lifetime analysis: Network lifetime is an important index for analyzing the energy efficiency of an algorithm. Network lifetime in the stable cluster stage is mainly the number of rounds sustained from network start-up to the death of the first node. In order to identify the status of the sensor network, it has to be checked whether each sensor is fully sponsored by its neighbors; in this study, only the sensing range is considered. The work described in Fig. 3 presents a simple method to determine a candidate by counting its neighbors. The longer the cluster stays stable, the better the network's energy efficiency. Under NS2, the experiment gives a life-cycle comparison chart of the three algorithms. From Fig. 3, the stable cluster period of Hopfield is about 30% longer than that of Ant Colony, which indicates that Hopfield's energy efficiency is the best under the same conditions; Hopfield has more advantages than Ant Colony and the Genetic algorithm. As shown in Fig. 3 and 4, the Hopfield cluster under stable conditions is more stable than under heterogeneous conditions, indicating that the initial energy of the network nodes has a great influence on the algorithm's energy efficiency. In Fig. 3, the ant colony algorithm needs some iterations to obtain the optimal solution and the fluctuation in the number of iterations should be gentler each time. For a network of the same size, the improved ant colony algorithm greatly shortens the time for searching the optimal path; the optimal path consumes fewer network resources, which helps to ensure the data transmission process, lower network delay and a better data packet reception rate.

Fig. 3: The Simulation of the Network Lifetime

Fig. 4: The Simulation of the Sensor Energy Consumption

Energy efficient analysis: Balanced network energy consumption is an important index of an algorithm. Its specific measures are the energy consumption of each round and the network death stage, i.e., the rounds from the death of the first node to the death of all nodes. Obviously, the smaller and more uniform the per-round energy consumption and the shorter the death stage, the better the balanced energy consumption of the network. According to the consumption of each algorithm, the comparison chart of each round is shown in Fig. 4, followed by the comparison chart of the network death stage. From Fig. 4, each round of Hopfield's energy consumption is better than that of Ant Colony and the Genetic algorithm; under any condition, Hopfield is significantly better than the other algorithms. The network death stages of Ant Colony and Hopfield differ, indicating that the algorithms diverge considerably under heterogeneous conditions while Hopfield maintains good consistency. The death stage of Hopfield's network is short, which indicates that almost all nodes in the network die at nearly the same time; this reflects that Hopfield balances energy very well and shows that the energy consumption of the Genetic algorithm is greater than that of Ant Colony and Hopfield. In Ant Colony, the other candidate nodes set up a timer so that they can send broadcast packets in order; through these broadcast packets, cluster establishment and inter-cluster communication are completed. In the Genetic algorithm, however, this step requires many additional control messages and more energy, whereas the cost of this process in the Hopfield protocol is low. Compared with the Genetic protocol, Hopfield can effectively extend the network lifetime through lower control overhead and multi-hop communication between clusters.
In this paper, the algorithm has better path-searching efficiency and thus reduces the possibility of a local optimum in the path search process. The experiments in the wireless sensor network show that the algorithm is highly efficient in route optimization, reduces the searching time of the data transmission path and has a better ability to search for optimal solutions.

CONCLUSION

In this study, WSN nodes are described as neurons and the wireless sensor network is expressed by a neural model. The design and realization of the neuron model and the neural network model are introduced. To address the difficulty of constructing the energy function of a neural network, this study proposes a general approach, gives a strict criterion for discriminating the neural network energy function and discusses the relationship between the network equilibrium and the minimum point of the energy function. The results provide a theoretical basis for the design and application of neural networks.

ACKNOWLEDGMENTS

This work is supported by two grants from the National Natural Science Foundation of China (No. 60773190, No. 60802002).

REFERENCES
1:  Akyildiz, I.F., W. Su, Y. Sankarasubramaniam and E. Cayirci, 2002. Wireless sensor networks: A survey. Comput. Networks, 38: 393-422.

2:  Adjih, C., P. Jacquet and L. Viennot, 2005. Computing connected dominated sets with multipoint relays. Ad Hoc Sen. Networks, 1: 27-39.

3:  Chessa, S. and P. Santi, 2001. Comparison-based system-level fault diagnosis in ad hoc networks. Proceedings of the 20th IEEE Symposium on Reliable Distributed Systems, Oct. 28-31, New Orleans, Louisiana, pp: 257-266.

4:  Dai, F. and J. Wu, 2004. An extended localized algorithm for connected dominating set formation in ad hoc wireless networks. IEEE Trans. Parallel Distributed Syst., 15: 908-920.

5:  Raei, H., M.A. Fathi, A. Akhlaghi and B. Ahmadipoor, 2009. A new distributed algorithm for virtual backbone in wireless sensor networks with different transmission ranges. Proceeding of the IEEE/ACS International Conference on Computer Systems and Applications, May 10-13, USA., pp: 983-988.

6:  Intanagonwiwat, C., R. Govindan, D. Estrin, J. Heidemann and F. Silva, 2003. Directed diffusion for wireless sensor networking. IEEE/ACM Trans. Network, 11: 2-16.

7:  Alzoubi, K., P.J. Wan and O. Frieder, 2002. New distributed algorithm for connected dominating set in wireless ad hoc networks. Proceedings of the 35th Annual Hawaii International Conference on System Sciences, Jan. 7-10, Washington, DC., USA., pp: 3849-3855.

8:  Li, Y., M.T. Thai, F. Wang, C.W. Yi, P.J. Wan and D.Z. Du, 2005. On greedy construction of connected dominating sets in wireless networks. Wireless Commun. Mobile Comput., 5: 927-932.

9:  Guo, L., B. Wang, Z. Liu and W. Wang, 2010. An energy equilibrium routing algorithm based on cluster-head prediction for wireless sensor networks. Inform. Technol. J., 9: 1403-1408.

10:  Luo, X., M. Dong and Y. Huang, 2006. On distributed fault-tolerant detection in wireless sensor networks. IEEE Trans. Comput., 55: 58-70.

11:  Rai, M., S. Verma and S. Tapaswi, 2009. A heuristic for minimum connected dominating set with local repair for wireless sensor networks. Proceedings of the 8th International Conference on Networks, March 1-6, Washington, DC., USA., pp: 106-111.

12:  Paradis, L. and Q. Han, 2007. A survey of fault management in wireless sensor networks. J. Network Syst. Manage., 15: 171-190.

13:  Stojmenovic, I., M. Seddigh and J. Zunic, 2002. Dominating sets and neighbor elimination-based broadcasting algorithms in wireless networks. IEEE Trans. Parallel Distributed Syst., 13: 14-25.

14:  Wang, Y., W. Wang and X.Y. Li, 2006. Efficient distributed low-cost backbone formation for wireless networks. IEEE Trans. Parallel Distributed Syst., 17: 681-693.

15:  Liu, Z., B. Wang and L. Guo, 2010. A survey on connected dominating set construction algorithm for wireless sensor networks. Inform. Technol. J., 9: 1081-1092.

16:  Dai, Z., Z. Li, B. Wang and Q. Tang, 2009. An energy-aware cluster-based routing protocol for wireless sensor and actor network. Inform. Technol. J., 8: 1044-1048.

17:  Chang, J. and L. Tassiulas, 2004. Maximum lifetime routing in wireless sensor networks. IEEE/ACM Trans. Network., 12: 609-619.