ABSTRACT
This study presents a new method for solving the interval quadratic programming problem with box set constraints by using a projection neural network model. Based on the saddle point theorem, the equilibrium point of the proposed neural network is proved to be the optimal solution of the interval quadratic optimization problem. By using a fixed point theorem, the existence and uniqueness of the equilibrium point of the proposed neural network are proved. By constructing suitable Lyapunov functions, the asymptotic properties of the neural network are analyzed and a sufficient condition is presented that ensures global exponential stability of the unique equilibrium point, solution feasibility and solution optimality. The transient behavior of the neural network is simulated and the validity of the obtained results is verified with an illustrative example.
DOI: 10.3923/itj.2010.1615.1621
URL: https://scialert.net/abstract/?doi=itj.2010.1615.1621
INTRODUCTION
It is well known that the quadratic optimization:
minimize f(x) = (1/2)xᵀQx + cᵀx, subject to x ∈ Ω | (1)
where, Q ∈ Rⁿˣⁿ, c ∈ Rⁿ and Ω is a convex set, arises in a wide variety of scientific and engineering applications including regression analysis, image and signal processing, parameter estimation, filter design and robust control (Bazaraa et al., 1993). When Q is a positive definite matrix, problem (1) is said to be a strict convex quadratic program. When Q is a positive semi-definite matrix, problem (1) is said to be a degenerate convex quadratic program. In many cases, the matrix Q = (qᵢⱼ)ₙₓₙ in problem (1) is not precisely known, but can only be enclosed in intervals, i.e.,
q̲ᵢⱼ ≤ qᵢⱼ ≤ q̄ᵢⱼ, i, j = 1, 2, …, n
Such a quadratic program with interval data is usually called an interval quadratic program. In many real-time applications, problem (1) is time-varying in nature and has to be solved in real time. The main advantage of the neural network approach to optimization is that the dynamic solution procedure is inherently parallel and distributed (Bertsekas, 1989). Therefore, the neural network approach can solve optimization problems in running times orders of magnitude faster than the most popular optimization algorithms executed on digital computers (Cichocki and Unbehauer, 1993; Leung et al., 2003; Al-Bastaki, 2006; Abdallah and Al-Thamier, 2004). Recently, several projection neural network approaches have been proposed for solving quadratic optimization problems (Xia, 1996; Xia and Wang, 2004; Yang and Cao, 2006; Gao and Liao, 2006; Gao et al., 2004; Ghasabi-Oskoei et al., 2007). Kennedy and Chua (1988) presented a primal network for solving the strict convex quadratic programming problem. That network contains a finite penalty parameter, so it converges to an approximate solution only. To avoid the penalty parameter, several primal projection neural networks were proposed for solving the strict convex quadratic program and its dual, and the global asymptotic stability of these networks was analyzed when the constraint set Ω is a box set (Xia, 1996; Xia and Wang, 2004). Subsequently, a recurrent projection neural network was presented for solving the strict convex quadratic program and a related linear piecewise equation, together with conditions for the exponential convergence of the proposed networks (Xia et al., 2004; Xia and Feng, 2005). Yang and Cao (2006) presented a delayed projection neural network for solving quadratic programming problems and analyzed the global asymptotic stability and exponential stability of the proposed networks when the constraint set Ω is an unbounded box set.
In order to solve the degenerate convex quadratic programming problem, Tao et al. (1998) and Xue and Bian (2007) proposed two projection neural networks and proved that the equilibrium points of the proposed networks are equivalent to the KT points of problem (1). In particular, the neural network proposed by Xue and Bian (2009) was shown to have complete convergence and finite-time convergence, and the nonsingular part of the output trajectory with respect to Q has an exponentially convergent rate. Hu and Wang (2007) designed a general projection neural network for solving monotone linear variational inequalities and extended linear-quadratic programming problems and proved that the proposed network is exponentially convergent when the constraint set Ω is a polyhedral set (Hu, 2007; Hu and Wang, 2007).
In order to solve the interval quadratic optimization problem, Ding and Huang (2008) presented a new class of interval projection neural networks and proved that the equilibrium point of such a network is equivalent to the KT point of a class of interval quadratic programs. Furthermore, they gave some sufficient conditions ensuring the existence and global exponential stability of the unique equilibrium point of interval projection neural networks. To the best of the authors' knowledge, Ding and Huang (2008) were the first to discuss solving the interval quadratic optimization problem by a projection neural network. However, the interval quadratic program they discussed is an unconstrained quadratic program, which greatly limits its practical applications, since quadratic programs with constraints are far more common.
A PROJECTION NEURAL NETWORK MODEL
Consider the following interval quadratic programming problem:
minimize f(x) = (1/2)xᵀQx + cᵀx, subject to d ≤ Dx ≤ h | (2)
where, D is a positive definite diagonal matrix, each entry qᵢⱼ of Q lies in a known interval [q̲ᵢⱼ, q̄ᵢⱼ] and X = {η ∈ Rⁿ : d ≤ η ≤ h} is the box set determined by the bounds d, h ∈ Rⁿ.
The Lagrangian function of the problem (2) is:
L(x, η, u) = (1/2)xᵀQx + cᵀx − uᵀ(Dx − η)
where, u ∈ Rⁿ is referred to as the Lagrange multiplier and η ∈ X.
According to the well-known saddle point theorem (Bazaraa et al., 1993), we see that x* is an optimal solution of (2) if and only if there exist u* and η* such that
L(x*, η*, u) ≤ L(x*, η*, u*) ≤ L(x, η, u*), ∀u ∈ Rⁿ, ∀x ∈ Rⁿ, ∀η ∈ X
that is:
f(x*) − uᵀ(Dx* − η*) ≤ f(x*) − (u*)ᵀ(Dx* − η*) ≤ f(x) − (u*)ᵀ(Dx − η), ∀u ∈ Rⁿ, ∀x ∈ Rⁿ, ∀η ∈ X | (3)
From the first inequality in system 3, we obtain (u − u*)ᵀ(Dx* − η*) ≥ 0, ∀u ∈ Rⁿ. Since u is arbitrary, this forces Dx* = η*. From the second inequality in system 3, we can get:

f(x*) − (u*)ᵀ(Dx* − η*) ≤ f(x) − (u*)ᵀ(Dx − η), ∀x ∈ Rⁿ, ∀η ∈ X

Taking x = x* and noting Dx* = η*, we have (u*)ᵀ(η − η*) ≥ 0, ∀η ∈ X; taking η = η*, we have f(x*) − (u*)ᵀDx* ≤ f(x) − (u*)ᵀDx for any x ∈ Rⁿ. From the projection formulation (Kinderlehrer and Stampacchia, 1980), the inequality (u*)ᵀ(η − η*) ≥ 0, ∀η ∈ X can be equivalently represented as η* = PX(η* − u*), where PX(u) = [PX(u₁), PX(u₂), ..., PX(uₙ)]ᵀ is a projection function and, for i = 1, 2, ..., n:
PX(uᵢ) = dᵢ if uᵢ < dᵢ; uᵢ if dᵢ ≤ uᵢ ≤ hᵢ; hᵢ if uᵢ > hᵢ
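As a quick numerical illustration of this componentwise definition, the box projection can be sketched in Python (the function name `project_box` is ours, not part of the paper):

```python
def project_box(u, d, h):
    """Project each component u_i onto the interval [d_i, h_i]."""
    return [min(max(u_i, d_i), h_i) for u_i, d_i, h_i in zip(u, d, h)]

# Components below d are clamped to d, components above h to h,
# and components already inside the box are left unchanged.
print(project_box([0.5, 2.5, 5.0], [1, 2, 3], [2, 3, 4]))  # → [1, 2.5, 4]
```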
On the other hand, f(x*) − (u*)ᵀDx* ≤ f(x) − (u*)ᵀDx, ∀x ∈ Rⁿ, which implies that:

Qx* + c − Du* = 0 |
Thus, x* is an optimal solution of system 2 if and only if there exist u* and η* such that
Qx* + c − Du* = 0, Dx* = η*, η* = PX(η* − u*) | (4)
Substituting η = Dx into the equation η = PX(η − u), we have Dx = PX(Dx − u). Then x* is an optimal solution of system 2 if and only if there exists u* such that:

Qx* + c − Du* = 0, Dx* = PX(Dx* − u*)
Now let us build a projection equation: substituting u = D⁻¹(Qx + c) into the equation Dx = PX(Dx − u), we have:

Dx = PX(Dx − D⁻¹Qx − D⁻¹c)
By the above discussion, we obtain:
Proposition 1: Let x* be a solution of the project equation
Dx* = PX(Dx* − D⁻¹Qx* − D⁻¹c) | (5)
then x* is an optimal solution of system 2.
In the following, we propose a neural network, called the interval projection neural network, for solving Eq. 2 and 5, whose dynamical equation is defined as follows:
dx/dt = PX(Dx − D⁻¹Qx − D⁻¹c) − Dx | (6)
The neural network Eq. 6 can be written as:
Fig. 1: | Architecture of the proposed neural network in system 6 |
dx/dt = PX(Mx − C) − Dx | (7)
Figure 1 shows the architecture of the neural network Eq. 6, where, M = (mᵢⱼ)ₙₓₙ = D − D⁻¹Q, C = D⁻¹c and D = (dᵢⱼ)ₙₓₙ.
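The dynamics in Eq. 6 can be integrated numerically with a simple forward-Euler scheme. The sketch below is ours (the paper simulates the network but does not give code), and the one-dimensional test instance at the bottom is an assumption chosen so that the optimum is easy to verify by hand:

```python
def simulate(D, Q, c, d, h, x0, dt=1e-3, steps=10000):
    """Forward-Euler integration of dx/dt = P_X(Dx - D^-1 Qx - D^-1 c) - Dx,
    where D is a positive diagonal matrix given as a list of diagonal entries
    and X = {eta : d <= eta <= h} is a box set."""
    n = len(x0)
    x = list(x0)
    for _ in range(steps):
        Qx = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
        arg = [D[i] * x[i] - (Qx[i] + c[i]) / D[i] for i in range(n)]
        p = [min(max(arg[i], d[i]), h[i]) for i in range(n)]  # P_X(...)
        x = [x[i] + dt * (p[i] - D[i] * x[i]) for i in range(n)]
    return x

# One-dimensional sanity check: min (1/2)x^2 - 3x subject to 0 <= 2x <= 2,
# whose optimum is x* = 1 (the upper bound 2x <= 2 is active).
x = simulate(D=[2.0], Q=[[1.0]], c=[-3.0], d=[0.0], h=[2.0], x0=[0.0])
print(round(x[0], 3))  # → 1.0
```

The trajectory approaches the equilibrium exponentially, consistent with the stability analysis later in the paper.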
Definition 1: The point x* is said to be an equilibrium point of the interval projection neural network (6), if x* satisfies

0 = PX(Dx* − D⁻¹Qx* − D⁻¹c) − Dx* |
By Proposition 1 and Definition 1, we have
Theorem 1: The point x* is an equilibrium point of the interval projection neural networks (6) if and only if it is an optimal solution of the interval quadratic programming problem (2).
Definition 2: The equilibrium point x* of the neural network (6) is said to be globally exponentially stable, if the trajectory x(t) of system (6) with the initial value x₀ satisfies
||x(t) − x*|| ≤ c₀ exp(−β(t − t₀)), ∀t ≥ t₀
where, β>0 is a constant independent of the initial value x₀, c₀>0 is a constant dependent on the initial value x₀ and ||·|| denotes the 1-norm on Rⁿ, i.e.,
||x|| = |x₁| + |x₂| + ... + |xₙ|
Lemma 1: (Baiocchi and Capelo, 1984) Let Ω ⊂ Rⁿ be a closed convex set. Then
(u − PΩ(u))ᵀ(PΩ(u) − v) ≥ 0, ∀u ∈ Rⁿ, ∀v ∈ Ω
and
||PΩ(u) − PΩ(v)|| ≤ ||u − v||, ∀u, v ∈ Rⁿ
where PΩ(u) is the projection of u onto Ω, given by PΩ(u) = arg min{||u − y|| : y ∈ Ω}.
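The nonexpansiveness property in Lemma 1 is easy to check numerically for a box set, where the projection reduces to componentwise clipping. The sketch below (our own, not from the paper) verifies it on random points in the 1-norm used throughout this study:

```python
import random

def proj_box(u, lo=-1.0, hi=1.0):
    # Projection onto the closed convex box [lo, hi]^n.
    return [min(max(u_i, lo), hi) for u_i in u]

def norm1(v):
    return sum(abs(v_i) for v_i in v)

random.seed(0)
ok = True
for _ in range(1000):
    u = [random.uniform(-3, 3) for _ in range(4)]
    v = [random.uniform(-3, 3) for _ in range(4)]
    pu, pv = proj_box(u), proj_box(v)
    # Nonexpansiveness: ||P(u) - P(v)|| <= ||u - v||
    ok = ok and norm1([a - b for a, b in zip(pu, pv)]) <= norm1([a - b for a, b in zip(u, v)]) + 1e-12
print(ok)  # → True
```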
EXISTENCE AND UNIQUENESS OF THE EQUILIBRIUM POINT
Here, we will prove the existence and uniqueness of the equilibrium point for the interval projection neural network (6).
Theorem 2: Assume that (H1):
![]() |
is satisfied, where
![]() |
Then there exists a unique equilibrium point for the neural network (6).
Proof: Let T(x) = D⁻¹PX(Dx − D⁻¹Qx − D⁻¹c). By Definition 1, it is obvious that the neural network (6) has a unique equilibrium point if and only if T has a unique fixed point in Rⁿ. In the following, by using the fixed point theorem, we prove that T has a unique fixed point in Rⁿ. For any x, y ∈ Rⁿ, by Lemma 1 and the assumption H1, we can obtain
![]() | (8) |
![]() |
![]() |
where
![]() |
By the assumption H1,
![]() |
This implies
![]() |
System 8 shows that T is a contractive mapping; hence, by the Banach fixed point theorem, T has a unique fixed point. This completes the proof.
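Since T is a contraction, its fixed point (the equilibrium of the network) can also be found by Picard iteration x ← T(x). The sketch below is ours: the explicit formula for T is reconstructed as T(x) = D⁻¹PX(Dx − D⁻¹Qx − D⁻¹c), which is consistent with F = D(T − I) in Proposition 2, and the one-dimensional instance is an assumption chosen so the answer is checkable by hand:

```python
def T(x, D, Q, c, d, h):
    """One step of the mapping T(x) = D^-1 P_X(Dx - D^-1 Qx - D^-1 c),
    with D a positive diagonal matrix given as a list of diagonal entries."""
    n = len(x)
    Qx = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
    arg = [D[i] * x[i] - (Qx[i] + c[i]) / D[i] for i in range(n)]
    return [min(max(arg[i], d[i]), h[i]) / D[i] for i in range(n)]

# min (1/2)x^2 - 3x subject to 0 <= 2x <= 2: the optimum is x* = 1,
# and the iteration settles there after a few steps.
x = [0.0]
for _ in range(100):
    x = T(x, D=[2.0], Q=[[1.0]], c=[-3.0], d=[0.0], h=[2.0])
print(x)  # → [1.0]
```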
Proposition 2: If assumption H1 holds, then for any x₀ ∈ Rⁿ, there exists a solution with the initial value x(0) = x₀ for the neural network (6).
Proof: Let F = D(T − I), where I is the identity mapping; then F(x) = PX(Dx − D⁻¹Qx − D⁻¹c) − Dx. By (7), we have:
![]() | (9) |
Equation 9 means that the mapping F is globally Lipschitz. Hence for any x0 ε Rn, there exists a solution with the initial value x(0) = x0 for the neural network Eq. 6. This completes the proof.
Proposition 2 shows the existence of the solution for the neural network Eq. 6.
GLOBAL EXPONENTIAL STABILITY OF THE EQUILIBRIUM POINT
Here, we establish a sufficient condition ensuring global exponential stability of the equilibrium point for the neural network Eq. 6 under the assumption H1.
Theorem 3: If the assumption H1 is satisfied, then the equilibrium point of the neural network (6) is globally exponentially stable.
Proof: By Theorem 2, the neural network (6) has a unique equilibrium point, denoted by x*.
Consider the Lyapunov function
![]() |
Calculate the derivative of V(t) along the solution x(t) of the neural network (6). When t>t0, we have
![]() |
Noting that Dx* = PX(Dx* − D⁻¹Qx* − D⁻¹c), by Lemma 1 we have
![]() |
Hence,
![]() | (10) |
where
![]() |
By the assumption H1,
![]() |
Hence,
![]() |
Let
![]() |
then R*>0. Equation 10 can be rewritten as
![]() |
It follows that
![]() |
This shows that the equilibrium point x* of the neural network Eq. 6 is globally exponentially stable. This completes the proof.
AN ILLUSTRATIVE EXAMPLE
Consider the interval quadratic programming problem (2) defined by D = diag(1, 2, 2), d = (1, 2, 3)ᵀ, h = (2, 3, 4)ᵀ, cᵀ = (1, 1, 1),
![]() |
The optimal solution of this quadratic program is (1,1,1.5) under
![]() |
It is easy to check
![]() |
and
![]() |
![]() | |
Fig. 2: | Convergence of the state trajectory of the neural network (11) with random initial value (-0.5, 0.6, -0.8), ![]() |
![]() | |
Fig. 3: | Convergence of the state trajectory of the neural network (11) with random initial value ![]() |
The assumption H1 holds. By Theorem 3, the neural network (6) has a unique equilibrium point which is globally exponentially stable and the unique equilibrium point is the optimal solution of this quadratic programming problem.
When
![]() |
the proposed neural network is
![]() | (11) |
for solving this interval quadratic programming problem.
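The interval matrix Q for this example appears only as an unrecoverable image in this copy, so the simulation below substitutes the 3×3 identity matrix for Q. That choice is purely our assumption for illustration, not the paper's data; under it, the trajectory of system 6 from the initial value of Fig. 2 settles at (1, 1, 1.5), the optimum reported above:

```python
def rhs(x, D, Q, c, d, h):
    # Right-hand side of dx/dt = P_X(Dx - D^-1 Qx - D^-1 c) - Dx,
    # with D a positive diagonal matrix given as its diagonal entries.
    n = len(x)
    Qx = [sum(Q[i][j] * x[j] for j in range(n)) for i in range(n)]
    arg = [D[i] * x[i] - (Qx[i] + c[i]) / D[i] for i in range(n)]
    p = [min(max(arg[i], d[i]), h[i]) for i in range(n)]
    return [p[i] - D[i] * x[i] for i in range(n)]

D = [1.0, 2.0, 2.0]          # D = diag(1, 2, 2) from the example
Q = [[1.0, 0.0, 0.0],        # hypothetical Q = I; the paper's interval
     [0.0, 1.0, 0.0],        # matrix is not recoverable from this copy
     [0.0, 0.0, 1.0]]
c = [1.0, 1.0, 1.0]
d, h = [1.0, 2.0, 3.0], [2.0, 3.0, 4.0]

x, dt = [-0.5, 0.6, -0.8], 1e-3   # same initial value as Fig. 2
for _ in range(20000):
    f = rhs(x, D, Q, c, d, h)
    x = [x[i] + dt * f[i] for i in range(3)]
print([round(v, 3) for v in x])  # → [1.0, 1.0, 1.5]
```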
In the case of , Fig. 2 reveals that the projection neural network Eq. 11 with random initial value (-0.5, 0.6, -0.8) has a unique equilibrium point (1, 1, 1.5) which is globally exponentially stable. In the case of , Fig. 3 reveals that the projection neural network Eq. 11 with random initial value (0.8, -0.6, 0.3) has a unique equilibrium point (1, 1, 1.5) which is globally exponentially stable. These results accord with the conclusions of Theorems 2 and 3.
CONCLUSION
In this study, we have developed a new projection neural network for solving the interval quadratic programming problem; the equilibrium point of the proposed neural network is equivalent to the solution of the interval quadratic programming problem. We have investigated the existence, uniqueness and global exponential stability of the equilibrium point of the neural network and derived a condition that ensures all three. The results obtained in this study are valuable for solving interval quadratic programming problems in engineering.
ACKNOWLEDGMENTS
The authors are very grateful to the editor and reviewers for their comments and constructive suggestions, which helped to enrich the content and improve the presentation of this study. The study was partially supported by the Hebei Province Education Foundation of China (No. 2009157).
REFERENCES
- Kennedy, M.P. and L.O. Chua, 1988. Neural networks for nonlinear programming. IEEE Trans. Circuits Syst., 35: 554-562.
- Xia, Y., 1996. A new neural network for solving linear and quadratic programming problems. IEEE Trans. Neural Networks, 7: 1544-1548.
- Xia, Y., G. Feng and J. Wang, 2004. A recurrent neural network with exponential convergence for solving convex quadratic program and related linear piecewise equation. Neural Networks, 17: 1003-1015.
- Xia, Y. and G. Feng, 2005. An improved network for convex quadratic optimization with application to real-time beamforming. Neurocomputing, 64: 359-374.
- Yang, Y. and J. Cao, 2006. A delayed network method for solving convex optimization problems. Int. J. Neural Syst., 16: 295-303.
- Gao, X. and L. Liao, 2006. A novel neural network for a class of convex quadratic minimax problems. Neural Comput., 18: 1818-1846.
- Leung, Y., K.Z. Chen and X.B. Gao, 2003. A high-performance feedback neural network for solving convex nonlinear programming problems. IEEE Trans. Neural Networks, 14: 1469-1477.
- Ghasabi-Oskoei, H., A. Malek and A. Ahmadi, 2007. Novel artificial neural network with simulation aspects for solving linear and quadratic programming problems. Comput. Math. Appl., 53: 1439-1454.
- Xue, X. and W. Bian, 2007. A project neural network for solving degenerate convex quadratic program. Neurocomputing, 70: 2449-2459.
- Xue, X. and W. Bian, 2009. A project neural network for solving degenerate quadratic minimax problem with linear constraints. Neurocomputing, 72: 1826-1838.
- Ding, K. and N.J. Huang, 2008. A new class of interval projection neural networks for solving interval quadratic program. Chaos Solitons Fractals, 35: 718-725.
- Al-Bastaki, Y.A.L., 2006. GIS image compression and restoration: A neural network approach. Inform. Technol. J., 5: 88-93.
- Abdallah, J. and A. Al-Thamier, 2004. The economic-environmental neural network model for electrical power dispatching. J. Applied Sci., 4: 340-343.