INTRODUCTION
The quadratic programming problem (QPP) is a special class of nonlinear programming problems (NLPPs) in which the objective function is quadratic and the constraints are linear. QPPs have widespread applications in engineering (Petersen and Bodson, 2006) (e.g., electrical engineering), economics (Schwarz, 2006) and practical problems in the sciences. Nonlinear programming problems are also often approximated by a series of quadratic programming problems (Zhu and Zhang, 2004). On the other hand, a broad class of difficult combinatorial problems can be formulated as nonconvex quadratic minimization problems, for example: integer programming, quadratic 0-1 programming, quadratic assignment problems, bilinear programming, linear complementarity problems and max-min problems (Pardalos and Rosen, 1987). Several methods (e.g., those of Wolfe and Beale) are available in the literature for solving QPPs. The Karush-Kuhn-Tucker (KKT) optimality conditions (Bazaraa et al., 1993) can also be applied to solve a QPP. Algorithms for quadratic programming developed over the past few decades include the active set method, the interior point method and the dual method (Zhu and Zhang, 2004).
The nonconvex quadratic programming problem is of great importance from both the mathematical and the application viewpoints. Several authors have shown that the general quadratic programming problem is an NP-hard problem in global optimization (Murty and Kabadi, 1987). In this study we propose bounds for a general QPP using an optimization problem with an interval-valued objective function.
PROPOSED METHOD
Consider the following general QPP:

min f(x) = (1/2)x^{T}Hx + c^{T}x     (1)

subject to:

a_{i}^{T}x = b_{i}, i∈ε
a_{i}^{T}x ≤ b_{i}, i∈I
where ε and I are finite sets of indices, c is an n-vector, H is an n×n symmetric matrix and a_{i}, i∈I∪ε, are vectors with n elements. If H is a positive definite matrix, then the objective function is convex and we can apply nonlinear programming methods (which converge to the unique solution). In this case (1) is said to be a convex QP. When the Hessian matrix of the objective function is not positive semidefinite, however, finding a global minimum for the underlying QPP becomes a difficult task. In fact, nonconvex QPs, in which H is an indefinite matrix, can be more challenging, since they can have several stationary points and local minima. Here we use a theorem to find an interval that contains the optimal value of the objective function. This approximation is useful for complicated and large-scale problems.
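The distinction between the convex and nonconvex cases can be checked numerically from the eigenvalues of H. The following Python sketch (the paper itself used MATLAB; the function name and example matrices are our own illustration) classifies a QPP by the definiteness of its Hessian:

```python
import numpy as np

def classify_qp(H, tol=1e-10):
    """Classify a QPP by the definiteness of its symmetric Hessian H.

    Returns 'convex' (H positive semidefinite), 'concave'
    (H negative semidefinite) or 'indefinite' (mixed-sign eigenvalues).
    """
    eigvals = np.linalg.eigvalsh(H)   # eigenvalues of a symmetric matrix, ascending
    if eigvals.min() >= -tol:
        return "convex"
    if eigvals.max() <= tol:
        return "concave"
    return "indefinite"

H_convex = np.array([[2.0, 0.0], [0.0, 3.0]])   # positive definite
H_indef = np.array([[1.0, 0.0], [0.0, -2.0]])   # eigenvalues 1 and -2

print(classify_qp(H_convex))   # convex
print(classify_qp(H_indef))    # indefinite
```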
The basic idea of our new method is to estimate the quotient:

x^{T}Hx / x^{T}x     (2)

for x ≠ 0 from both sides. Consider the following theorem:
Theorem 1: If A is a symmetric matrix of order n with eigenvalues λ_{1}≤λ_{2}≤...≤λ_{n}, then:

λ_{1}x^{T}x ≤ x^{T}Ax ≤ λ_{n}x^{T}x     (3)

Proof: see Fiedler et al. (2006).
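Theorem 1 is easy to verify numerically. The following sketch (our own illustration, using a randomly generated symmetric matrix) checks the bound of Eq. 3 on a large sample of nonzero vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # symmetrize so Theorem 1 applies
lam = np.linalg.eigvalsh(A)            # eigenvalues in ascending order
lam1, lamn = lam[0], lam[-1]

# Check lam1 * x'x <= x'Ax <= lamn * x'x for many random nonzero x.
for _ in range(1000):
    x = rng.standard_normal(n)
    q = x @ A @ x
    assert lam1 * (x @ x) - 1e-9 <= q <= lamn * (x @ x) + 1e-9
print("Rayleigh-quotient bounds hold on all samples")
```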
Now we apply this theorem to find bounds on the optimal value of problem (1). Suppose that λ_{1}≤λ_{2}≤...≤λ_{n} are the eigenvalues of the matrix H. Using Eq. 3, for every nonzero vector x we have:

λ_{1}x^{T}x ≤ x^{T}Hx ≤ λ_{n}x^{T}x

It is simple to see that:

(1/2)λ_{1}x^{T}x + c^{T}x ≤ (1/2)x^{T}Hx + c^{T}x ≤ (1/2)λ_{n}x^{T}x + c^{T}x

In other words, we have found a range for the objective function of problem (1), and so we have:

(1/2)x^{T}(λ_{1}I)x + c^{T}x ≤ f(x) ≤ (1/2)x^{T}(λ_{n}I)x + c^{T}x
where I is the identity matrix of order n. So it is enough to solve the following two QPPs:
min Z_{L} = (1/2)λ_{1}x^{T}x + c^{T}x     (7)

subject to:

a_{i}^{T}x = b_{i}, i∈ε
a_{i}^{T}x ≤ b_{i}, i∈I

and:

min Z_{U} = (1/2)λ_{n}x^{T}x + c^{T}x     (8)

subject to:

a_{i}^{T}x = b_{i}, i∈ε
a_{i}^{T}x ≤ b_{i}, i∈I
These problems are very simple to solve, because I is a symmetric, positive semidefinite matrix, so any nonlinear programming technique can be applied.
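The sandwich property of the two surrogate objectives can be illustrated numerically. The sketch below uses hypothetical data (an indefinite 2×2 Hessian and a simple box feasible region, not taken from the paper's examples) and a crude grid search in place of a proper QP solver, purely to show that the optimal values of the surrogate problems bracket the optimal value of the original problem:

```python
import numpy as np

# Hypothetical indefinite QP data (illustration only, not from the paper).
H = np.array([[1.0, 0.0], [0.0, -2.0]])
c = np.array([-1.0, -1.0])
lam = np.linalg.eigvalsh(H)
lam1, lamn = lam[0], lam[-1]

f = lambda x: 0.5 * x @ H @ x + c @ x           # original objective
fL = lambda x: 0.5 * lam1 * (x @ x) + c @ x     # H replaced by lam1*I (problem 7)
fU = lambda x: 0.5 * lamn * (x @ x) + c @ x     # H replaced by lamn*I (problem 8)

# Crude global solve by dense sampling of the feasible box 0 <= x <= 2
# (a real implementation would hand problems (7) and (8) to a QP solver).
grid = np.linspace(0.0, 2.0, 201)
X = [np.array([a, b]) for a in grid for b in grid]
Z_L = min(fL(x) for x in X)
Z = min(f(x) for x in X)
Z_U = min(fU(x) for x in X)

print(Z_L, Z, Z_U)
assert Z_L <= Z <= Z_U   # the surrogate optima bracket the true optimum
```

Since fL(x) ≤ f(x) ≤ fU(x) holds pointwise by Theorem 1, the minima over any common feasible set obey the same ordering, which is exactly the interval produced by problems (7) and (8).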
Remark 1: Note that the smallest and largest eigenvalues can be computed with existing algorithms from numerical linear algebra (e.g., the power method).
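A minimal power-method sketch is given below (our own illustration; the example matrix is arbitrary). The largest eigenvalue is found directly, and the smallest is recovered by running the same iteration on the shifted matrix A - λ_{n}I, whose largest-magnitude eigenvalue is λ_{1} - λ_{n}:

```python
import numpy as np

def power_method(A, iters=1000, seed=0):
    """Estimate the largest-magnitude eigenvalue of a symmetric matrix A."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)      # normalize to avoid over/underflow
    return v @ A @ v                   # Rayleigh quotient of the final iterate

# Example matrix with eigenvalues (7 ± sqrt(5))/2 ≈ 4.618 and 2.382.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
lam_n = power_method(A)                # largest eigenvalue

# Shift trick: on A - lam_n*I the dominant eigenvalue is lam_1 - lam_n,
# so adding lam_n back recovers the smallest eigenvalue of A.
lam_1 = power_method(A - lam_n * np.eye(2)) + lam_n

print(round(lam_1, 4), round(lam_n, 4))   # prints 2.382 4.618
```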
NUMERICAL RESULTS
Here, to show the applicability of this approximation, we solve two examples, the second of which is a nonconvex QPP. The numerical examples were solved with the proposed method using the MATLAB 7 optimization toolbox.
Example 1: Consider the following QPP:
subject to:
According to Eq. 1, here:
The optimal solution is x* = (1.6923, 1.3077, 1.3077)^{T} with objective value Z = 7.7692. We solved the corresponding problems (7) and (8) for this example. The optimal solution of the lower problem is x^{*}_{L} = (2.5650, 0.4350, 0.4350)^{T} with Z_{L} = 4.1001, and that of the upper problem is x^{*}_{U} = (2.1568, 0.8432, 0.8432)^{T} with Z_{U} = 34.4009. For this example, x^{*}_{U} is closer to x^{*}. Table 1 shows the numerical results.
In the next example, we apply the same method to solve a QPP with an indefinite Hessian matrix.
Example 2: Consider the following indefinite QPP:
subject to:
where:
Table 1: Numerical results for example 1

Table 2: Numerical results for example 2

Table 3: Numerical results for example 3
This example was solved by an iterative method in Eq. 5, with optimal solution:

and optimal value Z = 15.0313. With the proposed method we obtained Z^{*}_{L} = 24.0313 with optimal solution:

and also Z^{*}_{U} = 0.1 with x^{*}_{U} = (0.2, 0.4)^{T}. In this example, x^{*}_{L} is closer to x^{*}. Table 2 shows the numerical results.
Example 3: Now consider the following QPP from Wang et al. (2005):
So we have:
This problem was solved in Wang et al. (2005).
The optimal solution can be seen in Table 3.
CONCLUSION
In this study, we proposed a novel approximation method for solving a QPP in general form. Nonconvex quadratic programming problems have widespread applications from both theoretical and practical viewpoints, but the general quadratic programming problem is NP-hard in global optimization, so our approximation method can be very useful. In this method, the eigenvalues of the Hessian matrix can be computed with standard methods from numerical linear algebra. Work is in progress to apply this method to problems arising in engineering applications and to integer QPPs.