
Journal of Applied Sciences

Year: 2010 | Volume: 10 | Issue: 15 | Page No.: 1627-1631
DOI: 10.3923/jas.2010.1627.1631
New Method for Finding an Optimal Solution to Quadratic Programming Problems
A. Chikhaoui, B. Djebbar and R. Mekki

Abstract: The aim of this study is to present a new method for finding an optimal solution to quadratic programming problems. The principle of the method is based on calculating the value of the critical point. If the critical point belongs to the set of feasible solutions, then the optimal solution to the problem is the critical point itself. If the critical point does not lie in the feasible solution set, a new feasible constraint set is built by a homographic transform, in such a way that the projection of the critical point of the objective function onto this set produces the exact solution to the problem at hand. It should be noted that the objective function may be convex or not. The search for the optimal solution then amounts to finding the hyperplane separating the convex set from the critical point. Note that one does not need to transform the quadratic problem into an equivalent linear one as in the numerical methods; the method is purely analytical and avoids the use of an initial solution. An algorithm computing the optimal solution of the concave function is given.


How to cite this article
A. Chikhaoui, B. Djebbar and R. Mekki, 2010. New Method for Finding an Optimal Solution to Quadratic Programming Problems. Journal of Applied Sciences, 10: 1627-1631.

Keywords: interpolation, convex, hyperplane separator, extreme, optimization

INTRODUCTION

The methods for solving nonlinear numerical optimization problems are, in essence, iterative routines: a routine cannot find the exact solution in finite time. In fact the method, as treated by Delbos and Gilbert (2005), generates an infinite sequence (xt) of approximate solutions. The next iterate xt+1 is formed according to certain rules, based on local information of the problem collected at the previous iteration: a vector consisting of the values of the objective, the constraints and possibly the gradients or even higher derivatives of these functions. Optimization methods can be classified not only according to the types of problems they solve, but also by the type of local information they employ. From this point of view, the methods are divided into:

Zero-order routines, using only the values of the objective and constraints but not their derivatives
First-order routines, using the values and gradients of the objective and constraints
Second-order routines, using the values, gradients and Hessians of the objective and constraints

Methods for optimizing quadratic functions, such as the ones presented by Simonnard (1973), transform the initial quadratic problem into an equivalent linear program in order to apply the simplex method. Achmanov (1984) only gave approximate solutions, by building convergent sequences which approach the real solution. De Werra et al. (2003) studied the feasible-direction method, with the difficult problem of choosing the direction and the parameter. Hillier and Lieberman (2005) transformed the initial problem into a linear one, but at the cost of adding a large number of constraints.

Bierlaire (2006) treated a simple example that needs at least ten variables. In the method presented here, without the introduction of any variable, the exact solution is reached at the first attempt.

This analytical technique determines an exact optimal solution to the problem. It has to be stressed here that the new method applies to any quadratic optimization problem. The principle of the technique is to project the critical point onto a new convex set built from the set of feasible solutions: the critical point x* is transformed into the point y*, then y* is projected onto a separating supporting hyperplane. An algorithm for finding an optimal solution is given in this study.

SPLITTING THE OBJECTIVE FUNCTION

Consider the following problem:

max f(x) = Σi (αixi + βixi²),  x ∈ Ω    (1)

where Ω = {x ∈ Rn : Ax ≤ b, x ≥ 0}, with an m×n matrix A and a vector b of Rm. The coefficients αi and βi are any real numbers, so that the function f(x) is neither concave nor convex.

By so doing, a decomposition of the function f(x) holds as f(x) = φ(x) + Ψ(x), with

φ(x) = Σi: βi<0 (αixi + βixi²),

where the summation runs over all the indices i for which βi < 0, and

Ψ(x) = Σi: βi≥0 (αixi + βixi²),

where the sum is extended over the indices i for which βi ≥ 0.
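As a small illustration with invented coefficients, take f(x) = 4x1 − x1² + 2x2 + 3x2²; since β1 = −1 < 0 and β2 = 3 ≥ 0, the split reads:

```latex
% Illustrative split of f into a concave part phi and a convex part Psi
% (the coefficients are invented for the example):
\[
f(x) = \underbrace{4x_1 - x_1^{2}}_{\varphi(x)\ (\beta_1 = -1 < 0)}
     \;+\; \underbrace{2x_2 + 3x_2^{2}}_{\Psi(x)\ (\beta_2 = 3 \ge 0)} .
\]
```

Here φ is concave and Ψ is convex, as the signs of the βi require.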

In the present study, we deal with the concave function φ(x). We give the exact optimal solution by projecting the critical point onto a new convex set Ω'. If the critical point x* lies inside the convex set Ω, then the optimal solution holds exactly at this point.

We show in Theorem 1 that the optimal solution of φ is obtained from the projection of the point x* onto the convex set Ω'.

Let T: Ω⊂Rn→Ω'⊂Rn be the mapping that transforms the convex set Ω into the convex set Ω'. Its Jacobian determinant is non-zero, because βi < 0 for all i; the transformation is therefore conformal.
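A transform with these properties can be obtained by completing the square in each variable. The following is an illustrative sketch of one admissible choice, under the assumption of separable terms αixi + βixi² with βi < 0; it is not necessarily the exact map used in the original text:

```latex
% One admissible diagonal transform (assumed form, for illustration only):
\begin{gather*}
\alpha_i x_i + \beta_i x_i^{2}
  = \frac{\alpha_i^{2}}{-4\beta_i}
    - \Bigl(\sqrt{-\beta_i}\,x_i - \frac{\alpha_i}{2\sqrt{-\beta_i}}\Bigr)^{2},
\qquad
y_i := \sqrt{-\beta_i}\,x_i, \qquad
y_i^{*} := \frac{\alpha_i}{2\sqrt{-\beta_i}}, \\
\varphi(x) = \sum_{i} \frac{\alpha_i^{2}}{-4\beta_i} \;-\; \lVert y - y^{*}\rVert^{2}.
\end{gather*}
```

With this choice, maximizing φ over Ω amounts to minimizing ||y − y*|| over Ω' = T(Ω), i.e., to projecting y* onto Ω'; the Jacobian of T is the diagonal matrix diag(√(−βi)), whose determinant Πi √(−βi) is non-zero.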

OPTIMIZATION OF THE CONCAVE FUNCTION φ

Let Ω be a closed, bounded convex set of Rn. Let’s put:

Theorem 1: There exists a closed, bounded convex set Ω' of Rn and a vector y0 = (y0i)i such that the following statements are satisfied:

Proof: For every x∈Ω let ,

Then

But

Therefore,

can be written in the following form:

Let , because

Hence Property 1.

Because for every y∈Ω'.

This implies that for every y∈Ω'. We have therefore .

Because y∈Ω', Property 2 is demonstrated.

The vector y0 is the projection of the vector y* onto the new convex Ω'.

We have,

Hence property 3.

Remark 1: If x*∈Ω, then the projection is not needed and the optimal solution is x* itself.

Remark 2: The transformation T: Ω⊂Rn→Ω'⊂Rn, associating to each x∈Ω its image T(x), has a Jacobian matrix whose determinant is non-zero; the transformation is therefore conformal.

Remark 3: The convex set Ω is bounded. We can restrict ourselves to the case αi ≥ 0 for each i. In fact, let us assume that we have a term αixi + βixi² with αi < 0.

Let Therefore,

where

Therefore, it is sufficient to replace xi with
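One substitution that fits this remark, sketched here as an assumption (since Ω is bounded, each xi admits an upper bound Mi on Ω), is xi = Mi − x'i:

```latex
% Illustrative change of variable, using an assumed upper bound M_i of x_i on the bounded set Omega:
\[
\alpha_i x_i + \beta_i x_i^{2}
  = \bigl(\alpha_i M_i + \beta_i M_i^{2}\bigr)
    + \bigl(-\alpha_i - 2\beta_i M_i\bigr)\,x_i'
    + \beta_i x_i'^{2},
\qquad x_i' := M_i - x_i \ge 0 .
\]
```

With αi < 0 and βi < 0 (the case relevant to φ), the new linear coefficient −αi − 2βiMi is positive, so the restriction αi ≥ 0 is recovered for the new variable.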

Algorithm for computing the optimal solution of the concave function φ.

Algorithm Ahmed Chikhaoui:

In the algorithm we use the supporting hyperplane. Figure 1 shows a separating supporting hyperplane H1.
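A minimal computational sketch of the overall procedure, assuming separable quadratic terms with βi < 0, the diagonal transform sketched earlier and Euclidean projection onto the single most violated constraint; the function name and these choices are illustrative, not taken from the text:

```python
import numpy as np

def optimize_concave_phi(alpha, beta, A, b):
    """Sketch: maximize sum_i(alpha_i*x_i + beta_i*x_i**2) with all beta_i < 0,
    subject to A @ x <= b and x >= 0 (the feasible set Omega)."""
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)

    # Critical point of phi: d(phi)/dx_i = alpha_i + 2*beta_i*x_i = 0.
    x_star = -alpha / (2.0 * beta)

    # If the critical point lies in Omega, it is itself the optimal solution.
    if np.all(A @ x_star <= b + 1e-12) and np.all(x_star >= -1e-12):
        return x_star

    # Otherwise transform x* into y* (assumed diagonal map y_i = sqrt(-beta_i)*x_i)
    # and project y* onto the hyperplane of the most violated constraint,
    # rewritten in y-coordinates (an illustrative choice of separating hyperplane).
    s = np.sqrt(-beta)                  # scaling factors of the transform T
    y_star = s * x_star
    A_y = A / s                         # constraint rows expressed in y-coordinates
    k = int(np.argmax(A @ x_star - b))  # index of the most violated constraint
    a_k, b_k = A_y[k], b[k]
    y0 = y_star - ((a_k @ y_star - b_k) / (a_k @ a_k)) * a_k

    # Map the projected point back to the original variables.
    return y0 / s
```

With βi = −1 the transform reduces to the identity and this routine returns a plain Euclidean projection of x*, as in Example 2 below.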

The following example treats the case x* ∈ Ω (Hillier and Lieberman, 2005).

Example 1:


Fig. 1: Supporting hyperplanes H1 and H2. The supporting hyperplane H2 does not separate x* from Ω, while H1 is a separating supporting hyperplane

In this case β1 = –9, so Ω ≠ Ω'.

The critical point x* = (3, 3) ∈ Ω, so the optimal solution is reached directly at x*; note how rapid the method is.

The following example treats the case Ω = Ω'; it is treated by Dozzi (2004).

Example 2:

In this case β1 = β2 = –1, so Ω = Ω'.

The critical point x* = (5/2, 5/2) ∉ Ω, so we compute the projection of x* onto the separating supporting hyperplane a·x = b, where a = (3, 1) and b = 9.
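As a quick check, assuming y0 is here the Euclidean projection of x* onto the active constraint 3x1 + x2 = 9 (possible since Ω = Ω' in this example):

```latex
% Euclidean projection of x* onto the hyperplane a.x = b with a = (3, 1), b = 9:
\[
y^{0} = x^{*} - \frac{a\cdot x^{*} - b}{\lVert a\rVert^{2}}\,a
      = \begin{pmatrix}5/2\\ 5/2\end{pmatrix}
        - \frac{10 - 9}{10}\begin{pmatrix}3\\ 1\end{pmatrix}
      = \begin{pmatrix}11/5\\ 12/5\end{pmatrix},
\]
```

which indeed satisfies 3(11/5) + 12/5 = 9.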

The following example treats the case Ω ≠ Ω'.

Hillier and Lieberman (2005) treated the following example, so the two methods can be compared.

Example 3:

In this case β1 = –1 and β2 = –2, so Ω ≠ Ω'.

The critical point x* = (5/2, 2) ∉ Ω, so we compute the projection of y* (the image of x* under T) onto the separating supporting hyperplane associated with a = (3, 2) and b = 6.

The projection of the point y* onto Ω' gives the point y0. Note that the projection of the point x* directly onto Ω gives a different point.
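For illustration, under the same assumptions as before (diagonal transform with β1 = −1, β2 = −2 and Euclidean projection onto the constraint 3x1 + 2x2 = 6), these two projections work out as:

```latex
% Illustrative computation under the assumed transform and projection:
\begin{gather*}
y^{*} = \bigl(\tfrac{5}{2},\; 2\sqrt{2}\bigr), \qquad
\text{constraint in $y$-coordinates: } 3y_1 + \sqrt{2}\,y_2 \le 6, \\
y^{0} = y^{*} - \frac{(3,\sqrt{2})\cdot y^{*} - 6}{\lVert(3,\sqrt{2})\rVert^{2}}\,(3,\sqrt{2})
      = \bigl(1,\; \tfrac{3\sqrt{2}}{2}\bigr), \qquad
\hat{x} = T^{-1}(y^{0}) = \bigl(1,\; \tfrac{3}{2}\bigr), \\
\bar{x} = x^{*} - \frac{(3,2)\cdot x^{*} - 6}{\lVert(3,2)\rVert^{2}}\,(3,2)
      = \bigl(\tfrac{16}{13},\; \tfrac{15}{13}\bigr)
      \quad\text{(direct projection of $x^{*}$ onto $\Omega$).}
\end{gather*}
```

The point x̂ obtained through Ω' differs from the direct projection x̄, which lies closer to x* in the original coordinates.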

Fig. 2: The points x*, y*, y0 and the optimal solution

Obviously, the projection of x* onto the convex set Ω is the nearest point of Ω to the critical point x*, but it is not the optimal solution of φ.

Figure 2 shows the transformation of the point x* into y*, the projection of y* onto Ω' giving y0 and, finally, the calculation of the optimal solution.

RESULTS AND DISCUSSION

The objective function is decomposed into two functions, one concave (φ) and the other convex (Ψ). If the critical point belongs to Ω (the feasible constraint set), the optimal solution of f is the optimal solution of φ.

If the critical point does not belong to Ω, then the optimal solution of f is the projection of the critical point onto Ω' (the new convex set obtained by the transformation T).

The convex set Ω can be replaced by a closed and lower-bounded convex set in Rn.

Unlike previous techniques, which are either analytical, producing an exact solution by going through a linearization process, or numerical, producing approximate solutions (Bierlaire, 2006; Hillier and Lieberman, 2005; Dong, 2006; De Klerk and Pasechnik, 2007), the technique discussed here produces the exact solution without going through linearization, thus enlarging its field of applications.

REFERENCES

  • Achmanov, S., 1984. Programmation Lineaire Traduction Francaise. MIR, Moscou


  • Bierlaire, M., 2006. Introduction a l'Optimisation Differentiable. Presses Polytechniques et Universitaires Romandes, Lausanne, ISBN-10: 2-88074-669-8


  • De Werra, D., T.M. Liebling and J.F. Heche, 2003. Recherche Operationnelle Pour Ingenieur I. Presses Polytechniques et Universitaires Romandes, Lausanne, Switzerland


  • De Klerk, E. and D.V. Pasechnik, 2007. A linear programming reformulation of the standard quadratic optimization problem. J. Global Optim., 37: 75-84.


  • Delbos, F. and J.C. Gilbert, 2005. Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems. J. Convex Anal., 12: 45-69.


  • Hillier, F.S. and G.J. Lieberman, 2005. Introduction to Operations Research. 8th Edn., McGraw-Hill, New York, pp: 547-602


  • Dozzi, M., 2004. Methodes nonlineaires et stochastiques de la recherche operationnelle. http://www.iecn.u-nancy.fr/~dozzi/RechercheOperationnelle3/prognonlin.pdf.


  • Simonnard, M., 1973. Programmation Lineaire: Techniques du Calcul Economique. 2nd Edn., Vol. 40, Dunod, Paris, pp: 32-45


  • Dong, S., 2006. Methods for Constrained Optimization. Springer, New York
