Research Article
 

New Method for Finding an Optimal Solution to Quadratic Programming Problems



A. Chikhaoui, B. Djebbar and R. Mekki
 
ABSTRACT

The aim of this study is to present a new method for finding an optimal solution to quadratic programming problems. The principle of the method is to compute the critical point of the objective function. If the critical point belongs to the set of feasible solutions, then the optimal solution to the problem is the critical point itself. If the critical point does not lie in the feasible set, a new feasible constraint set is built by a homographic transformation, in such a way that the projection of the critical point of the objective function onto this set produces the exact solution to the problem at hand. It should be noted that the objective function may be convex or non-convex. The search for the optimal solution then amounts to finding the hyperplane separating the convex set from the critical point. Note that one does not need to transform the quadratic problem into an equivalent linear one, as is done in the numerical methods; the method is purely analytical and avoids the use of an initial solution. An algorithm computing the optimal solution of the concave function is given.


 
  How to cite this article:

A. Chikhaoui, B. Djebbar and R. Mekki, 2010. New Method for Finding an Optimal Solution to Quadratic Programming Problems. Journal of Applied Sciences, 10: 1627-1631.

DOI: 10.3923/jas.2010.1627.1631

URL: https://scialert.net/abstract/?doi=jas.2010.1627.1631
 
Received: January 14, 2010; Accepted: April 30, 2010; Published: June 26, 2010



INTRODUCTION

Methods for solving nonlinear numerical optimization problems are, in essence, iterative routines: a routine cannot find the exact solution in finite time. In fact the method, as treated by Delbos and Gilbert (2005), generates an infinite sequence (xt) of approximate solutions. The next iterate xt+1 is formed according to certain rules, based on local information about the problem collected at the previous iteration: a vector consisting of the values of the objective, the constraints and possibly the gradients or even higher derivatives of these functions. Optimization methods can therefore be classified not only according to the types of problems they solve, but also by the type of local information they employ. From this informational point of view, the methods are divided into:

Zero-order routines, using only the values of the objective and constraints but not their derivatives
First-order routines, using the values and gradients of the objective and constraints
Second-order routines, using the values, gradients and Hessians of the objective and constraints

Methods for optimizing quadratic functions, such as the ones presented by Simonnard (1973), transform the initial quadratic problem into an equivalent linear program in order to apply the simplex method. Achmanov (1984) only gave approximate solutions, by building convergent sequences which approach the true solution. De Werra et al. (2003) studied the feasible-direction method, with the difficult problem of choosing the direction and the step parameter. Hillier and Lieberman (2005) transformed the initial problem into a linear one, but at the cost of adding a substantial number of constraints.

Bierlaire (2006) treated a simple example that needs at least ten variables. In the method presented here, without introducing any additional variables, the exact solution is reached at the first attempt.

This analytical technique determines an exact optimal solution to the problem. It has to be stressed here that the new method is general for quadratic optimization problems. The principle of the technique is to project the critical point onto a new convex set built from the set of feasible solutions: the critical point x* is transformed into the point y*, and y* is then projected onto a separating supporting hyperplane. An algorithm for finding an optimal solution is given in this study.

SPLITTING THE OBJECTIVE FUNCTION

Consider the following problem:

max f(x) = Σ (αixi + βixi²), x ∈ Ω
(1)

where, Ω = {x ∈ Rn : Ax ≤ b, x ≥ 0}, with an m×n matrix A and a vector b of Rm. The coefficients αi and βi are any real numbers, so that the function f(x) is, in general, neither concave nor convex.

By so doing, a decomposition of the function f(x) holds: f(x) = φ(x) + Ψ(x).

φ(x) = Σ (αixi + βixi²)

where, the summation runs over all the indices i for which βi < 0,

and

Ψ(x) = Σ (αixi + βixi²)

where, the sum is extended over the indices i for which βi ≥ 0.
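As an illustration, the splitting can be coded directly. The following is a minimal Python sketch (not from the paper), assuming f is given by its coefficient vectors α and β:

import numpy as np

def split_objective(alpha, beta):
    """Split f(x) = sum_i (alpha_i*x_i + beta_i*x_i**2) into f = phi + psi."""
    alpha = np.asarray(alpha, dtype=float)
    beta = np.asarray(beta, dtype=float)
    neg = beta < 0  # indices entering the concave part phi

    def phi(x):
        x = np.asarray(x, dtype=float)
        return float(np.sum(alpha[neg] * x[neg] + beta[neg] * x[neg] ** 2))

    def psi(x):
        x = np.asarray(x, dtype=float)
        return float(np.sum(alpha[~neg] * x[~neg] + beta[~neg] * x[~neg] ** 2))

    return phi, psi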

In the present study, we focus on the concave function φ(x). We give, here, the exact optimal solution by projecting the critical point x* = (−αi/(2βi))i onto a new convex Ω'. If the critical point x* is inside the convex Ω, then the optimal solution holds exactly at this point.

We show in Theorem 1 that the optimal solution is x̂ = T⁻¹(y0), where y0 is the projection of the point y* = T(x*) onto the convex Ω'.

Let T: Ω⊂Rn→Ω'⊂Rn be the map that transforms the convex Ω into the convex Ω'. Its Jacobian determinant Π √(−βi) is non-zero, because βi < 0 for all i. Then it is conformal.
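In computational terms (a sketch, assuming the explicit form of T given in Remark 2 below), the critical point and the transformation read:

import numpy as np

def critical_point(alpha, beta):
    # x*_i = -alpha_i / (2*beta_i); well defined since beta_i < 0 here
    return -np.asarray(alpha, dtype=float) / (2.0 * np.asarray(beta, dtype=float))

def transform(x, beta):
    # T(x)_i = sqrt(-beta_i) * x_i, which turns phi into c - ||y - y*||^2
    return np.sqrt(-np.asarray(beta, dtype=float)) * np.asarray(x, dtype=float)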

OPTIMIZATION OF THE CONCAVE FUNCTION φ

Let Ω be a closed, bounded convex set of Rn. Let us put:

max φ(x) subject to x ∈ Ω

where φ(x) = Σ (αixi + βixi²) with βi < 0 for every i.

Theorem 1: There exists a closed bounded convex set Ω' of Rn and a vector y0 = (y0i)i, such that the following statements are satisfied:

1. φ(x) = c − ‖T(x) − y*‖² for every x ∈ Ω
2. φ(x̂) ≥ φ(x) for every x ∈ Ω, where x̂ = T⁻¹(y0)
3. max{φ(x) : x ∈ Ω} = c − ‖y0 − y*‖²

Proof: For every x∈Ω let,

yi = √(−βi) xi, that is, y = T(x)

Then

φ(x) = Σ (αixi + βixi²) = Σ (αi yi/√(−βi) − yi²)

But

αi yi/√(−βi) − yi² = −(yi − αi/(2√(−βi)))² + αi²/(−4βi)

Therefore,

φ(x) = Σ [−(yi − αi/(2√(−βi)))² + αi²/(−4βi)]

can be written in the following form:

φ(x) = c − Σ (yi − y*i)²

Let y*i = αi/(2√(−βi)) and c = Σ αi²/(−4βi), both well defined because βi < 0.

Thus φ(x) = c − ‖y − y*‖², hence property 1.

Because y0 is the projection of y* onto the closed convex Ω', ‖y0 − y*‖ ≤ ‖y − y*‖ for every y∈Ω'.

This implies that −‖y0 − y*‖² ≥ −‖y − y*‖² for every y∈Ω'. We have therefore c − ‖y0 − y*‖² ≥ c − ‖y − y*‖².

Because y∈Ω', we have y = T(x) with x ∈ Ω; then φ(x̂) ≥ φ(x) for every x ∈ Ω, where x̂ = T⁻¹(y0). Property 2 has been demonstrated.

The vector y0 is the projection of the vector y* onto the new convex Ω'.

We have,

max{φ(x) : x ∈ Ω} = c − ‖y0 − y*‖² = φ(x̂)

Hence property 3.
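Property 1 is easy to check numerically; here is a minimal sketch with sample coefficients (hypothetical data, not from the paper):

import numpy as np

alpha = np.array([5.0, 8.0])   # sample alpha_i
beta = np.array([-1.0, -2.0])  # beta_i < 0 (concave case)
x = np.array([1.7, 0.4])       # arbitrary test point

phi = np.sum(alpha * x + beta * x ** 2)
s = np.sqrt(-beta)             # T(x)_i = s_i * x_i
y, y_star = s * x, alpha / (2 * s)
c = np.sum(alpha ** 2 / (-4 * beta))
print(np.isclose(phi, c - np.sum((y - y_star) ** 2)))  # True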

Remark 1: If x*∈Ω, then y* = T(x*) ∈ Ω' and y0 = y*, so that x̂ = x*.

Remark 2: The transformation T: Ω⊂Rn→Ω'⊂Rn, associating with each x∈Ω the point T(x) = (√(−βi) xi)i, has as Jacobian matrix the diagonal matrix diag(√(−β1), ..., √(−βn)).

Its determinant is Π √(−βi) ≠ 0. Then it is conformal.

Remark 3: The convex Ω is bounded. We can restrict ourselves to the case αi ≥ 0 for each i. In fact, let us assume that we have αixi + βixi² with αi < 0.

Let Mi be an upper bound of xi on Ω and put x̃i = Mi − xi. Therefore,

αixi + βixi² = α̃i x̃i + βi x̃i² + ci

where α̃i = −αi − 2βiMi ≥ 0 and ci = αiMi + βiMi².

Therefore, it is sufficient to replace xi with Mi − x̃i.

Algorithm for computing the optimal solution of the concave function φ.

Algorithm Ahmed Chikhaoui:

[algorithm listing not reproduced in the source]
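Since the listing is not reproduced here, the following Python sketch gives one possible reading of the procedure described in this study. Two assumptions are made explicit in the comments: the scaling T of Remark 2 is used, and y* is projected onto the most violated constraint hyperplane of Ω', which plays the role of the separating supporting hyperplane H1 (the paper's rule for choosing H1 may differ):

import numpy as np

def optimize_concave(alpha, beta, A, b, tol=1e-12):
    """Sketch: maximize phi(x) = sum(alpha_i*x_i + beta_i*x_i**2), beta_i < 0,
    over Omega = {x : A x <= b, x >= 0}."""
    alpha, beta = np.asarray(alpha, float), np.asarray(beta, float)
    A, b = np.asarray(A, float), np.asarray(b, float)

    # Step 1: critical point of phi.
    x_star = -alpha / (2.0 * beta)

    # Step 2: if x* is feasible, it is the optimal solution (Remark 1).
    if np.all(A @ x_star <= b + tol) and np.all(x_star >= -tol):
        return x_star

    # Step 3: pass to y-space, where phi(x) = c - ||y - y*||^2.
    s = np.sqrt(-beta)      # T(x) = (sqrt(-beta_i)*x_i)_i
    y_star = s * x_star
    A_prime = A / s         # columns rescaled: Omega' = T(Omega)

    # Step 4: project y* onto a separating supporting hyperplane of Omega'
    # (assumed rule: the most violated constraint; x >= 0 is ignored here).
    viol = (A_prime @ y_star - b) / np.linalg.norm(A_prime, axis=1)
    k = int(np.argmax(viol))
    a = A_prime[k]
    y0 = y_star - ((a @ y_star - b[k]) / (a @ a)) * a

    # Step 5: map the projection back: x_hat = T^{-1}(y0).
    return y0 / s

On the reconstructed data of Example 3 below (alpha = (5, 8), beta = (−1, −2), A = ((3, 2)), b = (6,)), this sketch returns x̂ = (1, 3/2), in agreement with the computation given there.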

In the algorithm we use the supporting hyperplane. Figure 1 shows a separating supporting hyperplane H1.

The following example treats the case x* ∈ Ω; it is taken from Hillier and Lieberman (2005).

Example 1:

[problem statement not reproduced in the source]

Fig. 1: Supporting hyperplanes H1 and H2. The supporting hyperplane H2 does not separate x* from Ω, while H1 is a separating supporting hyperplane containing the projection y0

In this case β1 = −9, hence Ω ≠ Ω'.

[equation not reproduced in the source]

The critical point x* = (3, 3) ∈ Ω, so the optimal solution is x* itself (Remark 1). Note the rapidity of the method.

The following example treats the case Ω = Ω'. The example was treated by Dozzi (2004).

Example 2:

max φ(x) = 5x1 − x1² + 5x2 − x2²

subject to: 3x1 + x2 ≤ 9, x1 ≥ 0, x2 ≥ 0

In this case β1 = β2 = −1, so T is the identity and Ω = Ω'.

Ω = Ω' = {x ∈ R2 : 3x1 + x2 ≤ 9, x ≥ 0}

The critical point x* = (5/2, 5/2) ∉ Ω because 3(5/2) + 5/2 = 10 > 9; we compute the projection of x*:

x̂ = x* − ((⟨a, x*⟩ − b)/‖a‖²) a

where, a = (3, 1), b = 9.

x̂ = (5/2, 5/2) − (1/10)(3, 1) = (11/5, 12/5)

Here,

φ(x̂) = 5(11/5) − (11/5)² + 5(12/5) − (12/5)² = 62/5
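A quick numerical check of this projection (a sketch using the data reconstructed above):

import numpy as np

a, b = np.array([3.0, 1.0]), 9.0
x_star = np.array([2.5, 2.5])                      # critical point
x_hat = x_star - ((a @ x_star - b) / (a @ a)) * a  # projection onto <a, x> = b
print(x_hat)  # [2.2 2.4], i.e. (11/5, 12/5); check: 3*2.2 + 2.4 = 9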

The following example treats the case Ω ≠ Ω'.

Hillier and Lieberman (2005) treated the following example; the two methods can thus be compared.

Example 3:

max φ(x) = 5x1 − x1² + 8x2 − 2x2²

subject to: 3x1 + 2x2 ≤ 6, x1 ≥ 0, x2 ≥ 0

In this case β1 = −1 and β2 = −2, hence Ω ≠ Ω'.

T(x) = (x1, √2 x2), so that y* = T(x*) = (5/2, 2√2)

The critical point is x* = (−α1/(2β1), −α2/(2β2)) = (5/2, 2). It does not belong to Ω, because 3(5/2) + 2(2) = 23/2 > 6; we therefore compute the projection of y* = T(x*), the image of x* under T.

Ω = {x ∈ R2 : ⟨a, x⟩ ≤ b, x ≥ 0}

where, a = (3, 2), b = 6.

Ω' = T(Ω) = {y ∈ R2 : 3y1 + √2 y2 ≤ 6, y ≥ 0}

Besides, y* ∉ Ω' because 3(5/2) + √2 (2√2) = 23/2 > 6. The projection of the point y* onto Ω' gives the following point y0:

y0 = y* − ((⟨a', y*⟩ − b)/‖a'‖²) a' = (5/2, 2√2) − (1/2)(3, √2) = (1, 3√2/2), where a' = (3, √2); hence x̂ = T⁻¹(y0) = (1, 3/2).

Note that the projection of the point x* onto Ω gives the point x̄ = (16/13, 15/13).


Fig. 2: The points: x*, y*, y0 and x̂ = T⁻¹(y0)

Obviously,

φ(x̂) = 23/2 > φ(x̄) = 1894/169 ≈ 11.21

The point x̄ of the convex Ω is the nearest point of Ω to the critical point x*, but it is not the optimal solution of φ.

Figure 2 shows the transformation of the point x* into y*, the projection of y* onto Ω' giving y0 and, finally, the computation of the point x̂ = T⁻¹(y0) (the optimal solution).
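The claim that x̄ lies nearer to x* yet is suboptimal can be checked numerically (again a sketch, with the objective as reconstructed above):

import numpy as np

phi = lambda x: 5*x[0] - x[0]**2 + 8*x[1] - 2*x[1]**2
x_star = np.array([2.5, 2.0])
x_hat = np.array([1.0, 1.5])      # T^{-1}(y0), obtained by the method
x_bar = np.array([16/13, 15/13])  # direct projection of x* onto Omega

print(np.linalg.norm(x_bar - x_star) < np.linalg.norm(x_hat - x_star))  # True
print(phi(x_hat), phi(x_bar))     # 11.5 > 11.2071...: x_hat is optimal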

RESULTS AND DISCUSSION

After the decomposition of the objective function into two functions, one concave (φ) and the other convex (Ψ), two cases arise. If the critical point belongs to Ω (the feasible constraint set), the optimal solution of φ is the critical point itself.

If the critical point does not belong to Ω, then the optimal solution of φ is obtained from the projection of the transformed critical point onto Ω' (the new convex set obtained by the transformation T).

The convex Ω may also be replaced by a closed and lower-bounded convex set of Rn.

Unlike previous techniques, which are either analytical, producing an exact solution by going through a linearization process, or numerical, producing approximate solutions (Bierlaire, 2006; Hillier and Lieberman, 2005; Dong, 2006; De Klerk and Pasechnik, 2007), the technique discussed here produces the exact solution without going through linearization; it has thus enlarged the field of applications.

REFERENCES
1:  Achmanov, S., 1984. Programmation Lineaire Traduction Francaise. MIR, Moscou.

2:  Bierlaire, M., 2006. Introduction a l'Optimisation Differentiable. Presses Polytechniques et Universitaires Romandes, Lausanne, ISBN-10: 2-88074-669-8.

3:  De Werra, D., T.M. Liebling and J.F. Heche, 2003. Recherche Operationnelle Pour Ingenieur I. Presses Polytechniques et Universitaires Romandes, Lausanne, Switzerland.

4:  De Klerk, E. and D.V. Pasechnik, 2007. A linear programming reformulation of the standard quadratic optimization problem. J. Global Optim., 37: 75-84.

5:  Delbos, F. and J.C. Gilbert, 2005. Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems. J. Convex Anal., 12: 45-69.

6:  Hillier, F.S. and G.J. Lieberman, 2005. Introduction to Operations Research. 8th Edn., McGraw-Hill, New York, pp: 547-602.

7:  Dozzi, M., 2004. Methodes nonlineaires et stochastiques de la recherche operationnelle. http://www.iecn.u-nancy.fr/~dozzi/RechercheOperationnelle3/prognonlin.pdf.

8:  Simonnard, M., 1973. Programmation Lineaire: Techniques du Calcul Economique. 2nd Edn., Vol. 40, Dunod, Paris, pp: 32-45.

9:  Dong, S., 2006. Methods for Constrained Optimization. Springer, New York.
