
Asian Journal of Applied Sciences

Year: 2009 | Volume: 2 | Issue: 6 | Page No.: 499-510
DOI: 10.3923/ajaps.2009.499.510
Optimization of a Quadratic Function under its Canonical Form
A. Chikhaoui, B. Djebbar, A. Belabbaci and A. Mokhtari

Abstract: The aim of this study is to find the exact solution of a quadratic programming problem with linear constraints whose quadratic objective function is written in canonical form. The study describes a new method based on splitting the objective function into the sum of two functions, one concave and the other convex; a new feasible constraint set is built by a homographic transform, in such a way that the projection of the critical point of the objective function onto this set produces the exact solution of the problem at hand. Note that one does not need to transform the quadratic problem into an equivalent linear one as in the numerical methods; the method is purely analytical and avoids the use of an initial solution. The technique is simple and allows us to find the coefficients of the convex function while moving from one summit to another. The proved theorem is valid for any bounded, closed and convex domain; it may be applied to a large number of optimization problems. The obtained results are of great importance for solving separable programming cases.


How to cite this article
A. Chikhaoui, B. Djebbar, A. Belabbaci and A. Mokhtari, 2009. Optimization of a Quadratic Function under its Canonical Form. Asian Journal of Applied Sciences, 2: 499-510.

Keywords: canonical form, convexity, extrema, quadratic optimization, conformal transformation

INTRODUCTION

Optimization methods for quadratic functions such as the ones presented by Simonnard (1973) transform the initial quadratic problem into an equivalent linear program in order to apply the simplex method. Achmanov (1984) only gives approximate solutions by building convergent series which approach the real solution. De Werra et al. (2003) also transform the initial problem into a linear one, but at the cost of adding a significant number of constraints. Delbos and Gilbert (2005) considered an augmented Lagrangian algorithm; they showed that by sufficiently increasing the augmentation parameter, the associated constraint value globally converges linearly towards zero. Dozzi (2004) has also linearized the quadratic problem by adding more constraints in order to apply the tables of the simplex. Dong (2006) has implemented several optimization methods; his goal was to find the global optima by comparing different methods. Well-known algorithms for solving quadratic optimization problems are of gradient type, such as the method of steepest descent.

Bierlaire (2006) has treated the following simple example:

The method used to solve this problem needs at least ten variables. With the method presented here, the exact solution is reached at the first attempt, without introducing any additional variable.

This analytical technique determines an exact optimal solution to the problem. It has to be stressed that the new method applies to any quadratic optimization problem. The principle of the technique is to project the critical point onto a new convex set built from the set of feasible solutions.

A second ingredient is the study of the convex function. Its optimal solutions are extreme points, which are always summits of the feasible set. Thus it is only necessary to go from one summit to another and choose the summit which produces the optimal solution. By moving from one summit to a neighbouring one, the form of the convex function changes and new terms containing double products appear (Belabbaci, 2007); a simple programming technique to compute those coefficients is therefore presented, as illustrated by the sketch below and by an example later in this study.
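To make this principle concrete, here is a minimal Python sketch (with hypothetical coefficients and constraints, not taken from the examples of this study) that enumerates the summits of a feasible set {x ≥ 0, Ax ≤ b} by brute force and selects the one maximizing a convex function Ψ:

    import itertools
    import numpy as np

    # Enumerate the summits (vertices) of {x >= 0, A x <= b} by brute force:
    # solve every n x n subsystem of active constraints and keep the feasible
    # solutions. Only practical for small illustrative problems.
    def vertices(A, b):
        m, n = A.shape
        G = np.vstack([-np.eye(n), A])      # G x <= h encodes x >= 0 and A x <= b
        h = np.concatenate([np.zeros(n), b])
        verts = []
        for rows in itertools.combinations(range(m + n), n):
            Gs, hs = G[list(rows)], h[list(rows)]
            if np.linalg.matrix_rank(Gs) < n:
                continue
            x = np.linalg.solve(Gs, hs)
            if np.all(G @ x <= h + 1e-9):   # feasibility check with tolerance
                verts.append(x)
        return np.array(verts)

    # Hypothetical convex objective psi(x) = sum(alpha_i x_i + beta_i x_i**2), beta_i >= 0
    alpha = np.array([1.0, 2.0])
    beta = np.array([0.5, 1.0])
    A = np.array([[3.0, 1.0]])
    b = np.array([9.0])

    psi = lambda x: alpha @ x + beta @ x ** 2
    best = max(vertices(A, b), key=psi)
    print("best summit:", best, "psi:", psi(best))

Because Ψ is convex, its maximum over the polytope is attained at one of these summits, which is exactly why the summit-to-summit strategy works.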

DECOMPOSITION OF THE OBJECTIVE FUNCTION

Consider the following problem:

Max f(x), x∈Ω        (1)

where Ω = {x∈Rn: x ≥ 0 and Ax ≤ b}, with an m×n matrix A and a vector b of Rm+. It is easy to see that one can restrict the study to the following case:

f(x) = Σi (αixi + βixi²), i = 1, ..., n

where the coefficients αi and βi are any real numbers, so that the function f(x) is neither concave nor convex. First, let us see how one can optimize the function f(x).

To do this, let:

φ(x) = Σ{i: βi<0} (αixi + βixi²)

where the summation runs over all the indices i for which βi < 0, and

Ψ(x) = Σ{i: βi≥0} (αixi + βixi²)

where the sum extends over the indices i for which βi ≥ 0. In this way, a decomposition of the function f(x) holds as follows: f(x) = φ(x) + Ψ(x).
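A minimal sketch of this decomposition in Python, with hypothetical coefficients (the split depends only on the signs of the βi):

    import numpy as np

    # Split f(x) = sum_i (alpha_i x_i + beta_i x_i**2) into a concave part phi
    # (indices with beta_i < 0) and a convex part psi (indices with beta_i >= 0).
    def decompose(alpha, beta):
        neg = beta < 0
        pos = ~neg
        f = lambda x: alpha @ x + beta @ x ** 2
        phi = lambda x: alpha[neg] @ x[neg] + beta[neg] @ x[neg] ** 2
        psi = lambda x: alpha[pos] @ x[pos] + beta[pos] @ x[pos] ** 2
        return f, phi, psi

    alpha = np.array([3.0, 1.0, 2.0])       # hypothetical data
    beta = np.array([-1.0, 2.0, -0.5])
    f, phi, psi = decompose(alpha, beta)
    x = np.array([1.0, 2.0, 0.5])
    assert np.isclose(f(x), phi(x) + psi(x))    # f = phi + psi by construction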

In the first part of the present work we begin with the study of the concave function φ(x).

We give here the exact optimal solution by projecting the critical point x* = (-αi/2βi)i onto a new convex set Ω'. If the critical point x* is inside the convex Ω, then the optimal solution holds exactly at this point (Remark 1). Otherwise, the direct projection of x* onto Ω gives only an approximate solution (Example 2).
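A small sketch of this membership test, under assumed data (α, β, A and b are illustrative, not the paper's):

    import numpy as np

    # If the critical point x* = (-alpha_i / (2 beta_i))_i lies in Omega,
    # the maximum of phi is attained exactly there (Remark 1).
    alpha = np.array([4.0, 2.0])
    beta = np.array([-1.0, -2.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([5.0])

    x_star = -alpha / (2.0 * beta)
    inside = bool(np.all(x_star >= 0) and np.all(A @ x_star <= b))
    print("x* =", x_star, "; x* in Omega:", inside)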

We show in Theorem 1 that Max{φ(x): x∈Ω} = φ(T⁻¹(y0)), where y0 is the projection of the point T(x*) onto the convex Ω'.

Let T: Ω⊂Rn → Ω'⊂Rn be the map that transforms the convex Ω into the convex Ω'. Its Jacobian determinant is non-zero, because βi < 0 for all i; hence T is conformal.

OPTIMIZATION OF THE CONCAVE FUNCTION φ
Let Ω be a closed, bounded convex set of Rn. Let us put:

φ(x) = Σi (αixi + βixi²), with βi < 0 for every i

Theorem: There exists a closed, bounded convex set Ω' of Rn and a vector y0 = (y0i)i ∈ Ω', such that the following statements are satisfied:

(1) Ω' = T(Ω), where T(x) = Λx and Λ = diag(√(-β1), ..., √(-βn))
(2) y0 is the projection of y* = T(x*) onto Ω'
(3) Max{φ(x): x∈Ω} = φ(T⁻¹(y0)) = Σi αi²/(-4βi) - ‖y0 - y*‖²

Proof:
For every x∈Ω, let y = T(x), that is, yi = √(-βi)xi.

Then

φ(x) = Σi (αixi + βixi²)

But

αixi + βixi² = -(√(-βi)xi - αi/(2√(-βi)))² + αi²/(-4βi)

Therefore:

φ(x) = Σi αi²/(-4βi) - ‖y - y*‖², where y*i = αi/(2√(-βi))

Maximizing φ over Ω thus amounts to minimizing ‖y - y*‖ over Ω' = T(Ω); the minimum is attained at the projection y0 of y* onto Ω', and the maximum of φ can be written in the following form:

Max{φ(x): x∈Ω} = Σi αi²/(-4βi) - ‖y0 - y*‖²

Hence property (3).
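The theorem can be illustrated numerically with a minimal sketch; the data are assumed, and the projection onto Ω' is delegated to a generic solver (scipy's minimize, which is not part of the method itself):

    import numpy as np
    from scipy.optimize import minimize

    # Maximize phi(x) = sum(alpha_i x_i + beta_i x_i**2), beta_i < 0, over
    # Omega = {x >= 0, A x <= b}, by projecting y* = T(x*) onto Omega' = T(Omega),
    # with T(x) = Lambda x and Lambda = diag(sqrt(-beta_i)).
    alpha = np.array([4.0, 2.0])            # hypothetical data
    beta = np.array([-1.0, -2.0])
    A = np.array([[3.0, 2.0]])
    b = np.array([6.0])

    lam = np.sqrt(-beta)                    # diagonal of Lambda
    y_star = alpha / (2.0 * lam)            # y* = T(x*), with x* = -alpha/(2 beta)

    # Projection of y* onto Omega' = {y >= 0, A (y / lam) <= b}: a convex QP.
    cons = [{"type": "ineq", "fun": lambda y: b - A @ (y / lam)}]
    res = minimize(lambda y: np.sum((y - y_star) ** 2), y_star,
                   bounds=[(0, None)] * len(y_star), constraints=cons)
    y0 = res.x
    x0 = y0 / lam                           # optimal point T^{-1}(y0) in Omega

    phi = lambda x: alpha @ x + beta @ x ** 2
    print("argmax:", x0, "max phi:", phi(x0))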

Remark 1: If x*∈Ω, then y* = T(x*)∈Ω', so y0 = y* and the maximum of φ is attained exactly at the critical point x*, where Max φ = Σi αi²/(-4βi).

Remark 2: The transformation T: Ω⊂Rn→Ω'⊂Rn, associating to each x∈Ω the point T(x) = Λx with Λ = diag(√(-β1), ..., √(-βn)), has Λ itself as Jacobian matrix.

Its determinant is Πi √(-βi) ≠ 0. Hence T is conformal.

Remark 3: The convex Ω is bounded. We can restrict ourselves to the case αi ≥ 0 for each i. Indeed, assume that we have αixi + βixi² with αi < 0.
Let Max(xi) = δi* and yi = δi* - xi ≥ 0. Therefore:

αixi + βixi² = βiyi² + αi'yi + Ki

where αi' = -αi - 2δi*βi ≥ 0 and Ki = αiδi* + βiδi*².

Therefore, it is sufficient to replace xi with δi* - yi and αixi + βixi² with βiyi² + αi'yi + Ki.
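The substitution of Remark 3 can be verified symbolically; a minimal sympy sketch (the symbol names are illustrative):

    import sympy as sp

    # Check that alpha*(delta - y) + beta*(delta - y)**2 equals
    # beta*y**2 + alpha_prime*y + K with alpha_prime = -alpha - 2*delta*beta
    # and K = alpha*delta + beta*delta**2.
    a, B, d, y = sp.symbols("alpha beta delta y")
    original = a * (d - y) + B * (d - y) ** 2
    alpha_prime = -a - 2 * d * B
    K = a * d + B * d ** 2
    assert sp.expand(original - (B * y ** 2 + alpha_prime * y + K)) == 0
    print("substitution verified")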

Example 1:

In this case β1 = β2 = -1, then Ω = Ω'.

The critical point

where, a = (3, 1), b = 9.

Here

Example 2:

In this case β1 = -1 and β2 = -2, so Ω ≠ Ω'.

The critical point because

where, a = (3, 2), b = 6

Besides, the projection of this point onto Ω' gives the following point y0:

Note that the projection of the point x* onto Ω gives the point ((16/13), (15/13)) and φ((16/13), (15/13)) = 11.207.

Obviously

The point ((16/13), (15/13)) of the convex Ω is the nearest point to the critical point x*, but it is not the optimal solution of φ.

Example 3:

We have φ(x3) = 3x3 - x3²; its critical point is x* = (3/2) and

On the other hand.

Ψ is a convex function and its optimal solution is an extreme point (a summit of Ω).

(Belabbaci, 2007). But

Therefore Max

Hence,

Remark 4: If we put ;

THE NEW FORM OF THE FUNCTION

Let us compute the new coefficients αi', βi' of the function Ψ at the time of the passage from an extreme point to a neighbouring extreme point.

Let i0 be the index of the leaving vector and j0 that of the entering vector. From the pivot row, we have:


The value of the function then becomes:

We replace it by its value:

where but

then,

We obtain the form of the coefficients for the first phase:

The form of the function then becomes:

Method for computing γ*:

The expression

may be computed in the following way: take the Cartesian product of the set {ai0.j}, excluding the diagonal and the pivot row, using the following Table 1:

This expression is equivalent to the following one:

Hence, the new form of the coefficients for any phase is:

for and , where:

Remark 5: Generally, these coefficients will be taken from the table of the Cartesian product of the pivot row multiplied by:

For more details, see Belabbaci (2007). For the matrix of the constraints, these coefficients are changed in the same way as those of the simplex (Dantzig, 1963).
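To make the appearance of the double products concrete, here is an illustrative sympy sketch of a pivot-row substitution; the relation x1 = θ - a2·x2 - a3·x3 and the symbol names are hypothetical, not the paper's exact update formulas:

    import sympy as sp

    # Substituting a pivot-row relation into psi makes new linear and quadratic
    # coefficients appear, together with cross terms (the "double products").
    x1, x2, x3 = sp.symbols("x1 x2 x3")
    a2, a3, theta = sp.symbols("a2 a3 theta")
    alpha1, beta1 = sp.symbols("alpha1 beta1")

    psi = alpha1 * x1 + beta1 * x1 ** 2
    substituted = sp.expand(psi.subs(x1, theta - a2 * x2 - a3 * x3))

    poly = sp.Poly(substituted, x2, x3)
    print(poly.coeff_monomial(x2))          # new alpha_2'
    print(poly.coeff_monomial(x2 ** 2))     # new beta_2'  (= beta1 * a2**2)
    print(poly.coeff_monomial(x2 * x3))     # double-product term (= 2*beta1*a2*a3)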

Table 1: Computing γ*

Example 4: Consider the following quadratic program (in standard form):

The function is

We will pass from the first extreme point (0, 0, 0) to the second one (6, 0, 0).

Table 2 contains the initial values of α and β, taken from (P), and we compute the values of θ and Δ with the expressions:

Table 3 contains the Cartesian product of the set (ai0.j) = (1, -1, 0, 1, 0), excluding the diagonal and the pivot row.

because
Here, and

The function will be

To pass to the next extreme point (8, 2, 0), Table 4 gives the new values of α, β, θ, Δ.

Table 5 contains the Cartesian product of the set (ai0.j), excluding the diagonal and the pivot row.


Table 2: Initial values of α, β, θ, Δ

Table 3: Computing γ*24

Table 4: Computed values of α, β, θ, Δ for the second phase

Table 5: Computing γ*45

Then, because in the previous expression of

Here,

The function will be

Passage to the next extreme point (2, 5, 9)

Table 6 gives the new values of α, β, θ, Δ.

Table 7 contains the Cartesian product of the set (ai0.j) = (0, 0, 1, 1, 0), excluding the diagonal and the pivot row;

Then

Table 6: Computed values of α, β, θ, Δ for the third phase

Table 7: Computing γ*35

Table 8: Computed values of α, β, θ, Δ for the last phase

because in the previous expression of Ψ, a15 = 0.

Note that and then

The function finally appears in the form:

Passage to the next extreme point. Table 8 gives the new values of α, β, θ, Δ.

Since all Δ < 0, we stop: the last value of Ψ(x) is the optimum.

Remark 6: We use the same technique to compute the new coefficients of f during the passage from one extreme point to a neighbouring one.

DISCUSSION

After the decomposition of the objective function into two functions, one concave (φ) and the other convex (Ψ), if the critical point belongs to Ω (the feasible constraint set), then the optimal solution of f is the optimal solution of φ.

If the critical point does not belong to Ω, then the optimal solution of f is obtained from the projection of the critical point onto Ω' (the new convex set obtained by the transformation T).

The value of the convex function Ψ is computed as discussed in part 3.

This study has also shown the technique of computing the value of the function Ψ while going from one summit to another.

By passing from one summit to another, double products appear, and this requires a technique to compute them, which was presented earlier. The theorem remains valid if the convex Ω is replaced by a closed and lower-bounded convex set in Rn.

Unlike previous techniques, which are either analytical, producing an exact solution by going through a linearization process, or numerical, producing approximate solutions (Bierlaire, 2006; Dong, 2006), the technique discussed here produces the exact solution without going through linearization; it has thus enlarged the field of applications.

REFERENCES

  • Achmanov, S., 1984. Programmation Lineaire Traduction Francaise. MIR, Moscou


  • Belabbaci, A., 2007. La programmation quadratique et son application a la programmation separable. Memoire de Magistere, Universite de Laghouat.


  • Bierlaire, M., 2006. Introduction a l'Optimisation Differentiable. Presses Polytechniques et Universitaires Romandes, Lausanne, ISBN-10: 2-88074-669-8


  • Dantzig, G., 1963. Linear Programming and Extensions. Princeton University Press, Princeton, NJ., pp: 10


  • De Werra, D., T.M. Liebling and J.F. Heche, 2003. Recherche Operationnelle Pour Ingenieur I. Presses Polytechniques et Universitaires Romandes, Lausanne, Switzerland


  • Delbos, F. and J.C. Gilbert, 2005. Global linear convergence of an augmented Lagrangian algorithm for solving convex quadratic optimization problems. J. Convex Anal., 12: 45-69.


  • Dozzi, M., 2004. Methodes nonlineaires et stochastiques de la recherche operationnelle. http://www.iecn.u-nancy.fr/~dozzi/RechercheOperationnelle3/prognonlin.pdf.


  • Simonnard, M., 1973. Programmation lineaire techniques du calcul economique 2. Extensions, Dunod. Paris, 40: 32-45.


  • Dong, S., 2006. Methods for Constrained Optimization. Springer, New York
