
Journal of Applied Sciences

Year: 2009 | Volume: 9 | Issue: 18 | Page No.: 3402-3406
DOI: 10.3923/jas.2009.3402.3406
Efficient Hybrid Algorithm for Solving Large Scale Constrained Linear Programming Problems
H. Navidi, A. Malek and P. Khosravi

Abstract: The aim of this study is to find an exact least 2-norm solution to a primal constrained linear programming problem in the standard form. Moreover, we can generate an exact solution to the dual programming problem from the exact least 2-norm solution to the primal problem. The proposed algorithm is suitable for solving linear programming problems with a very large number of variables (10^5) and a very large number of constraints (10^6). The exact least 2-norm solution to the primal problem is based on the minimization of the exterior penalty function of the dual programming problem. For this minimization we use the generalized Newton method together with the strong Wolfe conditions, which provide a suitable step size. The hybrid algorithm converges to the correct optimal solution independently of the given starting point. It is shown that some problems cannot be solved without the strong Wolfe conditions. Unlike MATLAB's solver, the new algorithm obtains the least 2-norm solution. Numerical results for a subset of problems from the Netlib collection and a subset of generated large scale linear programs are given. The hybrid algorithm is easy to implement and computationally very efficient. Comparisons are made with the available literature.


How to cite this article
H. Navidi, A. Malek and P. Khosravi, 2009. Efficient Hybrid Algorithm for Solving Large Scale Constrained Linear Programming Problems. Journal of Applied Sciences, 9: 3402-3406.

Keywords: least 2-norm solution, penalty function, large scale linear programming, Newton method, strong Wolfe conditions

INTRODUCTION

A generalized Newton method was proposed for the unconstrained minimization of strongly convex, piecewise quadratic functions arising from quadratic programs (Mangasarian, 1995). Following this approach, Golikov and Evtushenko (2000) investigated the least 2-norm formulation of a linear program and used an exterior penalty function to solve the primal LP problem; minimization of the exterior penalty function provides an exact least 2-norm solution to the dual problem. Instead of a penalty function, Evtushenko et al. (2004) proposed using an augmented Lagrangian, obtaining an exact solution of the dual problem and an exact least 2-norm solution of the primal problem. Ketabchi et al. (2009) minimized the augmented Lagrangian function with the Armijo step-size regulation and solved large scale linear programming problems. Here, we use the exterior penalty function of the dual to generate the exact least 2-norm solution to the primal problem in the standard form, and we employ the strong Wolfe conditions to find a suitable step size in each iteration. Our approach finds an exact least 2-norm solution of the primal problem in the standard form and an exact solution of the dual problem. We give encouraging comparative test results against MATLAB on a class of generated large scale linear programs in the standard form and a subset of problems from the Netlib collection.

Notation: For a vector x in the n-dimensional real space R^n, x_+ denotes the vector with components (x_i)_+ = max{0, x_i}, i = 1, …, n; that is, x_+ replaces the negative components of x by zero. The vector x_* = ((x_1)_*, …, (x_n)_*) is defined by (x_i)_* = 1 if x_i > 0 and (x_i)_* = 0 if x_i ≤ 0, i = 1, …, n. The 2-norm of x will be denoted by ||x|| and, for a matrix A ∈ R^(m×n), ||A|| is the 2-norm of A. a_i is the ith column of A and A_S is the submatrix of A consisting of the columns a_i with i ∈ S ⊆ {1, …, n}.
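As a quick illustration of this notation, here is a minimal Python sketch (the function names plus and star are ours, not from the paper):

```python
# (x)_+ replaces the negative components of x by zero;
# (x)_* marks the strictly positive components with 1 and the rest with 0.
def plus(x):
    return [max(0.0, xi) for xi in x]

def star(x):
    return [1.0 if xi > 0 else 0.0 for xi in x]

x = [3.0, -1.5, 0.0, 2.0]
print(plus(x))  # [3.0, 0.0, 0.0, 2.0]
print(star(x))  # [1.0, 0.0, 0.0, 1.0]
```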

THE EXACT LEAST 2-NORM SOLUTION

Consider the primal linear programming problem in the standard form:

min { c^T x : Ax = b, x ≥ 0 }    (1)

Together with its dual:

max { b^T w : A^T w ≤ c }    (2)

where A ∈ R^(m×n), c ∈ R^n and b ∈ R^m.

Let us denote by L = inf {c^T x : Ax = b, x ≥ 0} the optimal value of the primal problem (1). This study aims to develop an exact least 2-norm solution of the primal problem (1); that is, we seek the x that satisfies the following problem:

min { (1/2)||x||^2 : c^T x = L, Ax = b, x ≥ 0 }    (3)

The standard method for finding a minimum norm solution of a convex program is based on the Tikhonov regularization (Luca et al., 1996). In order to solve (3), the Tikhonov regularization generates a sequence of iterates {xk}, with xk being the unique solution of the regularized program:

x^k = arg min { c^T x + (ε_k/2)||x||^2 : Ax = b, x ≥ 0 }    (4)

where ε_k > 0 and the sequence {ε_k} tends to zero. Due to the special properties of linear programs, it is known that the solution of a single quadratic program (4) with a sufficiently small but positive parameter ε_k already gives a solution of (3) (Fischer and Kanzow, 1996). Thus, from now on we focus on the solution of the following program:

min { c^T x + (ε/2)||x||^2 : Ax = b, x ≥ 0 }    (5)

Since the objective function of (5) is strongly convex, its solution is unique. The necessary and sufficient Karush-Kuhn-Tucker optimality conditions for problem (5) are that there exists w ∈ R^m such that:

εx + c - A^T w ≥ 0,  x ≥ 0,  x^T(εx + c - A^T w) = 0,  Ax = b    (6)

or equivalently:

x = (1/ε)(A^T w - c)_+    (7)

A(A^T w - c)_+ = εb    (8)

Motivated by (8), let us introduce the function:

f(w) = (1/2)||(A^T w - c)_+||^2 - εb^T w    (9)

The function f(w) is the exterior penalty function of the dual linear program (2). Thus, the optimality conditions (7) and (8) take the form:

∇f(w) = A(A^T w - c)_+ - εb = 0    (10)

The primal least 2-norm solution corresponding to (10) is: find x ∈ R^n such that:

x = (1/ε)(A^T w - c)_+,  Ax = b    (11)

which is precisely the necessary and sufficient condition for w to be a minimum solution of the parametric exterior penalty function f(w) for the dual program (2). Hence, minimization of the function f(w) provides a solution x = (1/ε)(A^T w - c)_+ to (5). If, in addition, ε ≤ ε0 for some positive ε0, it follows from Mangasarian and Meyer (1979) that x is a least 2-norm solution to the primal linear program (1). The following theorem provides a criterion for the exact least 2-norm solution.
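The penalty function and the primal recovery formula can be sketched in Python with NumPy; this follows our reading of (7) and (9), and all function names are ours:

```python
import numpy as np

def f(w, A, b, c, eps):
    # Exterior penalty function of the dual: 0.5*||(A'w - c)_+||^2 - eps*b'w
    r = np.maximum(A.T @ w - c, 0.0)
    return 0.5 * (r @ r) - eps * (b @ w)

def grad_f(w, A, b, c, eps):
    # Gradient: A (A'w - c)_+ - eps*b; it vanishes exactly when Ax = b
    # for the recovered x below
    return A @ np.maximum(A.T @ w - c, 0.0) - eps * b

def primal_from_dual(w, A, c, eps):
    # Least 2-norm primal candidate: x = (1/eps)(A'w - c)_+
    return np.maximum(A.T @ w - c, 0.0) / eps
```

A finite-difference check confirms that grad_f is the gradient of f wherever A^T w - c has no zero component.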

Theorem 1: The exact least 2-norm solution of the linear program (1) is obtained by x = (1/ε)(A^T w - c)_+ for any positive ε such that ε ≤ ε0 for some positive ε0, where w is a solution of the dual penalty problem: min { f(w) : w ∈ R^m }.

AN EXACT SOLUTION TO THE DUAL PROGRAM

A minimum solution of the exterior penalty function f(w), denoted w, will in general lie outside the boundary of the feasible region of the dual program (2) (Rao, 1984). In order to generate an exact solution to the dual program (2), we must force the dual variable onto the boundary of the corresponding feasible region. For this to happen we must find a vector v ∈ R^m such that:

A^T v ≤ c,  x^T(c - A^T v) = 0

Equivalently we can write:

(a_j)^T v = c_j for j ∈ S = {j : x_j > 0}  and  (a_j)^T v ≤ c_j for j ∉ S

From (7), since x_j = (1/ε)((a_j)^T w - c_j)_+, it follows that:

A_S^T v = c_S    (12)

Therefore, the exact least 2-norm solution x of the primal linear program (1) can be utilized to generate an exact solution to the dual program (2) by solving (12); moreover, (12) yields an exact solution of the dual program (2) under the additional assumption that the submatrix A_S has linearly independent rows.
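Under this reading of (12), the dual recovery step reduces to a linear system on the support of x. A hypothetical NumPy sketch (exact_dual and its arguments are our names):

```python
import numpy as np

def exact_dual(x, A, c, tol=1e-10):
    # Support of the least 2-norm primal solution x
    S = np.flatnonzero(x > tol)
    # Solve A_S' v = c_S; lstsq returns an exact solution whenever
    # the system is consistent
    v, *_ = np.linalg.lstsq(A[:, S].T, c[S], rcond=None)
    return v
```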

GENERALIZED NEWTON METHOD

For the piecewise quadratic function f(w), the ordinary Hessian does not exist because its gradient:

∇f(w) = A(A^T w - c)_+ - εb

is not differentiable. However, one can define its generalized Hessian (Mangasarian, 2002), which is the m×m positive semi-definite symmetric matrix:

∂²f(w) = A diag(A^T w - c)_* A^T    (13)

where diag(A^T w - c)_* denotes the n×n diagonal matrix with diagonal elements ((a_i)^T w - c_i)_*, i = 1, …, n. The generalized Hessian (13) has many properties of the regular Hessian (Mangasarian, 2002). The matrix ∂²f(w) will be used to generate the Newton direction; since the generalized Hessian may be singular, the direction that will actually be used is the following one:

d = -(∂²f(w) + σI)^(-1) ∇f(w)

where σ is some small positive number.
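Following (13) and the perturbed direction above, one possible NumPy sketch (the names are ours):

```python
import numpy as np

def newton_direction(w, A, b, c, eps, sigma=1e-4):
    t = A.T @ w - c
    s = (t > 0).astype(float)             # diagonal of diag(A'w - c)_*
    H = (A * s) @ A.T                     # generalized Hessian A diag(s) A'
    g = A @ np.maximum(t, 0.0) - eps * b  # gradient of the penalty function
    # sigma*I makes the system solvable even when H is singular
    return -np.linalg.solve(H + sigma * np.eye(A.shape[0]), g)
```

Since H + σI is positive definite, the returned d satisfies ∇f(w)^T d < 0 whenever ∇f(w) ≠ 0, i.e. it is a descent direction.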

ALGORITHM

In order to guarantee global convergence, we utilize a strong Wolfe step size:

Step 0: Choose w^0 ∈ R^m and small positive numbers ε, tol and σ; set i = 0
Step 1: Compute the direction d^i = -(∂²f(w^i) + σI)^(-1) ∇f(w^i) and a step size α_i satisfying the strong Wolfe conditions
Step 2: Set w^(i+1) = w^i + α_i d^i; if ||∇f(w^(i+1))|| ≤ tol, go to Step 3, otherwise set i = i + 1 and return to Step 1
Step 3: Put x = (1/ε)(A^T w^(i+1) - c)_+ and find a solution v of (12)
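Putting the pieces together, a compact Python sketch of the loop; we use scipy.optimize.line_search, which implements a strong Wolfe step size, and every name here is ours, not from the paper:

```python
import numpy as np
from scipy.optimize import line_search

def solve_lp_least_norm(A, b, c, eps=1e-2, sigma=1e-4, tol=1e-8, max_iter=200):
    # Generalized Newton method on the exterior penalty function of the dual,
    # with a strong Wolfe step size; returns a least 2-norm primal candidate.
    f = lambda w: 0.5 * np.sum(np.maximum(A.T @ w - c, 0.0)**2) - eps * (b @ w)
    fp = lambda w: A @ np.maximum(A.T @ w - c, 0.0) - eps * b
    w = np.zeros(A.shape[0])                    # starting point w^0 = 0
    for _ in range(max_iter):
        g = fp(w)
        if np.linalg.norm(g) <= tol:
            break
        s = (A.T @ w - c > 0).astype(float)
        H = (A * s) @ A.T                       # generalized Hessian
        d = -np.linalg.solve(H + sigma * np.eye(len(w)), g)
        alpha = line_search(f, fp, w, d)[0]     # strong Wolfe step size
        w = w + (alpha if alpha is not None else 1.0) * d
    return np.maximum(A.T @ w - c, 0.0) / eps, w
```

On the toy program min x1 + 2·x2 subject to x1 + x2 = 1, x ≥ 0, this sketch recovers the least 2-norm solution x = (1, 0).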

We state a convergence result for this algorithm now.

Theorem 2: Each accumulation point w* of the sequence {w^i} generated by the above algorithm solves the exterior penalty problem:

min { f(w) : w ∈ R^m }

The corresponding x* obtained by setting w to w* in (7) is the exact least 2-norm solution to the primal program (1), provided ε is sufficiently small. An exact solution v* to the dual linear program (2) is obtained from (12), provided that the submatrix A_S of A has linearly independent rows (Mangasarian, 1995).

NUMERICAL TEST

Here, we present some numerical results for a subset of the problems given in the Netlib collection (http://www.netlib.org) and a subset of randomly generated problems. For the latter we use a MATLAB m-file that is a linear programming test problem generator:

m = input('m: '); n = input('n: '); d = input('d: ');
pl = inline('(abs(x)+x)/2');                 % plus function (x)_+
A = sprand(m,n,d); A = 100*(A - 0.5*spones(A));
x = spdiags(sign(pl(rand(n,1)-rand(n,1))),0,n,n)*10*rand(n,1);  % primal solution, x >= 0
b = A*x;
u = 10*(rand(m,1)-rand(m,1));                % dual solution
c = A'*u + spdiags(ones(n,1)-sign(pl(x)),0,n,n)*10*ones(n,1);   % complementary slackness

To test the proposed algorithm we solved a number of linear programs produced by this LP generator on a 3.06 GHz Pentium IV machine with 1 GB of memory running Windows XP-SP2, and compared it with Linprog in MATLAB 7.6. The starting vector used in all of the following examples is w^0 = 0. Results are presented in Table 1, where m and n give the size of the matrix A and d stands for its density. The total computation time for each example is given in the last column of the table. ||Ax*-b||, ||(-x*)_+|| and ||(A^T v*-c)_+|| show the accuracy of the feasibility conditions of the LP problems and |b^T v*-c^T x*| shows the accuracy of the optimality condition. Analysis of Table 1 shows that the algorithm proposed in this paper finds normal solutions for large scale problems. It also shows that for some problems Linprog gives no solution, reporting Out Of Memory (O.O.M).

Table 2 shows that the strong Wolfe conditions are an important tool in this algorithm. For some problems, the proposed algorithm does not work well without the strong Wolfe conditions. We present a number of these problems in Table 2.

Various problems from Netlib were solved by the proposed algorithm and by Linprog in MATLAB. The comparison between the two methods is given in Table 3.

In this study, we used an exterior penalty function to find the least 2-norm solution to the primal problem. The problems were solved by a fast generalized Newton method, and we investigated its finite global convergence under the strong Wolfe step-size regulation.

Table 1: Comparison between the proposed algorithm and the Linprog solver for various randomly generated problems of different sizes and densities

Table 2: Comparison between the proposed algorithm without and with the strong Wolfe conditions for various randomly generated problems of different sizes and densities

Table 3: Comparison between the proposed algorithm and the Linprog solver for various problems from Netlib

The method was analyzed for its solution and convergence properties. A subset of problems from the Netlib collection and a subset of generated large scale linear programs were solved using this method. The comparison between the two methods shows that the algorithm proposed in this study finds normal solutions for large scale problems and for Netlib problems. It also shows that for some problems MATLAB gives no solution, reporting out of memory.

REFERENCES

  • Evtushenko, Y.G., A.I. Golikov, S. Ketabchi and N. Mollaverdi, 2004. Augmented Lagrangian Methods for Linear Programming. In: Dynamics of Non-Homogeneous Systems, Popkov, Y.S. (Ed.). Russian Academy of Sciences Institute for System Analysis, Russia, pp: 101-106


  • Fischer, A. and C. Kanzow, 1996. On finite termination of an iterative method for linear complementarity problems. Math. Programm., 74: 279-292.


  • Golikov, A.I. and Y.G. Evtushenko, 2000. Searching for normal solutions in linear programming problems. Comput. Math. Math. Phys., 40: 1694-1714.


  • Ketabchi, S., H. Navidi and N. Mollaverdi, 2009. Modified Lagrangian function and Armijo rule for normal solution of the LP problem. Far East J. Applied Math., 35: 103-111.


  • Luca, T.D., F. Facchinei and C. Kanzow, 1996. A semismooth equation approach to the solution of nonlinear complementarily problems. Math. Programm., 75: 407-439.


  • Mangasarian, O.L., 1995. Parallel gradient distribution in unconstrained optimization. SIAM J. Control Optimizat., 33: 1916-1925.


  • Mangasarian, O.L., 2002. A finite newton method for classification problems. Optimizat. Methods Software, 17: 913-929.


  • Mangasarian, O.L. and R.R. Meyer, 1979. Nonlinear perturbation of linear programs. SIAM J. Control Optimizat., 17: 745-752.


  • Rao, S.S., 1984. Optimization: Theory and Applications. 2nd Edn., Wiley Eastern, New York, ISBN: 0-470-27483-2
