INTRODUCTION
A Newton method was proposed for the unconstrained minimization of the strongly convex,
piecewise quadratic function arising from quadratic programs (Mangasarian,
1995). Following this approach, Golikov and Evtushenko
(2000) investigated the least 2-norm formulation of a linear program and
used an exterior penalty function to solve the primal LP problem. Minimization of
the exterior penalty function provides an exact least 2-norm solution to
the dual problem. Instead of a penalty function, Evtushenko
et al. (2004) proposed to use the augmented Lagrangian, which yields
an exact solution of the dual problem and an exact least 2-norm solution of the primal
problem. Ketabchi et al. (2009) used minimization
of the augmented Lagrangian function with the Armijo stepsize rule and
solved large-scale linear programming problems. Here we use the exterior penalty
function of the dual to generate the exact least 2-norm solution to the primal
problem in the standard form and we employ the strong Wolfe conditions to find the
stepsize in each iteration. Our approach is to find an exact least 2-norm solution
of the primal problem in the standard form and an exact solution of the dual problem.
We give encouraging comparative test results with MATLAB on a class of generated
large-scale linear programs in the standard form and on a subset of problems from
the Netlib collection.
Notation: For a vector x in the n-dimensional real space R^{n}, x_{+} denotes the vector with components (x_{+})_{i} = max{0, x_{i}}, i = 1, …, n, that is, x_{+} replaces the negative components of x by zero. The vector x_{*} = ((x_{1})_{*}, …, (x_{n})_{*}) is defined by (x_{i})_{*} = 1 if x_{i} > 0 and (x_{i})_{*} = 0 if x_{i} ≤ 0, i = 1, …, n. The 2-norm of x will be denoted by ||x|| and, for a matrix A∈R^{m×n}, ||A|| is the 2-norm of A. a_{i} is the ith column of A and A_{S} is the submatrix of A consisting of the columns a_{i} with i∈S ⊆ {1, …, n}.
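As a concrete illustration of the two componentwise operations defined above, the following pure-Python sketch (the helper names plus and star are our own, not from the paper) applies (·)_{+} and (·)_{*} to a sample vector:

```python
def plus(x):
    """Componentwise (x)_+ : replace each negative component of x by zero."""
    return [max(xi, 0.0) for xi in x]

def star(x):
    """Componentwise (x)_* : 1 where x_i > 0, else 0 (used later in the generalized Hessian)."""
    return [1.0 if xi > 0 else 0.0 for xi in x]

print(plus([-2.0, 0.0, 3.0]))  # [0.0, 0.0, 3.0]
print(star([-2.0, 0.0, 3.0]))  # [0.0, 0.0, 1.0]
```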
THE EXACT LEAST 2-NORM SOLUTION
Consider the primal linear programming problem in the standard form:

min c^{T}x subject to Ax = b, x ≥ 0    (1)

together with its dual:

max b^{T}w subject to A^{T}w ≤ c    (2)

where, A∈R^{m×n}, c∈R^{n} and b∈R^{m}.
Let us denote by L = inf {c^{T}x: Ax = b, x ≥ 0} the optimal value of the primal problem (1). This study aims to develop an exact least 2-norm solution of the primal problem (1), that is, a solution of the following problem:

min (1/2)||x||^{2} subject to Ax = b, x ≥ 0, c^{T}x = L    (3)
The standard method for finding a minimum norm solution of a convex program
is based on the Tikhonov regularization (Luca et al.,
1996). In order to solve (3), the Tikhonov regularization generates a sequence
of iterates {x_{k}}, with x_{k} being the unique solution of
the regularized program:

min c^{T}x + (ε_{k}/2)||x||^{2} subject to Ax = b, x ≥ 0    (4)

where, ε_{k} > 0 and the sequence {ε_{k}} tends to zero.
Due to the special properties of linear programs, it is known that the solution
of a single quadratic program (4) with a sufficiently small but positive parameter
ε_{k} already gives a solution of (3) (Fischer
and Kanzow, 1996). Thus, from now on we focus on the solution of the following
program with a fixed ε > 0:

min c^{T}x + (ε/2)||x||^{2} subject to Ax = b, x ≥ 0    (5)
Since the objective function of (5) is strongly convex, its solution is unique. The necessary and sufficient Karush-Kuhn-Tucker optimality conditions for problem (5) are that there exists w∈R^{m} such that:

Ax = b, x ≥ 0, εx + c - A^{T}w ≥ 0, x^{T}(εx + c - A^{T}w) = 0    (6)

which give:

x = (1/ε)(A^{T}w - c)_{+}, Ax = b    (7)

or equivalently:

A(A^{T}w - c)_{+} = εb    (8)
Motivated by (8), let us introduce the function:

f(w) = (1/2)||(A^{T}w - c)_{+}||^{2} - εb^{T}w    (9)

which is the exterior penalty function of the dual linear program (2). Thus, the optimality conditions (7, 8) take the form:

∇f(w) = A(A^{T}w - c)_{+} - εb = 0    (10)
The primal least 2-norm solution corresponding to (10) is obtained by finding x∈R^{n} such that:

x = (1/ε)(A^{T}w - c)_{+}    (11)
Here (10) is precisely the necessary and sufficient condition for w to be
a minimizer of the parametric exterior penalty function f(w) for the
dual program (2). Hence, minimization of the function f(w) provides the solution
x = (1/ε)(A^{T}w - c)_{+} of (5). If in addition ε ≤ ε̄
for some positive ε̄,
it follows by (Mangasarian and Meyer, 1979) that x
is a least 2-norm solution to the primal linear program (1). The following theorem
provides a criterion for the exact least 2-norm solution.
Theorem 1: The exact least 2-norm solution of the linear program (1) is
obtained by x = (1/ε)(A^{T}w - c)_{+} for any positive ε
such that ε ≤ ε̄ for some positive ε̄,
where, w is a solution of the dual penalty problem:

min_{w∈R^{m}} f(w) = (1/2)||(A^{T}w - c)_{+}||^{2} - εb^{T}w
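As an illustration of the dual penalty function and the recovery of x, the following pure-Python sketch evaluates f(w) and x = (1/ε)(A^{T}w - c)_{+} on a toy standard-form LP of our own choosing (min x_{1} + x_{2} s.t. x_{1} + x_{2} = 1, x ≥ 0, whose least 2-norm solution is (0.5, 0.5)); the helper names and the closed-form minimizer used below are assumptions for this tiny example, not part of the paper:

```python
def at_w(A, w):
    """A^T w for A stored as a list of rows."""
    return [sum(A[i][j] * w[i] for i in range(len(A))) for j in range(len(A[0]))]

def penalty(A, b, c, w, eps):
    """Exterior penalty f(w) = 0.5*||(A^T w - c)_+||^2 - eps*b^T w, eq. (9)."""
    r = [max(t - cj, 0.0) for t, cj in zip(at_w(A, w), c)]
    return 0.5 * sum(rj * rj for rj in r) - eps * sum(bi * wi for bi, wi in zip(b, w))

def recover_x(A, c, w, eps):
    """Least 2-norm primal candidate x = (1/eps)*(A^T w - c)_+, eq. (11)."""
    return [max(t - cj, 0.0) / eps for t, cj in zip(at_w(A, w), c)]

# Toy LP: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0.
A, b, c, eps = [[1.0, 1.0]], [1.0], [1.0, 1.0], 1e-2
w = [1.0 + eps / 2.0]           # for this 1-D problem, the minimizer of f in closed form
print(recover_x(A, c, w, eps))  # approximately [0.5, 0.5]
```

Note that the recovered x is the least 2-norm solution for every sufficiently small ε, illustrating the exactness claimed in Theorem 1.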
AN EXACT SOLUTION TO THE DUAL PROGRAM
A minimizer of the exterior penalty function f(w), called w, will always
be located in the exterior of the boundary of the feasible region of the dual program (2)
(Rao, 1984). In order to generate an exact solution to
the dual program (2), we must force w onto the boundary of the corresponding feasible region.
For this to happen we must find a vector v∈R^{m} such that:

A^{T}v ≤ c, x^{T}(c - A^{T}v) = 0

Equivalently, we can write:

x_{j}(c_{j} - (a_{j})^{T}v) = 0, j = 1, …, n, A^{T}v ≤ c

From (7), since x_{j} = (1/ε)((a_{j})^{T}w - c_{j})_{+}, it follows that x_{j} > 0 exactly when (a_{j})^{T}w > c_{j}. Hence, with S = {j: x_{j} > 0}:

A_{S}^{T}v = c_{S}    (12)

Therefore, the exact least 2-norm solution x of the primal linear program (1) can be utilized to generate an exact solution to the dual program (2) by solving (12); the system (12) yields an exact solution of the dual program (2) under the additional assumption that the submatrix A_{S} has linearly independent rows.
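The dual-recovery step can be sketched in pure Python as follows: given the least 2-norm x, we form the support S and solve the complementarity system a_{j}^{T}v = c_{j}, j∈S, here via its normal equations with a small hand-rolled Gaussian elimination. The toy LP and all helper names are our own illustrative assumptions:

```python
def solve(M, rhs):
    """Solve the small dense square system M v = rhs by Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [r] for row, r in zip(M, rhs)]   # augmented matrix
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            fac = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= fac * M[k][j]
    v = [0.0] * n
    for k in range(n - 1, -1, -1):
        v[k] = (M[k][n] - sum(M[k][j] * v[j] for j in range(k + 1, n))) / M[k][k]
    return v

def exact_dual(A, c, x):
    """Recover v with a_j^T v = c_j on the support S = {j : x_j > 0},
    via the normal equations A_S A_S^T v = A_S c_S (assumes full rank)."""
    m, S = len(A), [j for j, xj in enumerate(x) if xj > 0]
    AS = [[A[i][j] for j in S] for i in range(m)]
    G = [[sum(AS[i][k] * AS[l][k] for k in range(len(S))) for l in range(m)] for i in range(m)]
    rhs = [sum(AS[i][k] * c[S[k]] for k in range(len(S))) for i in range(m)]
    return solve(G, rhs)

# Toy LP: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0; least 2-norm solution x = (0.5, 0.5).
A, c, x = [[1.0, 1.0]], [1.0, 1.0], [0.5, 0.5]
v = exact_dual(A, c, x)   # exact dual solution; here v = [1.0]
```

For this example the recovered v satisfies A^{T}v ≤ c with b^{T}v = c^{T}x, so the duality gap is zero.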
GENERALIZED NEWTON METHOD
For the piecewise quadratic function f(w) the ordinary Hessian does not exist, because its gradient:

∇f(w) = A(A^{T}w - c)_{+} - εb

is not differentiable. However, one can define its generalized Hessian (Mangasarian,
2002), which is the m×m positive semidefinite symmetric matrix:

∂^{2}f(w) = A diag((A^{T}w - c)_{*}) A^{T}    (13)

where, diag((A^{T}w - c)_{*}) denotes the n×n diagonal matrix with diagonal
elements ((a_{i})^{T}w - c_{i})_{*}, i = 1, …,
n. The generalized Hessian (13) has many properties of the regular Hessian (Mangasarian,
2002). The matrix ∂^{2}f(w) will be used to generate the Newton
direction; since the generalized Hessian may be singular, a regularization term σI is added. In fact, the direction
that will be used is the following one:

d = -(∂^{2}f(w) + σI)^{-1}∇f(w)    (14)

where, σ is some small positive number.
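The gradient, generalized Hessian, and regularized direction above can be sketched compactly; for brevity this illustration (our own, with an assumed toy LP with a single dual variable, so the m×m solve in (14) reduces to a scalar division; a general code would use a Cholesky or LU solve):

```python
def newton_direction_1d(A, b, c, w, eps, sigma):
    """Generalized Newton direction for m = 1 (scalar dual variable w), eq. (14):
    d = -(grad^2 f(w) + sigma)^(-1) * grad f(w), where
    grad f(w) = A (A^T w - c)_+ - eps*b  and  grad^2 f(w) = A diag((A^T w - c)_*) A^T."""
    t = [A[0][j] * w - c[j] for j in range(len(A[0]))]            # A^T w - c
    grad = sum(A[0][j] * max(tj, 0.0) for j, tj in enumerate(t)) - eps * b[0]
    hess = sum(A[0][j] ** 2 for j, tj in enumerate(t) if tj > 0)  # diag of (.)_* terms
    return -grad / (hess + sigma)

# Toy LP: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0 (A = [1 1], b = 1, c = (1, 1)).
A, b, c, eps, sigma = [[1.0, 1.0]], [1.0], [1.0, 1.0], 1e-2, 1e-6
d = newton_direction_1d(A, b, c, 2.0, eps, sigma)
# from w = 2 a single Newton step lands near the penalty minimizer w* = 1 + eps/2
print(2.0 + d)
```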
ALGORITHM
In order to guarantee global convergence, we utilize a strong Wolfe stepsize:

• Choose w^{0}∈R^{m} and fix the tolerance tol and the small positive numbers ε and σ
• For i = 0, 1, …, compute d^{i} = -(∂^{2}f(w^{i}) + σI)^{-1}∇f(w^{i}) and the strong Wolfe stepsize α_{i}, and set w^{i+1} = w^{i} + α_{i}d^{i}
• If ||w^{i+1} - w^{i}|| ≤ tol, stop
• Put x = (1/ε)(A^{T}w^{i+1} - c)_{+} and find a solution v from (12)
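The steps above can be sketched end to end for a toy problem. The following pure-Python illustration (our own; the toy LP, starting point, and constants c1, c2 are assumptions) runs the generalized Newton iteration with a simplified line search that backtracks from α = 1 and accepts the first step satisfying the strong Wolfe conditions; a production code would use a full bracketing routine:

```python
# Toy LP: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0 (A = [1 1], b = 1, c = (1, 1)),
# so the dual variable w is a scalar.
A, b, c = [1.0, 1.0], 1.0, [1.0, 1.0]
eps, sigma, tol = 1e-2, 1e-6, 1e-10

def f(w):     # exterior penalty f(w) = 0.5*||(A^T w - c)_+||^2 - eps*b*w
    return 0.5 * sum(max(aj * w - cj, 0.0) ** 2 for aj, cj in zip(A, c)) - eps * b * w

def grad(w):  # gradient A (A^T w - c)_+ - eps*b
    return sum(aj * max(aj * w - cj, 0.0) for aj, cj in zip(A, c)) - eps * b

def hess(w):  # generalized Hessian A diag((A^T w - c)_*) A^T
    return sum(aj * aj for aj, cj in zip(A, c) if aj * w - cj > 0)

w, c1, c2 = 2.0, 1e-4, 0.9        # starting point and Wolfe constants (assumed)
for it in range(100):
    g = grad(w)
    d = -g / (hess(w) + sigma)    # regularized Newton direction (14)
    alpha = 1.0
    for _ in range(30):           # backtrack until the strong Wolfe conditions hold
        armijo = f(w + alpha * d) <= f(w) + c1 * alpha * g * d
        curvature = abs(grad(w + alpha * d) * d) <= c2 * abs(g * d)
        if armijo and curvature:
            break
        alpha *= 0.5
    w_new = w + alpha * d
    if abs(w_new - w) <= tol:     # stopping test ||w^{i+1} - w^i|| <= tol
        w = w_new
        break
    w = w_new

x = [max(aj * w - cj, 0.0) / eps for aj, cj in zip(A, c)]  # x = (1/eps)(A^T w - c)_+
print(w, x)   # w near 1 + eps/2, x near [0.5, 0.5]
```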
We now state a convergence result for this algorithm.

Theorem 2: Each accumulation point w* of the sequence {w^{i}} generated by the above algorithm solves the exterior penalty problem:

min_{w∈R^{m}} f(w) = (1/2)||(A^{T}w - c)_{+}||^{2} - εb^{T}w

The corresponding x* obtained by setting w to w* in (7) is the exact
least 2-norm solution to the primal program (1), provided ε is sufficiently
small. An exact solution v* to the dual linear program (2) is obtained from
(12), provided that the submatrix A_{S} of A has linearly independent
rows (Mangasarian, 1995).
NUMERICAL TEST
Here, we present some numerical results for a subset of the problems given
in the Netlib collection (http://www.netlib.com)
and a subset of randomly generated problems. For the latter we use a MATLAB
m-file (code 1) that is a linear programming test problem generator:
m = input('m: '); n = input('n: '); d = input('d: ');
pl = inline('(abs(x)+x)/2');                     % plus function pl(x) = (x)_+
A = sprand(m, n, d); A = 100*(A - 0.5*spones(A));
u = spdiags(sign(pl(rand(m, 1) - rand(m, 1))), 0, m, m)*(rand(m, 1) - rand(m, 1));
x = spdiags(ones(n, 1) - sign(pl(A'*u)), 0, n, n)*10*rand(n, 1);
b = A*x;
c = A'*u + spdiags(ones(n, 1) - sign(pl(x)), 0, n, n)*10*ones(n, 1);

Here (x, u) forms a primal-dual optimal pair by construction (x ≥ 0 vanishes where A^{T}u exceeds c, b = Ax and c - A^{T}u is nonnegative with complementarity), so the generated standard-form problem is always solvable.
To test the proposed algorithm we solved a number of linear programs produced
by the LP generator on a 3.06 GHz Pentium IV machine with 1 GB of memory running Windows
XP SP2 and compared it with Linprog in MATLAB 7.6. The starting vector used
in all of the following examples is w^{0} = 0. Results are presented in
Table 1, where m and n give the size of the matrix A and d
stands for its density. The total time of computation for each example is given
in the last column of the table. ||Ax* - b||, ||(-x*)_{+}| | and
||(A^{T}v* - c)_{+}|| show the accuracy of
the feasibility conditions of the LP problems and |b^{T}v* - c^{T}x*|
shows the accuracy of the optimality condition. Analysis of
Table 1 shows that the algorithm proposed in this paper finds
normal solutions for large-scale problems. It is also shown that for some problems
Linprog gives no solution, reporting Out Of Memory (O.O.M).
Table 2 shows that the strong Wolfe conditions are an important tool in this algorithm. For some problems, the proposed algorithm without the strong Wolfe conditions does not work well. We present a number of these problems in Table 2.

Various problems from Netlib were solved by the proposed algorithm and by Linprog in MATLAB. The comparison between the two methods is given in Table 3.
In this study, we use the penalty function for finding the least 2-norm solution to
the primal problem. Problems were solved by a fast generalized Newton method and
we investigated its finite global convergence with the strong Wolfe stepsize
regulation.
Table 1: Comparison between the proposed algorithm and the Linprog solver for various randomly generated problems of different sizes and densities

Table 2: Comparison between the proposed algorithm without and with the strong Wolfe conditions for various randomly generated problems of different sizes and densities

Table 3: Comparison between the proposed algorithm and the Linprog solver for various problems from Netlib
The method was analyzed for solvability and convergence. A subset of problems
from the Netlib collection and a subset of generated large-scale linear programs
were solved using this method. Comparison between the two methods shows that the
algorithm proposed in this study finds normal solutions for large-scale problems and
Netlib problems. It is also shown that for some problems MATLAB gives no solution,
reporting out of memory.