Research Article
 

An SQP Method for Solving the Nonlinear Minimax Problem



Zhibin Zhu
 
ABSTRACT

In this study, a new algorithm is presented to solve the nonlinear minimax problem of minimizing Mf(x) = max{fj(x), j = 1, …, m} over x ∈ Rn.


This algorithm belongs to the class of sequential quadratic programming (SQP) methods. At each iteration, the search direction d is obtained by solving one quadratic programming subproblem derived from the K-T condition of the minimax problem. If d equals zero, the corresponding iteration point x is a K-T point; otherwise, d is a descent direction. Unlike SQP algorithms for general nonlinear programming, the direction d does not induce any Maratos-like effect. A particular line search along this direction assures global convergence as well as superlinear convergence. Numerical results to date suggest the resulting algorithm is effective.


 
  How to cite this article:

Zhibin Zhu , 2004. An SQP Method for Solving the Nonlinear Minimax Problem. Journal of Applied Sciences, 4: 498-507.

DOI: 10.3923/jas.2004.498.507

URL: https://scialert.net/abstract/?doi=jas.2004.498.507

INTRODUCTION

The minimax optimization problem can be stated as

min{Mf(x) | x ∈ Rn},  where Mf(x) = max{fj(x), j ∈ I},  I = {1, 2, …, m}
(1)

where fj(j=1~m) are real-valued functions defined on Rn.

The objective function Mf(x) has discontinuous first partial derivatives at points where two or more of the functions fj(x) are equal to Mf(x), even if the fj(x) (j=1~m) have continuous first partial derivatives. Thus the well-known gradient methods cannot be applied directly to minimize Mf(x). Zang[1] proposed a smoothing-out technique for this difficulty and further excellent works on this topic and its applications were reported[2-7].
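As a tiny illustration (ours, not from the paper), take f1(x) = x and f2(x) = −x, so that Mf(x) = |x|: both fj are smooth, yet the one-sided slopes of Mf disagree at the point where f1 = f2:

```python
# Mf(x) = max(f1, f2) with f1(x) = x, f2(x) = -x, i.e. Mf(x) = |x|.
# Both f1 and f2 are smooth, but Mf has a kink where f1 = f2 (at x = 0).
def Mf(x):
    return max(x, -x)

h = 1e-6
right_slope = (Mf(0.0 + h) - Mf(0.0)) / h   # forward difference at 0:  +1
left_slope = (Mf(0.0) - Mf(0.0 - h)) / h    # backward difference at 0: -1
print(right_slope, left_slope)              # the two one-sided slopes differ
```

The mismatch of the one-sided slopes is exactly why a plain gradient method cannot be applied to Mf directly.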

The minimax problem (1) is equivalent to the following nonlinear programming:

min{z | (x, z) ∈ Rn × R, fj(x) − z ≤ 0, j ∈ I}
(2)
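For affine fj, (2) is just a linear program. The following sketch (a hypothetical two-piece example using scipy, not a problem from the paper) solves min over x of max(x, −x) in the form (2):

```python
from scipy.optimize import linprog

# Epigraph form (2) of min_x max(f1, f2) with f1(x) = x, f2(x) = -x:
#   minimize z  subject to  x - z <= 0  and  -x - z <= 0.
# Decision variables are ordered (x, z); the objective picks out z.
res = linprog(c=[0.0, 1.0],
              A_ub=[[1.0, -1.0],    #  x - z <= 0
                    [-1.0, -1.0]],  # -x - z <= 0
              b_ub=[0.0, 0.0],
              bounds=[(None, None), (None, None)])  # both variables free
print(res.x)  # optimal (x, z) = (0, 0): min_x |x| = 0 is attained at x = 0
```

Note that `bounds` must be passed explicitly, since `linprog` otherwise restricts variables to be nonnegative.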

According to (2), the K-T condition of (1) is obtained as follows:

Σj∈I λj∇fj(x) = 0,  Σj∈I λj = 1,  λj ≥ 0,  λj(fj(x) − Mf(x)) = 0, j ∈ I

(3)

where:

λ = (λ1, λ2, …, λm) is the corresponding multiplier vector.

The active set is denoted as follows:

I(x) = {j ∈ I | fj(x) = Mf(x)}
(4)

A lot of approaches have been proposed for solving problem (1); most of them transform the minimax problem into the nonlinear programming problem (2) and solve it by well-established methods. Vardi[8] uses slack variables to handle the inequality constraints of (2) and then solves only an equality-constrained minimization problem, taking advantage of a trust-region strategy.

Charalambous and Conn[9] propose an algorithm that solves the minimax problem directly, in which two distinct search directions must be computed. The first, the horizontal direction, attempts to reduce Mf(x) whilst, at the same time, keeping approximately equal those functions whose values are close to Mf(x). The second, the vertical direction, attempts to decrease, by means of linearization, the error to within which those functions are equal to Mf(x). Then, under the assumptions that the minimax solution is unique, that fj(j=1~m) are convex and twice differentiable and that the strict complementarity condition is satisfied, global convergence is proven. However, the computational effort is large and the rate of convergence is at most linear.

Due to their superlinear convergence rate, sequential quadratic programming (SQP) methods are considered among the most effective methods for solving nonlinear programming problems. Zhou and Tits[10] propose a new algorithm that exploits the idea of SQP methods to solve problem (1). There, in order to avoid the Maratos effect, a nonmonotone line search is taken along a direction d which is obtained by solving a QP subproblem. However, it is observed that, with such a line search and direction, acceptance of the full step of one may not be ensured. For this reason, it is necessary to perform a nonmonotone arc search based on a correction which is obtained by solving another QP subproblem. Moreover, its convergence rate is two-step superlinear.

An SQP-type algorithm is proposed by Xue[11] to solve problem (1). However, in order to obtain a one-step superlinear convergence rate, two additional assumptions are made there which are too strong: firstly, that the entire sequence generated by the algorithm converges to the K-T point of (1); secondly, that the stepsize always equals one after finitely many iterations.

In this paper, a new SQP algorithm is proposed to solve problem (1) directly according to the K-T condition (3). This algorithm overcomes the shortcomings just pointed out. It performs a monotone line search along a direction which is obtained by solving only one QP subproblem. With such a particular line search, because of the intrinsic properties of the minimax problem (1) in contrast with general nonlinear programming, no Maratos-like effect arises, unlike[10] and the search direction needs no further correction. Global convergence is obtained without the above-mentioned strong assumptions of[9]. Under suitable conditions which are weaker than those in[11] a one-step superlinear convergence rate is proven.

Description of algorithm: The following algorithm is proposed for solving problem (1).

Algorithm A
Step 1-Initialization and data: Choose x0 ∈ Rn, a symmetric positive definite matrix B0 ∈ Rnxn and a parameter α ∈ (0, ½). Set k = 0.

Step 2-Computation of the main search direction dk: Compute (zk, dk) by solving the following quadratic programming subproblem at xk:

min z + ½dTBkd  s.t.  fj(xk) − Mf(xk) + ∇fj(xk)Td ≤ z, j ∈ I
(5)

Let λk be the corresponding multiplier vector. If (zk, dk) = (0, 0), STOP;

Step 3-The line search: Compute tk, the first number t of the sequence {1, ½, ¼, …} satisfying

Mf(xk + tdk) ≤ Mf(xk) + αtzk
(6)

Step 4-Updates: Compute a new symmetric positive definite matrix Bk+1. Like Vardi[8], we use Powell's modification[12] of the BFGS update:

Bk+1 = Bk − (BkskskTBk)/(skTBksk) + (ŷkŷkT)/(skTŷk)

(7)

where:

sk = tkdk,  yk = ∇xL(xk+1, λk) − ∇xL(xk, λk),  ŷk = θkyk + (1 − θk)Bksk

(8)

θk = 1 if skTyk ≥ 0.2skTBksk and θk = 0.8skTBksk/(skTBksk − skTyk) otherwise, where L(x, λ) = Σj∈I λjfj(x).

Let xk+1 = xk + tkdk, k = k+1. Go back to Step 2.
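A minimal sketch of Algorithm A follows (illustrative only: the QP subproblem (5) is handed to scipy's SLSQP solver and Bk is frozen at the identity instead of being updated by (7)-(8), so the sketch mimics the global behaviour but not the superlinear rate; the function and tolerance names are ours):

```python
import numpy as np
from scipy.optimize import minimize

def algorithm_a(fs, grads, x0, alpha=0.25, tol=1e-6, max_iter=100):
    """Sketch of Algorithm A with B_k = I (no quasi-Newton update)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for _ in range(max_iter):
        fx = np.array([f(x) for f in fs])
        Mf = fx.max()
        G = np.array([g(x) for g in grads])
        # Step 2: QP subproblem (5) in the variables v = (z, d) with B_k = I:
        #   min z + 0.5 d^T d  s.t.  f_j(x) - Mf(x) + grad f_j(x)^T d <= z.
        cons = [{'type': 'ineq',
                 'fun': lambda v, j=j: v[0] - (fx[j] - Mf + G[j] @ v[1:])}
                for j in range(len(fs))]
        sol = minimize(lambda v: v[0] + 0.5 * v[1:] @ v[1:],
                       np.zeros(n + 1), constraints=cons, method='SLSQP')
        z, d = sol.x[0], sol.x[1:]
        # (z_k, d_k) ~ (0, 0): a K-T point has been reached (Lemma 2/3).
        if z >= 0 or np.linalg.norm(d) < tol:
            return x
        # Step 3: line search (6) over t in {1, 1/2, 1/4, ...}
        t = 1.0
        while max(f(x + t * d) for f in fs) > Mf + alpha * t * z:
            t *= 0.5
        x = x + t * d                 # Step 4, with the B_k update omitted
    return x
```

For instance, with f1(x) = (x − 1)² and f2(x) = (x + 1)² in one variable, the sketch drives x toward 0, where the two functions balance.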

Global convergence of algorithm: It is first shown that Algorithm A is well defined. The following general assumptions hold throughout the paper.

A1: fj(j=1~m) are continuously differentiable.

A2: For all x ∈ Rn, the vectors {(∇fj(x)T, −1)T, j ∈ I(x)} are linearly independent.

A3: There exist two constants 0<a≤b, such that a||d||2≤dTBkd≤b||d||2, for all k, for all d∈Rn.

Lemma 1: The solution (zk,dk) of the quadratic subproblem (5) is unique and zk+½(dk)T Bkdk≤0.

Proof: Since (0, 0) is a feasible point of (5) with objective value 0 and Bk is positive definite, the solution exists and is unique, and its optimal value is nonpositive; the claim holds.

Lemma 2: Let (zk,dk) be the solution of the QP (5) at xk, the following are equivalent:

(i) (zk,dk)=(0,0);
(ii) zk=0;
(iii) dk=0;
(iv) zk+½(dk)T Bkdk=0.

Proof: (i)⇒(ii): clear. (ii)⇒(iii): Since zk = 0, then dk = 0 by Lemma 1. (iii)⇒(iv): Since I(xk) = {j | fj(xk) = Mf(xk), j ∈ I} is nonempty, let j ∈ I(xk); then (5) implies that zk ≥ 0. So zk + ½(dk)TBkdk ≥ 0, thereby zk + ½(dk)TBkdk = 0. (iv)⇒(i): Since zk + ½(dk)TBkdk = 0, it is clear that (0, 0) is the unique solution of (5) by Lemma 1, that is to say, (zk, dk) = (0, 0).

Lemma 3: If (zk, dk) = (0, 0), then xk is a K-T point of (1). If xk is not a K-T point of (1), then zk < 0, dk ≠ 0 and

∇fj(xk)Tdk ≤ zk < 0, ∀j ∈ I(xk),

i.e., dk is a descent direction of Mf(x) at xk.

Proof: From (5), we have

Bkdk + Σj∈I λkj∇fj(xk) = 0,  Σj∈I λkj = 1,  λkj ≥ 0,
λkj(fj(xk) − Mf(xk) + ∇fj(xk)Tdk − zk) = 0, j ∈ I
(9)

If (zk,dk)=(0,0), then

Σj∈I λkj∇fj(xk) = 0,  Σj∈I λkj = 1,  λkj ≥ 0,  λkj(fj(xk) − Mf(xk)) = 0, j ∈ I
(10)

It shows that xk is a K-T point.
If xk is not a K-T point, then (zk,dk)≠(0,0). So,

zk+½(dk)T Bkdk<0, zk<-½(dk)T Bkdk≤0,

and

∇fj(xk)Tdk ≤ zk − (fj(xk) − Mf(xk)) = zk < 0, ∀j ∈ I(xk).

The claim holds.

Lemma 4: Algorithm A is well defined.

Proof: We only need to show that the line search yields a step tk = (½)i for some finite i = i(k). Denote

aj(t) = fj(xk + tdk) − Mf(xk) − αtzk, j ∈ I.

For j ∈ I\I(xk), since fj(xk) < Mf(xk), continuity of fj implies that there exists some t̄j > 0, such that aj(t) ≤ 0 for all t ∈ (0, t̄j].

For j ∈ I(xk), from (5), the fact α < 1 implies that there exists some t̄j > 0, such that aj(t) ≤ 0 for all t ∈ (0, t̄j].

Define t̄ = min{t̄j, j ∈ I}. It is clear that the line search condition (6) is satisfied for all t in (0, t̄].

In the sequel, we prove that any accumulation point x* of {xk} generated by the algorithm must be a K-T point of problem (1). Since there are only finitely many choices for the sets I(xk) ⊆ I, we might as well assume that there exists a subsequence K, such that

I(xk) ≡ I*, ∀k ∈ K

(11)

where I* is a constant set.

Lemma 5: If xk⇒x*, Bk⇒B*, k∈K, then dk⇒0, k∈K.

Proof: First of all, in view of (6) and Lemma 3, it is evident that {Mf(xk)} is monotonically decreasing. Hence, considering {xk}k∈K ⇒ x* and the continuity of Mf(x), it holds that

Mf(xk) ⇒ Mf(x*), k ⇒ ∞

(12)

Suppose by contradiction that dk ↛ 0, k ∈ K; passing to a further subsequence we may assume (zk, dk) ⇒ (z*, d*), k ∈ K, with d* ≠ 0, then z* < 0. Since

fj(xk) − Mf(xk) + ∇fj(xk)Tdk ≤ zk, j ∈ I,

let k ∈ K, k ⇒ ∞, then

fj(x*) − Mf(x*) + ∇fj(x*)Td* ≤ z*, j ∈ I.

The corresponding QP problem (5) at x* is

min z + ½dTB*d  s.t.  fj(x*) − Mf(x*) + ∇fj(x*)Td ≤ z, j ∈ I
(13)

It is clear that (z*, d*) is a feasible solution of (13) and B* is positive definite. So, we might as well let (z̄, d̄) be the unique solution of (13). Since

z̄ + ½d̄TB*d̄ ≤ z* + ½(d*)TB*d*,

as k ∈ K, k ⇒ ∞, we obtain that

zk + ½(dk)TBkdk ⇒ z* + ½(d*)TB*d*.

So, it can be seen that (z̄, d̄) = (z*, d*), thereby,

z* + ½(d*)TB*d* < 0,

and for k ∈ K, k large enough, we have

zk ≤ zk + ½(dk)TBkdk ≤ ½(z* + ½(d*)TB*d*) < 0

(14)

From (14), similar to Lemma 4, we can conclude that the step-size tk obtained by the line search is bounded away from zero on K, i.e.,

tk ≥ t* = inf{tk, k ∈ K} > 0, k ∈ K.

So, from (6), (12), we get

0 = lim k∈K (Mf(xk+1) − Mf(xk)) ≤ αt*z* < 0

(15)

This is a contradiction, which shows that dk ⇒ 0, k ∈ K.

According to Lemma 3, Lemma 5 and the fact that (z*, d*) is the solution of (13), we can get the following convergence theorem.

Theorem 1: Algorithm A either stops at a K-T point of (1) in finitely many iterations, or generates an infinite sequence {xk} all of whose accumulation points are K-T points of (1).

Rate of convergence: Now we strengthen the regularity assumptions on the functions involved. Assumption A1 is replaced by:

A1: The functions fj (j=1~m) are twice continuously differentiable.

We also make the following additional assumptions:

A4: The sequence generated by the algorithm possesses an accumulation point x* (in view of Theorem 1, a K-T point).

A5: Bk ⇒ B*, k ⇒ ∞.

A6: The matrix [equation image] is nonsingular and it holds that

[equation image]

where u* = (u1*, u2*, …, um*) is the corresponding K-T multiplier vector of x* and

[equation image]

(16)

A7: The strict complementary slackness condition is satisfied at the K-T point pair (x*, u*), i.e., uj* > 0 for all j ∈ I(x*).

Theorem 2: The Kuhn-Tucker point x* is isolated.

Proof: The proof is similar to that of[13] but with some technical differences, since assumption A2 about the linear independence constraint qualification differs from that of nonlinear programming; the details are omitted.

Lemma 6: The entire sequence {xk} converges to the K-T point x*, i.e., xk⇒x*, k⇒∞.

Proof: From (5) and (6), we have that

Mf(xk+1) − Mf(xk) ≤ αtkzk ≤ 0

(17)

So, from (12), it holds that

tkzk ⇒ 0, k ⇒ ∞.

Thereby, according to assumptions A6, A7, Theorem 2 and a proposition of Panier and Tits[14] we have xk ⇒ x*, k ⇒ ∞.

Lemma 7: For k ⇒ ∞, we have

dk ⇒ 0, λk ⇒ u*, Lk ≡ I(x*), zk = O(||dk||)

where:

Lk = {j ∈ I | fj(xk) − Mf(xk) + ∇fj(xk)Tdk − zk = 0}

(18)

Proof: According to Lemma 5 and xk ⇒ x*, Bk ⇒ B*, k ⇒ ∞, it holds that

dk ⇒ 0, zk ⇒ 0, k ⇒ ∞.

Since (x*, u*) is a K-T point pair of (1), we have

Σj∈I uj*∇fj(x*) = 0,  Σj∈I uj* = 1,  uj* ≥ 0,  uj*(fj(x*) − Mf(x*)) = 0, j ∈ I.

Denote

[equation image]

From A2, it is clear that

[equation image]

and

[equation image]

(19)

At the same time, denote

[equation image]

From (9), we have

[equation image]

thereby,

[equation image]

(20)

Since

[equation image]

it is obvious, for k large enough, that

[equation image]

So,

[equation image]

In addition, according to dk⇒0, zk⇒0 and (18), it is clear that Lk⊆I(x*).

Conversely, for j ∈ I(x*), assumption A7 implies that uj* > 0. So, λkj > 0 for k large enough, thereby, it holds that

fj(xk) − Mf(xk) + ∇fj(xk)Tdk − zk = 0, i.e., j ∈ Lk.

So, Lk ≡ I(x*).

Thereby, from I(xk) ⊆ I* ≡ Lk and A2, it is clear that zk = O(||dk||).

In order to obtain superlinear convergence, we make the following assumption:

A8: ∇²fj(x) (j=1~m) are Lipschitz continuous on some ball B(x*, ε) of radius ε > 0 about x*, i.e., there exist constants vj > 0, such that

||∇²fj(x) − ∇²fj(y)|| ≤ vj||x − y||, ∀x, y ∈ B(x*, ε).

Lemma 8: Under assumptions A1-A8, ||xk+dk-x*||=o(||xk-x*||) if and only if the following condition holds:

A9: The sequence of matrices {Bk} satisfies

[equation image]

Proof: The proof is similar to that of the corresponding theorem of Xue[11].

Lemma 9: If assumptions A1-A9 are true, then, for k large enough, tk≡1.

Proof: Firstly, we prove, for k large enough, that

fj(xk + dk) − Mf(xk) − αzk ≤ 0, ∀j ∈ I
(21)

Let

sj=fj(xk+dk)-Mf (xk)-αzk=fj(xk)-Mf (xk)+∇fj(xk)Tdk-αzk+o(||dk||), j∈I.

Denote

I(xk + dk) = {j ∈ I | fj(xk + dk) = Mf(xk + dk)};

it is obvious, for k large enough, that I(xk+dk)⊆I*.

For j ∈ I\I(x*), the facts fj(x*) < Mf(x*) and dk ⇒ 0, zk = O(||dk||) imply that sj ≤ 0 (for k large enough).
For j ∈ I*\I(xk + dk), from (5), the facts α < 1, dk ⇒ 0, zk = O(||dk||) imply that sj ≤ 0 (for k large enough).
From the above analysis, in order to prove that (21) is true, it suffices to prove, for k large enough, that

fj(xk + dk) = Mf(xk + dk) ≤ Mf(xk) + αzk, ∀j ∈ I(xk + dk),

i.e.,

[equation image]

While, it can always be proven that

[equation image]

Thereby, it only needs to be proven that

[equation image]

i.e.,

[equation image]

From (9) and Lk≡I(x*), we have

[equation image]
(22)

Denote

[equation image]

From (22) and A8, we get

[equation image]

Since α ∈ (0, ½), it holds that s ≤ 0. So, (21) is true for k large enough. Thereby, we have, for k large enough, that

Mf(xk+dk)≤Mf(xk)+αzk

i.e., tk ≡ 1. The claim holds.

From Lemma 8 and Lemma 9, we can get the following theorem:

Theorem 3: Under all the above-mentioned assumptions, the algorithm is superlinearly convergent, i.e., ||xk+1 − x*|| = o(||xk − x*||).

Numerical experiments: Numerical experiments based on the algorithm were carried out; the results show that the algorithm is effective.

During the numerical experiments, α = 0.25 and B0 = I, the n×n identity matrix; Bk is updated by formulae (7) and (8). In the following tables, IP is the initial point, NIT is the number of iterations of the algorithm and AS is the approximate solution.

Problem 1: ([9]). Minimize the maximum of the following three functions:

f1(x) = x1² + x2⁴,  f2(x) = (2 − x1)² + (2 − x2)²,  f3(x) = 2e^(x2 − x1)
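As a cross-check (ours, not the paper's implementation), Problem 1 can also be solved by handing the equivalent program (2) directly to a general NLP solver; the sketch below does this with scipy's SLSQP, assuming the standard Charalambous-Conn test functions f1 = x1² + x2⁴, f2 = (2 − x1)² + (2 − x2)², f3 = 2e^(x2 − x1):

```python
import numpy as np
from scipy.optimize import minimize

# Problem 1 in the epigraph form (2): variables v = (x1, x2, z), minimize z
# subject to f_j(x) <= z for the three Charalambous-Conn functions.
fs = [lambda x: x[0]**2 + x[1]**4,
      lambda x: (2 - x[0])**2 + (2 - x[1])**2,
      lambda x: 2 * np.exp(x[1] - x[0])]

res = minimize(lambda v: v[2],
               x0=np.array([1.0, 0.5, 10.0]),   # z0 large enough to be feasible
               constraints=[{'type': 'ineq',
                             'fun': lambda v, f=f: v[2] - f(v[:2])}
                            for f in fs],
               method='SLSQP')
x_opt, mf_opt = res.x[:2], res.x[2]
print(x_opt, mf_opt)   # minimax value approximately 1.952 near x = (1.14, 0.90)
```

Since each fj here is convex, the max is convex and any K-T point is the global minimax solution, which makes this reformulation a convenient reference when checking Table 1.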

Problem 2: ([11]). Minimize the maximum of the following six functions:

[equation image]

Problem 3: Rosen-Suzuki Problem ([9,11]). Minimize the maximum of the following functions:

[equation image]

Problem 4: Wong problem 1 ([9,11]). Minimize the maximum of the following functions:

[equation image]

Problem 5: Wong problem 2 ([11]). Minimize the maximum of the following functions:

[equation image]

Detailed information about the solutions to the above-mentioned problems is listed in Tables 1-5:

Table 1: The information of the solution to Problem 1
[table image]

Table 2: The information of the solution to Problem 2
[table image]

Table 3: The information of the solution to Problem 3
[table image]

Table 4: The information of the solution to Problem 4
[table image]

Table 5: The information of the solution to Problem 5
[table image]

ACKNOWLEDGMENT

The author would like to thank the referee for his helpful comments. This work was supported in part by the NNSF (103G1003) and GIETSF (D20350) of China.

REFERENCES
1:  Zang, I., 1980. A smoothing-out technique for min-max optimization. Math. Program., 19: 61-77.

2:  Bandler, J.W. and C. Charalambous, 1974. Nonlinear programming using minimax techniques. JOTA., 13: 607-619.

3:  Charalambous, C. and J.W. Bandler, 1976. Nonlinear minimax optimization as a sequence of least pth optimization with finite values of p. Int. J. Syst. Sci., 7: 377-391.

4:  Osborne, M.R. and G.A.Watson, 1969. An algorithm for minimax approximation in the nonlinear case. Comput. J., 12: 63-68.

5:  Bandler, J.W., T.V. Srinivasan and C. Charalambous, 1972. Minimax optimization of networks by grazor search. IEEE Trans. Microwave Theory Tech., 20: 596-604.

6:  Barrientos, O., 1998. A global regularization method for solving the finite min-max problem. Comput. Optimization Applic., 11: 277-295.

7:  Polak, E., D.Q. Mayne and J.E. Higgins, 1991. Superlinearly convergent algorithm for min-max problems. J. Optimiz. Theory Appl., 69: 407-439.

8:  Vardi, A., 1992. New minimax algorithm. JOTA., 75: 613-634.

9:  Charalambous, C. and A.R. Conn, 1978. An efficient method to solve the minimax problem directly. SIAM J. Numerical Anal., 15: 162-187.

10:  Zhou, J.L. and A.L. Tits, 1993. Nonmonotone line search for minimax problems. J. Optimiz. Theory Appl., 76: 455-476.

11:  Xue, Y., 2002. The sequential quadratic programming method for solving minimax problem. J. Sys. Sci. Math. Sci., 22: 355-364.

12:  Robinson, S.M., 1972. A quadratically convergent algorithm for general nonlinear programming problems. Math. Program., 3: 145-156.

13:  Panier, E.R. and A.L. Tits, 1987. A superlinearly convergent feasible method for the solution of inequality constrained optimization problems. SIAM J. Control Optim., 25: 934-950.

14:  Powell, M.J.D., 1978. The Convergence of Variable Metric Methods for Nonlinearly Constrained Optimization Calculations. In: Nonlinear Programming 3, Meyer, R.R. and S.M. Robinson (Eds.). Academic Press, New York.
