
Journal of Software Engineering

Year: 2014 | Volume: 8 | Issue: 3 | Page No.: 169-183
DOI: 10.3923/jse.2014.169.183
Particle Swarm Optimization-based Augmented Lagrangian Algorithm for Constrained Optimization Problems
Xuesong He, Changyu Liu, Hongkui Dong, Jiaoteng Pan and Qiurong Yan

Abstract: This study proposes a Particle Swarm Optimization-based Augmented Lagrangian (PSOAL) algorithm which combines the particle swarm optimization technique with a non-stationary penalty function method to solve constrained optimization and engineering design problems. A set of novel strategies is developed to adaptively update critical parameters based on particle feasibility and a point-based local search procedure is embedded within the algorithm framework to improve the convergence property of the proposed algorithm. Thirteen well-known constrained benchmark problems are solved and the obtained results are compared with those of other state-of-the-art algorithms. The results demonstrate that the proposed PSOAL achieves higher accuracy than the other considered algorithms. In addition, as an added benefit, PSOAL can easily find the Lagrange multipliers, which are of great value for sensitivity analysis in practice but are rarely considered in most intelligent algorithms designed for constrained problems.


How to cite this article
Xuesong He, Changyu Liu, Hongkui Dong, Jiaoteng Pan and Qiurong Yan, 2014. Particle Swarm Optimization-based Augmented Lagrangian Algorithm for Constrained Optimization Problems. Journal of Software Engineering, 8: 169-183.

Keywords: Augmented Lagrangian method, Particle swarm optimization, pressure vessel design, Lagrange multiplier, robust PID tuning and constrained optimization problems

INTRODUCTION

Most real-world engineering problems involve various constraints and are therefore classified as Constrained Optimization Problems (COPs). Formally, a general COP can be stated as Eq. 1:

minimize   f(x)
subject to h_j(x) = 0,  j = 1, 2,..., J
           g_k(x) ≤ 0,  k = 1, 2,..., K
           x ∈ Ω = {x | x_d^l ≤ x_d ≤ x_d^u, d = 1, 2,..., D}        (1)

where, x = (x1, x2,..., xD)T is the solution vector; f(x) is the objective function; hj(x) and gk(x) are the j-th equality constraint and the k-th inequality constraint, respectively; J and K are the numbers of equality and inequality constraints, respectively; Ω is the search space; x_d^l and x_d^u are the lower and upper bounds of x_d, respectively.

Among the many algorithms developed for solving engineering problems, Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995) has gained widespread use as one of the simplest algorithms. As a stochastic algorithm, PSO can deal with non-continuous and non-differentiable problems using only the value of the objective function. Meanwhile, its global search capability allows PSO to search for the optimal solution regardless of the initial values. However, COPs cannot be solved directly by PSO because the optimal solution must lie in the feasible region defined by the constraints. Since the feasible region usually occupies a very small part of the search space and the offspring of a feasible swarm are probably no longer feasible, PSO cannot easily extract useful information about feasibility from the swarm. This leads to an ineffective search for the optimal feasible solution.

In order to overcome the above defect, many improved PSO algorithms have been developed to solve COPs. Pulido and Coello (2004) proposed a Constraint-Handling Mechanism for Particle Swarm Optimization (CHMPSO) which uses a simple criterion, based on the closeness of a particle to the feasible region, for selecting a leader. By combining PSO with feasibility and dominance rules, Zavala et al. (2005) introduced a Particle Evolutionary Swarm Optimization (PESO) and applied it to engineering design problems. The Self-Adaptive Velocity Particle Swarm Optimization (SAVPSO) proposed by Lu and Chen (2008) uses the impact of constraints to improve the particles' ability to handle constraints. In a recent publication, Sun et al. (2011) proposed an Improved Vector Particle Swarm Optimization (IVPSO) based on the constraint-preserving method.

Although experimental results have illustrated the attractiveness of these improved PSO algorithms for handling several types of COPs, they reveal no further information about the constraints beyond the optimal feasible solution. In this sense, the Augmented Lagrangian (AL) method (Powell, 1967; Fletcher, 1975) shows more potential. As a non-stationary penalty function method, AL transforms a COP into a series of unconstrained penalty functions through a cleverly designed AL function. Besides the optimal feasible solution, the Lagrange multiplier associated with each constraint can also be found without any additional effort. It provides valuable information about the influence and sensitivity of each constraint on the optimal feasible solution (Deb and Srivastava, 2012).

Based on these advantages, a Particle Swarm Optimization-based Augmented Lagrangian (PSOAL) algorithm is proposed in this study. The framework of the classical AL is rearranged to obtain an easily accessible structure for PSO. As a result, PSO can be employed to solve the unconstrained penalty functions in a seamless manner, which in turn gives the proposed PSOAL algorithm the capability to handle real-world engineering problems. A set of novel strategies is developed to adaptively update critical parameters based on particle feasibility and a point-based local search procedure is embedded within the algorithm framework to improve the convergence property of PSOAL. The performance of PSOAL is compared, on a standard benchmark suite for COPs, against the four improved PSO algorithms mentioned above. In addition, the Lagrange multipliers, as a by-product of PSOAL, are computed and reported for a test example and two engineering design problems.

PROPOSED PSOAL ALGORITHM

As shown in Fig. 1, the framework of PSOAL consists of two nested iterations: an inner iteration in which a series of unconstrained penalty functions are optimized by PSO and an outer iteration in which the AL parameters are updated based on the optimal solution of the penalty function.

Fig. 1: Framework of PSOAL

AL parameters update in outer iteration: We employ the Powell-Hestenes-Rockafellar (Powell, 1967; Hestenes, 1969; Rockafellar, 1973) AL function to transform the problem (1) into an unconstrained penalty function as follows in Eq. 2:

L_ρ(x, λ, μ) = f(x) + (ρ/2) Σ_{j=1}^{J} [h_j(x) + λ_j/ρ]² + (ρ/2) Σ_{k=1}^{K} [(g_k(x) + μ_k/ρ)+]²        (2)

where, we denote (g(x))+ = max {0, g(x)}; ρ∈ℝ+ is the penalty factor; the vectors λ∈ℝ^J and μ∈ℝ^K+ are the Lagrange multipliers.
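The AL function of Eq. 2 can be evaluated directly from the problem callbacks. A minimal Python sketch, where the callbacks f, h and g and their array-returning signatures are our own illustrative assumptions, not part of the original formulation:

```python
import numpy as np

def al_penalty(x, lam, mu, rho, f, h, g):
    """Powell-Hestenes-Rockafellar augmented Lagrangian (cf. Eq. 2).

    f(x) -> scalar objective; h(x) -> array of equality constraints;
    g(x) -> array of inequality constraints (g <= 0 is feasible).
    """
    hx, gx = np.asarray(h(x)), np.asarray(g(x))
    eq_term = 0.5 * rho * np.sum((hx + lam / rho) ** 2)
    ineq_term = 0.5 * rho * np.sum(np.maximum(0.0, gx + mu / rho) ** 2)
    return f(x) + eq_term + ineq_term
```

Note that the inequality term uses the (·)+ operator defined above, so an inactive constraint with zero multiplier contributes nothing to the penalty.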

In order to approximate the optimal feasible solution x*, we have to minimize a series of penalty functions in which the m-th penalty function is defined as in Eq. 3:

L_{ρ_m}(x, λ^m, μ^m) = f(x) + (ρ_m/2) Σ_{j=1}^{J} [h_j(x) + λ_j^m/ρ_m]² + (ρ_m/2) Σ_{k=1}^{K} [(g_k(x) + μ_k^m/ρ_m)+]²        (3)

where, ρm, λm and μm are continuously updated based on the optimal solution xm,* of the m-th penalty function.

We use the first-order Lagrange multiplier estimate to update the Lagrange multipliers (Eq. 4-5):

λ_j^{m+1} = λ_j^m + ρ_m h_j(x^{m,*}),  j = 1, 2,..., J        (4)

μ_k^{m+1} = max {0, μ_k^m + ρ_m g_k(x^{m,*})},  k = 1, 2,..., K        (5)
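In code, the two first-order updates are one line each. A sketch assuming hx and gx are NumPy arrays holding the constraint values at the inner-loop optimum x^{m,*}:

```python
import numpy as np

def update_multipliers(lam, mu, rho, hx, gx):
    """First-order Lagrange multiplier estimates (cf. Eq. 4-5).

    hx = h(x_m*), gx = g(x_m*) evaluated at the inner-loop optimum.
    """
    lam_new = lam + rho * hx                  # equality multipliers (Eq. 4)
    mu_new = np.maximum(0.0, mu + rho * gx)   # inequality multipliers (Eq. 5)
    return lam_new, mu_new
```

The max with zero keeps the inequality multipliers non-negative, so a strictly satisfied constraint drives its multiplier toward zero.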

Considering that the penalty factor ρ maintains the balance between the objective function f(x) and the constraint violation represented by the second term of Eq. 2, one must be more cautious when updating ρ. From Eq. 2, we can easily observe that if ρ decreases, f(x) receives a relatively larger weight, which is conducive to finding the optimal solution; if ρ increases, the constraint violation is given a larger weight, which is conducive to finding a feasible solution. This means that the optimal feasible solution x* cannot be obtained unless a fine balance is achieved.

For this reason, we develop a monotonic strategy for ρ as follows (Eq. 6):

ρ_{m+1} = ρ_m,      if V_m ≤ r·V_{m-1} or V_m ≤ ε_fea
ρ_{m+1} = γ·ρ_m,    otherwise        (6)

where, r∈(0, 1) is the feasibility improvement rate; γ∈(1, +∞) is the penalty increase rate; ε_fea is the desired feasibility precision; V_m denotes the constraint violation level defined by Eq. 7:

V_m = max { max_j |h_j(x^{m,*})|, max_k (g_k(x^{m,*}))+ }        (7)
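A sketch of the violation measure and the monotonic penalty update. The max-norm in violation_level and the exact branching condition in update_rho are our reading of Eq. 6-7 and should be treated as assumptions rather than the authors' exact rules:

```python
import numpy as np

def violation_level(hx, gx):
    """Constraint violation level V_m: worst-case violation over all
    equality and inequality constraints (one common definition of Eq. 7)."""
    hx, gx = np.asarray(hx), np.asarray(gx)
    v_eq = np.max(np.abs(hx)) if len(hx) else 0.0
    v_ineq = np.max(np.maximum(0.0, gx)) if len(gx) else 0.0
    return max(v_eq, v_ineq)

def update_rho(rho, v_now, v_prev, r=0.5, gamma=10.0, eps_fea=1e-8):
    """Monotonic update in the spirit of Eq. 6: keep rho if the violation
    improved by at least the rate r (or is already feasible), otherwise
    increase it by the factor gamma."""
    if v_now <= eps_fea or v_now <= r * v_prev:
        return rho
    return gamma * rho
```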

Normally, the objective function f(x) decreases with decreasing Lρ(x, λ, μ) in the feasible region. However, if Lρ(x, λ, μ) decreases while f(x) increases, it implies that an excessively large ρ has destroyed the balance and the behavior of Lρ(x, λ, μ) is completely dominated by the constraint violation.

To avoid this case, a reset strategy is also developed (Eq. 8), aiming to restore a proper balance:

(8)

where, by definition (Eq. 9):

(9)

Figure 2 illustrates the above update strategy, under which ρ increases monotonically until the reset strategy is activated.

Penalty function optimization in inner iteration: In the inner iteration, the penalty function must be solved accurately. For simplicity, PSO is employed, based on the fact that it needs only two key lines of code and three control parameters.

We use the vectors x_i^n and v_i^n to denote, respectively, the position and velocity of particle i at inner iteration n; they are updated as Eq. 10-11:

v_i^{n+1} = w·v_i^n + c1·r1·(x_i^{best,n} − x_i^n) + c2·r2·(x^{gbest,n} − x_i^n)        (10)

x_i^{n+1} = x_i^n + v_i^{n+1}        (11)

where, r1 and r2 are random numbers uniformly distributed in (0, 1); w is the inertia weight; c1 and c2 are the cognitive and social parameters, respectively; x_i^{best,n} is the best position previously obtained by particle i after n iterations, defined by Eq. 12:

Fig. 2: Update strategy for ρ

x_i^{best,n} = argmin_{x ∈ {x_i^1,..., x_i^n}} L_ρ(x, λ, μ)        (12)

and x^{gbest,n} is the global best position found in the entire swarm up to iteration n, defined by Eq. 13:

x^{gbest,n} = argmin_{i = 1,..., P} L_ρ(x_i^{best,n}, λ, μ)        (13)
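One inner-iteration step of Eq. 10-11 can be applied to the whole swarm at once. In this sketch the parameter values are illustrative placeholders, not the settings of Table 2:

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One PSO velocity/position update (cf. Eq. 10-11) for a swarm of
    shape (P, D). pbest is per-particle best, gbest the swarm best."""
    r1 = rng.random(x.shape)   # fresh uniform(0, 1) draws per component
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v_new, v_new
```

A fully converged swarm (all particles at pbest = gbest with zero velocity) is a fixed point of this update, which is easy to verify directly.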

Realization of PSOAL: From the above discussions, the PSOAL algorithm is described as follows:

Step 1: Initialize AL parameters: λ^1∈ℝ^J, μ^1∈ℝ^K+, x_i^0∈Ω, i = 1, 2,…, P. Set ρ_1 based on Eq. 8. Set m = 1
Step 2: Begin inner iteration. Build the unconstrained penalty function L_{ρ_m}(x, λ^m, μ^m). Set n = 1
Step 2.1: Update x_i^n and v_i^n based on Eq. 10 and 11
Step 2.2: Compute x_i^{best,n} and x^{gbest,n} based on Eq. 12 and 13
Step 2.3: If n≤N, set n = n+1 and go to step 2.1; else go to step 3
Step 3: End inner iteration. Set x^{m,*} = x^{gbest,N}
Step 4: If V_m≤ε_fea, employ a local search procedure and update x^{m,*} = x*_local; else go to step 5
Step 5: Update λ^{m+1} and μ^{m+1} based on Eq. 4 and 5 using x^{m,*}
        Update ρ_{m+1} based on Eq. 6 and 8 using x^{m,*}
Step 6: Check the stopping criterion. If the criterion is satisfied, stop the algorithm and output x* = x^{m,*}; else set m = m+1 and go to step 2
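The steps above can be condensed into a self-contained sketch on a toy equality-constrained problem: min (x1−1)² + (x2−2)² subject to x1 + x2 = 2, whose optimum is x* = (0.5, 1.5) with Lagrange multiplier λ* = 1. The PSO settings, the penalty constants and the exact form of the ρ update here are illustrative assumptions, not the authors' Table 2 values:

```python
import numpy as np

rng = np.random.default_rng(1)

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
h = lambda x: x[0] + x[1] - 2.0                  # single equality constraint

def al(x, lam, rho):
    # PHR augmented Lagrangian (cf. Eq. 2-3), equality constraint only
    return f(x) + 0.5 * rho * (h(x) + lam / rho) ** 2

def pso_minimize(obj, lo, hi, P=30, N=400, w=0.7, c1=1.5, c2=1.5):
    # Inner iteration (steps 2-3): plain global-best PSO, cf. Eq. 10-13
    x = lo + rng.random((P, 2)) * (hi - lo)
    v = np.zeros_like(x)
    pbest, pval = x.copy(), np.array([obj(p) for p in x])
    gbest = pbest[np.argmin(pval)].copy()
    for _ in range(N):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        gbest = pbest[np.argmin(pval)].copy()
    return gbest

lam, rho, v_prev = 0.0, 10.0, np.inf
for m in range(8):                               # outer iteration (steps 2-6)
    xm = pso_minimize(lambda x: al(x, lam, rho), -5.0, 5.0)
    vm = abs(h(xm))                              # violation level (cf. Eq. 7)
    lam = lam + rho * h(xm)                      # multiplier update (Eq. 4)
    if vm > 0.5 * v_prev and vm > 1e-8:          # monotonic update (cf. Eq. 6)
        rho *= 10.0
    v_prev = vm
```

With these settings the outer loop drives x^{m,*} toward (0.5, 1.5) and λ toward 1, illustrating how the multiplier emerges as a by-product of the iteration.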

To improve the convergence property, a local search procedure with initial point x^{m,*} is embedded in step 4. Since this search operates only under the condition V_m≤ε_fea, x* is most likely to be found in the neighborhood of x^{m,*}, which is then regarded as a potential near-optimal feasible solution. The Hooke-Jeeves method (Hooke and Jeeves, 1961) is employed in this study.
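A compact version of the Hooke-Jeeves pattern search used for this local refinement. This simplified sketch (a single exploratory pass plus one pattern extrapolation per cycle) follows the spirit of the original method rather than the authors' exact implementation:

```python
import numpy as np

def hooke_jeeves(fun, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=1000):
    """Minimal Hooke-Jeeves pattern search: exploratory moves along each
    coordinate, then a pattern move from the improved base point."""
    x = np.asarray(x0, dtype=float)
    fx = fun(x)
    it = 0
    while step > tol and it < max_iter:
        it += 1
        # Exploratory move around the current base point
        y, fy = x.copy(), fx
        for d in range(len(x)):
            for delta in (step, -step):
                trial = y.copy()
                trial[d] += delta
                ft = fun(trial)
                if ft < fy:
                    y, fy = trial, ft
                    break
        if fy < fx:
            # Pattern move: extrapolate along the successful direction
            z = y + (y - x)
            fz = fun(z)
            x, fx = (z, fz) if fz < fy else (y, fy)
        else:
            step *= shrink   # no improvement: refine the mesh
    return x, fx
```

Being derivative-free, the routine only needs the penalty function value, so it plugs into step 4 without any extra problem information.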

The stopping criterion comprises objective function convergence and constraint satisfaction. Therefore, the following criterion (Eq. 14) declares success in solving COP (1):

(14)

NUMERICAL EXPERIMENTS

Benchmark test problems: To investigate the performance of the proposed PSOAL algorithm, 13 well-known benchmark problems from CEC2006 (Liang et al., 2006) are used. Their main characteristics are summarized in Table 1, where D is the problem dimension; Type is the type of objective function; Ratio is the feasibility ratio, calculated as the percentage of feasible solutions among 1,000,000 random solutions (Michalewicz and Schoenauer, 1996); LI, NI, LE and NE are the numbers of linear inequality, nonlinear inequality, linear equality and nonlinear equality constraints, respectively; α is the number of active constraints.
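The feasibility ratio in Table 1 can be estimated exactly as described, by uniform sampling of the search box. A sketch, where the sample count and the inequality-only interface are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def feasibility_ratio(g, lo, hi, n=20000):
    """Monte-Carlo estimate of the feasibility ratio of {x : g(x) <= 0}
    inside the box [lo, hi] (Michalewicz and Schoenauer, 1996)."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    xs = lo + rng.random((n, len(lo))) * (hi - lo)
    feasible = np.all(np.asarray([g(x) for x in xs]) <= 0.0, axis=1)
    return feasible.mean()
```

For example, the unit disk inside the box [-1, 1]² has true ratio π/4 ≈ 0.785, which the estimator recovers to a few tenths of a percent at this sample size.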

Parameter setting: In all experiments, the same parameter settings are used for PSOAL. These parameters are listed in Table 2. For each benchmark problem, 30 independent runs are carried out. All experiments are performed in MATLAB.

Table 1: Characteristics of 13 benchmark test problems

Table 2: Parameter settings for PSOAL

Table 3: Results of benchmark test problems

Results and analysis: The experimental results obtained by PSOAL with the above settings are summarized in Table 3, where Opt is the best known optimal solution reported by Liang et al. (2006); Std is the standard deviation over 30 independent runs; NFE is the mean number of function evaluations; Suc is the number of successful runs. A successful run means the result is within 0.1% of Opt and is feasible.

As shown in Table 3, PSOAL was able to accurately find Opt for all problems except g02, where it found solutions very close to Opt; for g04, g06 and g10 even better solutions were found. From the results it can also be noted that the Std over all runs is quite small.

In terms of the Suc number, all problems except g02 were solved successfully in all 30 runs. The main reason for this failure is that PSO suffers from the curse of dimensionality on high-dimensional problems (Wang, 2013).

Details for solving g13: We choose g13 as an example to illustrate the detailed behavior of PSOAL. g13 has five variables and three nonlinear equality constraints:

minimize   f(x) = e^{x1·x2·x3·x4·x5}
subject to h1(x) = x1² + x2² + x3² + x4² + x5² − 10 = 0
           h2(x) = x2·x3 − 5·x4·x5 = 0
           h3(x) = x1³ + x2³ + 1 = 0
           −2.3 ≤ x1, x2 ≤ 2.3,  −3.2 ≤ x3, x4, x5 ≤ 3.2        (15)

Table 4 shows the best solution obtained by PSOAL together with the best known optimal solution for comparison. It is clear that PSOAL solved g13 very well. In addition, as a feature of PSOAL, the Lagrange multipliers were found to be λ = (0.401652, 0.037955, 0.005217), which reveals that the constraints h1, h2 and h3 are active and, moreover, that the optimal solution is most sensitive to h1 because of its largest Lagrange multiplier.

The behavior of PSOAL is illustrated in Fig. 3. Unlike algorithms designed for unconstrained optimization problems, f(x) dramatically increased in the first few iterations because the solutions were striving to move towards the feasible region. Then, along with the improvement of the solutions' feasibility, f(x) gradually converged to the optimal value.

Table 4: Optimal results of g13
NA: No information available

Fig. 3(a-b): Behaviors of PSOAL for g13, (a) Objective function and constraints and (b) Lagrange multipliers and penalty factor

Figure 3b shows the typical variations of the Lagrange multipliers and the penalty factor. After dramatic initial changes, λ converged rapidly to appropriate values. As expected, ρ increased monotonically during the initial outer iterations until the reset strategy was activated at outer iteration 8, where ρ had become large enough to destroy the balance; a smaller value relieves this pressure by giving a larger weight to f(x). It is clear that the variations of ρ match the proposed update strategy very well. Incidentally, in most cases, the solutions are found before the reset strategy is activated.

Fig. 4: Effect of r on PSOAL performance for g07

Parametric study: Apart from the PSO parameters, the feasibility improvement rate r and the penalty increase rate γ are the only two important parameters that may influence the performance of PSOAL. Because the choice of PSO parameters has already been discussed extensively (Poli et al., 2007), we perform a parametric study to examine the effect of r and γ.

Feasibility improvement rate r: To investigate the effect of r on the performance of the proposed algorithm, we test PSOAL over different r values for g07. The results are presented in Fig. 4. It is observed that on increasing the r value from 0.1 to 1, more function evaluations are required. This is mainly because a larger r value reduces the update frequency of the penalty factor; as a result, more outer iterations are needed to achieve a fine balance between the objective function and the constraint violation. Even so, convergence to the optimal solution is not particularly sensitive to the r value. A proper r value may lie within the broad range of 0.1-0.7.

Penalty increase rate γ: Next, we perform a parametric study of γ on the same problem g07. The parameter γ is varied from 2 to 20 in steps of 2. Figure 5 presents the results; it can be observed that the effect of γ is completely opposite to that of r. This is easily understood since a smaller γ value slows down the increase of the penalty factor. Based on these observations, PSOAL shows excellent robustness against variations of γ and any value greater than 6 is considered acceptable.

Comparison to the state-of-the-art algorithms: To further evaluate the performance of the proposed algorithm, PSOAL is compared with four other PSO-based algorithms, including CHMPSO (Pulido and Coello, 2004), PESO (Zavala et al., 2005), SAVPSO (Lu and Chen, 2008) and IVPSO (Sun et al., 2011). A summary of their parameter settings is presented in Table 5.

Fig. 5: Effect of γ on PSOAL performance for g07

Table 5: Parameter settings for PSOAL, CHMPSO, PESO, SAVPSO and IVPSO
rand is a random value within the range [0, 1]

Based on the experimental results reported in the relevant literature, a comparison summary is provided in Table 6 in terms of best, mean, worst and Std. We can observe that PSOAL performed better than (or the same as) all four other algorithms on nine problems (g01, g02, g04, g06, g07, g08, g10, g12 and g13). For g09 and g11, the disparities between PSOAL and the other algorithms are very small. IVPSO seemed to produce better results for g03 and g05, but the solutions found by PSOAL better satisfy the constraints of these two problems (the constraints are active at the 10E-9 level). Most impressively, PSOAL is very good at handling g07, g10 and g13, which are difficult for the other algorithms to solve in terms of Std.

The overall results suggest that PSOAL has excellent capability in solving various COPs and can offer optimal feasible solutions under severe constraints. Although there are some shortcomings for high-dimensional problems, thanks to the nested structure of PSOAL, it is convenient to improve PSO's search capabilities to enhance the overall performance of the algorithm.

ENGINEERING APPLICATION

Robust PID controller design problem: Consider the standard PID feedback control system shown in Fig. 6, where r(t) is the reference signal, y(t) is the controlled output, d(t) is the measurement noise, W(s) is the weighting function, G(s) is the controlled plant and C(s, k) is an ideal PID controller described by the following transfer function (Eq. 16):

C(s, k) = k_p + k_i/s + k_d·s,  k = (k_p, k_i, k_d)^T        (16)

Table 6: Comparison of results obtained by different PSO-based algorithms

Fig. 6: Block diagram of PID feedback control system

The objective is to find a set of optimal control parameters k that achieves internal stability and noise rejection. The problem can be stated as follows (Eq. 17):

(17)

where, λi is the i-th pole of the closed-loop system.

Kim et al. (2008) solved this problem based on the following system (Eq. 18):

(18)

Table 7 lists the comparison of best results; it is clear that the results obtained by PSOAL are better than the Opt reported by Kim et al. (2008). In addition, the Lagrange multipliers show that no constraints are active. Figure 7 shows the behavior of PSOAL for the robust PID design.

Pressure vessel design problem: In the pressure vessel design problem, a cylindrical vessel is capped at both ends by hemispherical heads as shown in Fig. 8. The objective is to minimize the total cost, including the cost of material, forming and welding.

Table 7: Optimal results for robust PID controller design

Fig. 7(a-b): Behaviors of PSOAL for robust PID design, (a) Objective function and constraints and (b) Lagrange multipliers and penalty factor

Fig. 8: Schematic diagram of the pressure vessel

There are four design variables: Ts (thickness of the shell, x1), Th (thickness of the head, x2), R (inner radius, x3) and L (length of cylindrical section of the vessel, x4). Ts and Th are integer multiples of 0.0625 inch and R and L are continuous. The problem formulation can be stated as Eq. 19:

minimize   f(x) = 0.6224·x1·x3·x4 + 1.7781·x2·x3² + 3.1661·x1²·x4 + 19.84·x1²·x3
subject to g1(x) = −x1 + 0.0193·x3 ≤ 0
           g2(x) = −x2 + 0.00954·x3 ≤ 0
           g3(x) = −π·x3²·x4 − (4/3)·π·x3³ + 1296000 ≤ 0
           g4(x) = x4 − 240 ≤ 0        (19)
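The pressure vessel formulation is easy to evaluate directly. A sketch using the coefficients from the widely used statement of this benchmark in the literature:

```python
import math

def pressure_vessel(x1, x2, x3, x4):
    """Objective and constraints of the pressure vessel problem (cf. Eq. 19).

    x1 = Ts, x2 = Th (integer multiples of 0.0625 in.),
    x3 = R, x4 = L (continuous). Feasible when every g <= 0."""
    cost = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                   # g1: shell thickness
        -x2 + 0.00954 * x3,                                  # g2: head thickness
        -math.pi * x3 ** 2 * x4
        - (4.0 / 3.0) * math.pi * x3 ** 3 + 1296000.0,       # g3: volume
        x4 - 240.0,                                          # g4: length
    ]
    return cost, g
```

In a PSOAL-style run, x1 and x2 would additionally be rounded to the nearest multiple of 0.0625 before each evaluation to respect the discrete thickness requirement.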

Table 8: Optimal results for pressure vessel design

Fig. 9(a-b): Behaviors of PSOAL for pressure vessel design (g3* and g4* are normalized constraints), (a) Objective function and constraints and (b) Lagrange multipliers and penalty factor

Table 8 presents the best solutions obtained by PSOAL and compares them with the Opt reported by Hsieh (2014). It can be seen from Table 8 that the PSOAL results are almost equal to the Opt. However, the real advantage of PSOAL is that the Lagrange multipliers reveal more valuable information, from which we know that two constraints (g1 and g3) are active and that the optimal solution is more sensitive to g1. Figure 9 depicts the behavior of PSOAL for the pressure vessel design.

CONCLUSION

In this study, a particle swarm optimization-based augmented Lagrangian algorithm is proposed to solve constrained optimization and engineering design problems. The proposed PSOAL algorithm uses two nested iterations to optimize unconstrained penalty functions and to update the algorithm parameters, respectively. A monotonic strategy and a reset strategy are developed to update the penalty factor with a fine balance between the objective function and the constraint violation. In addition, a point-based local search procedure is embedded to improve the convergence property.

The performance of PSOAL is tested on 13 well-known benchmark problems and compared with four other PSO-based algorithms. The obtained results show that PSOAL outperforms the other considered algorithms for most of the benchmark problems. Most importantly, the Lagrange multipliers are also obtained as a by-product of PSOAL and thus provide more information about the influence and sensitivity of the constraints on the optimal feasible solution.

Two engineering design problems demonstrate that PSOAL may be used for solving real-world optimization problems. Further research is recommended to explore its application to governor tuning for hydrogenerators.

REFERENCES

  • Kennedy, J. and R. Eberhart, 1995. Particle swarm optimization. Proceedings of the International Conference on Neural Networks, Volume 4, November 27-December 1, 1995, Perth, WA., USA., pp: 1942-1948.


  • Pulido, G.T. and C.A.C. Coello, 2004. A constraint-handling mechanism for particle swarm optimization. Proceedings of the Congress on Evolutionary Computation, Volume 2, June 19-23, 2004, Portland, OR., USA., pp: 1396-1403.


  • Zavala, A.E.M., A.H. Aguirre and E.R.V. Diharce, 2005. Particle Evolutionary Swarm Optimization algorithm (PESO). Proceedings of the 6th Mexican International Conference on Computer Science, September 26-30, 2005, Puebla, Mexico, pp: 282-289.


  • Lu, H. and W. Chen, 2008. Self-adaptive velocity particle swarm optimization for solving constrained optimization problems. J. Global Optimiz., 41: 427-445.


  • Sun, C., J.C. Zeng and J.S. Pan, 2011. An improved vector particle swarm optimization for constrained optimization problems. Inform. Sci., 181: 1153-1163.


  • Powell, M.J.D., 1967. A Method for Nonlinear Constraints in Minimization Problems. Academic Press, New York


  • Fletcher, R., 1975. An ideal penalty function for constrained optimization. IMA J. Applied Math., 15: 319-342.


  • Deb, K. and S. Srivastava, 2012. A genetic algorithm based augmented Lagrangian method for constrained optimization. Comput. Optim. Appl., 53: 869-902.


  • Hestenes, M.R., 1969. Multiplier and gradient methods. J. Optim. Theory Appl., 4: 303-320.


  • Rockafellar, R.T., 1973. A dual approach to solving nonlinear programming problems by unconstrained optimization. Math. Programm., 5: 354-373.


  • Hooke, R. and T.A. Jeeves, 1961. Direct search solution of numerical and statistical problems. J. ACM., 8: 212-229.


  • Liang, J.J., T.P. Runarsson, E. Mezura-Montes, M. Clerc, P.N. Suganthan, C.A.C. Coello and K. Deb, 2006. Problem definitions and evaluation criteria for the CEC 2006 special session on constrained real-parameter optimization. Technical Report, Nanyang Technological University, Singapore.


  • Michalewicz, Z. and M. Schoenauer, 1996. Evolutionary algorithms for constrained parameter optimization problems. Evol. Comput., 4: 1-32.


  • Wang, L., 2013. An improved cooperative particle swarm optimizer. Telecommun. Syst., 53: 147-154.


  • Poli, R., J. Kennedy and T. Blackwell, 2007. Particle swarm optimization: An overview. Swarm Intell., 1: 33-57.


  • Kim, T., I. Maruta and T. Sugie, 2008. Robust PID controller tuning based on the constrained particle swarm optimization. Automatica, 44: 1104-1110.


  • Hsieh, T.J., 2014. A bacterial gene recombination algorithm for solving constrained optimization problems. Applied Math. Comput., 231: 187-204.
