
Journal of Applied Sciences

Year: 2005 | Volume: 5 | Issue: 3 | Page No.: 402-410
DOI: 10.3923/jas.2005.402.410
Implementation of Newmark’s Method for Second Order Initial Value Problems
M. Sikander Hayat Khiyal

Abstract: We discuss the conversion of the one-step Newmark method into a two-step method and prove that the two-step method has the same order as the one-step method. Next we consider the P-stability and phase properties of the method. We consider the implementation of the method using a Newton iteration scheme. We also discuss the estimation of the local error, the predictors used, one step of the algorithm and the convergence criteria. We discuss the stepsize-changing strategy and the interpolants used to calculate back values when these are required. Finally, we present numerical results obtained by applying the method to stiff nonlinear differential systems.



Keywords: Newmark's method, P-stability, phase lag, initial value problems

INTRODUCTION

Consider Newmark's method[1] for the second order initial value problem y″ = f(t, y), y(t₀) = y₀, y′(t₀) = y′₀:

y_{n+1} = y_n + h y′_n + h²(α f_n + β f_{n+1}),   (1)

y′_{n+1} = y′_n + h(δ f_n + γ f_{n+1}),   (2)

where α, β, γ and δ are parameters (γ = 1/2), h is the stepsize and f_j = f(t_j, y_j).

We can write the equation (1) in the form

h y′_n = y_{n+1} - y_n - h²(α f_n + β f_{n+1}),

then

y′_n = (y_{n+1} - y_n)/h - h(α f_n + β f_{n+1}).   (3)

From equation (2), we have

y′_{n+1} = y′_n + h(δ f_n + γ f_{n+1}).

By substituting the value of y′_n from equation (3), we get

y′_{n+1} = (y_{n+1} - y_n)/h + h[(δ - α) f_n + (γ - β) f_{n+1}].

Now substituting in equation (1), written over the step from t_{n+1} to t_{n+2}, we have

y_{n+2} = y_{n+1} + h y′_{n+1} + h²(α f_{n+1} + β f_{n+2}),

or, equivalently,

y_{n+2} - 2y_{n+1} + y_n = h²[β f_{n+2} + (α - β + γ) f_{n+1} + (δ - α) f_n],

where f_{n+1} = f(t_{n+1}, y_{n+1}), f_n = f(t_n, y_n), f_{n-1} = f(t_{n-1}, y_{n-1}), or, lowering the index by one,

y_{n+1} - 2y_n + y_{n-1} = h²(β₀ f_{n+1} + β₁ f_n + β₂ f_{n-1}),   (4)

where:

β₀ = β, β₁ = α - β + γ, β₂ = δ - α.

ORDER CONDITIONS AND P-STABILITY

Order conditions: The local truncation error for the method (4) is given by

T_n = y(t_{n+1}) - 2y(t_n) + y(t_{n-1}) - h²[β₀ y″(t_{n+1}) + β₁ y″(t_n) + β₂ y″(t_{n-1})].   (5)

Now expanding the terms y(t_{n±1}) and y″(t_{n±1}) in Taylor series about t_n, we have

y(t_{n±1}) = y(t_n) ± h y′(t_n) + (h²/2) y″(t_n) ± (h³/6) y‴(t_n) + (h⁴/24) y⁗(t_n) ± ...,

y″(t_{n±1}) = y″(t_n) ± h y‴(t_n) + (h²/2) y⁗(t_n) ± ... .

By substituting the values of y(t_{n±1}) and y″(t_{n±1}) in equation (5), we have

T_n = [1 - (β₀ + β₁ + β₂)] h² y″(t_n) - (β₀ - β₂) h³ y‴(t_n) + [1/12 - (β₀ + β₂)/2] h⁴ y⁗(t_n) + O(h⁵).

The method (4) is consistent if the coefficients of h⁰ and h¹ in equation (5) are zero, so that the method is of order greater than or equal to one. Since the coefficients of h⁰ and h¹ vanish identically, the method (4) is consistent.

For a second order method, the coefficients of h⁰, h¹, h² and h³ must be zero while the coefficient of h⁴ is non-zero. Thus, recalling γ = 1/2 (so that the h³ condition β₀ = β₂ may be written γ - α - β = 0), we have

1 - δ - γ = 0, γ - α - β = 0

and

1/12 - β/2 - (γ - α)/2 ≠ 0.

This implies that

δ = 1 - γ, α = γ - β and β ≠ 1/12.

Thus the necessary and sufficient conditions for second order accurate methods are

δ = 1 - γ = 1/2, α = γ - β = 1/2 - β, β ≠ 1/12, with γ = 1/2.   (6)
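These conditions are easy to check symbolically. The following sketch (Python with sympy; our own illustration, not from the paper, with the coefficients of (4) written as β₀ = β, β₁ = α - β + γ, β₂ = δ - α) applies the two-step form to the monomials t^k and confirms that, under (6), the local truncation error (5) vanishes for k ≤ 3 and equals (2 - 24β)h⁴ for k = 4:

    import sympy as sp

    h, t, beta = sp.symbols('h t beta')
    gamma = sp.Rational(1, 2)
    delta = 1 - gamma                 # second order conditions (6)
    alpha = gamma - beta
    b0, b1, b2 = beta, alpha - beta + gamma, delta - alpha  # coefficients of (4)

    for k in range(5):
        y = t**k                      # test monomial
        y2 = sp.diff(y, t, 2)         # its second derivative
        # local truncation error (5) of the two-step form (4)
        lte = (y.subs(t, t + h) - 2*y + y.subs(t, t - h)
               - h**2*(b0*y2.subs(t, t + h) + b1*y2 + b2*y2.subs(t, t - h)))
        print(k, sp.expand(lte))      # 0 for k <= 3; (2 - 24*beta)*h**4 for k = 4

The nonzero result for k = 4 vanishes only at β = 1/12, which is why that value is excluded.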

P-stability: Lambert and Watson[2] defined P-stability: a method is P-stable if its interval of periodicity is (0, ∞), that is, if the numerical solutions of the periodic test equation remain bounded for every stepsize.

To analyse the stability properties of the method, we apply the method (4) to the scalar test equation y″ = -c²y, c real. This yields

y_{n+1} - 2y_n + y_{n-1} = -H²(β₀ y_{n+1} + β₁ y_n + β₂ y_{n-1}), H = ch,

or, equivalently,

(1 + β₀H²) y_{n+1} - (2 - β₁H²) y_n + (1 + β₂H²) y_{n-1} = 0.   (7)

Equation (7) is a difference equation of the form[3]

τ₂ y_{n+1} + τ₁ y_n + τ₀ y_{n-1} = 0,

where:

τ₂ = 1 + β₀H², τ₁ = -(2 - β₁H²), τ₀ = 1 + β₂H².

The general solution of this difference equation is

y_n = B₁λ₁ⁿ + B₂λ₂ⁿ,

where B₁ and B₂ are constants and λ₁ and λ₂ are the zeros of the stability polynomial

τ₂λ² + τ₁λ + τ₀.   (8)

Making the Routh-Hurwitz transformation λ = (1 + z)/(1 - z), the stability polynomial (8) becomes

(τ₂ - τ₁ + τ₀)z² + 2(τ₂ - τ₀)z + (τ₂ + τ₁ + τ₀) = 0.

Thus, the necessary and sufficient conditions for P-stability are

τ₂ - τ₀ = 0, τ₂ - τ₁ + τ₀ > 0, τ₂ + τ₁ + τ₀ > 0, for all real H.

Since τ₂ = τ₀, we find from (7) that

β₀ = β₂, i.e. δ - α = β,   (9)

which is also a condition for second order accuracy. This is a necessary condition for P-stability. When this condition is satisfied, the transformed stability equation is:

(2τ₂ - τ₁)z² + (2τ₂ + τ₁) = 0,

and the necessary and sufficient conditions for P-stability are

2τ₂ - τ₁ > 0, 2τ₂ + τ₁ > 0, for all real H,   (10)

where:

τ₂ = τ₀ = 1 + β₀H², τ₁ = -(2 - β₁H²).

From the conditions for second order accuracy (6), we have

β₀ = β₂ = β

and

β₁ = 1 - 2β.

Hence the necessary and sufficient conditions for P-stability, (10), become

2τ₂ + τ₁ = H² > 0 and 2τ₂ - τ₁ = 4 + (4β - 1)H² > 0

for all real values of ch. Thus

4β - 1 ≥ 0,

or, equivalently,

β ≥ 1/4.   (11)

RESULTS

The necessary and sufficient conditions for second order accuracy are

γ = δ = 1/2, α = 1/2 - β, β ≠ 1/12,

and for P-stability

β ≥ 1/4.

By using the second order accuracy condition (6), equation (4) can be rewritten in the form

y_{n+1} - 2y_n + y_{n-1} = h²[β f_{n+1} + (1 - 2β) f_n + β f_{n-1}],   (12)

which is second order accurate and P-stable if β ≥ 1/4. If β = 1/4, then this is the trapezium rule.

The method (12) is Newmark's second order method written as an equivalent two-step method. Method (12) is equivalent to the following Newmark method (γ = 1/2):

y_{n+1} = y_n + h y′_n + h²[(1/2 - β) f_n + β f_{n+1}],   (13)

y′_{n+1} = y′_n + (h/2)(f_n + f_{n+1}),   (14)

which is second order accurate and P-stable if β ≥ 1/4.
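As an illustration of (13) and (14), a minimal fixed-stepsize sketch in Python (our own, with hypothetical names) applies the pair to the linear test problem y″ = -c²y, for which the implicit relation (13) can be solved for y_{n+1} directly:

    import numpy as np

    def newmark_linear(c, y0, dy0, h, nsteps, beta=0.25):
        """One-step Newmark pair (13)-(14), gamma = 1/2, alpha = 1/2 - beta,
        applied to y'' = -c**2 * y; beta >= 1/4 gives the P-stable family."""
        y, dy = y0, dy0
        f = -c**2 * y
        for _ in range(nsteps):
            # (13) with f_{n+1} = -c**2 * y_{n+1}, solved for y_{n+1}
            y_new = (y + h*dy + h**2*(0.5 - beta)*f) / (1.0 + beta*(c*h)**2)
            f_new = -c**2 * y_new
            dy = dy + 0.5*h*(f + f_new)   # (14): trapezoidal derivative update
            y, f = y_new, f_new
        return y, dy

    # y'' = -y, y(0) = 0, y'(0) = 1 has the exact solution y = sin t
    print(newmark_linear(1.0, 0.0, 1.0, 0.01, 100))   # close to (sin 1, cos 1)

For a nonlinear f the implicit stage must instead be solved iteratively, as described under IMPLEMENTATION below.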

PHASE PROPERTIES

Phase analysis: When Newmark's method (4) is applied to the scalar test equation

y″ = -c²y + v e^{iwt},   (15)

for real c, v and w, we obtain a recurrence relation of the form

r₂ y_{n+1} + r₁ y_n + r₀ y_{n-1} = h²v(b₂ e^{iwh} + b₁ + b₀ e^{-iwh}) e^{iwt_n},   (16)

where the {r_i} and {b_i} each depend on ch. The general solution of this equation is:

y_n = B₁λ₁ⁿ + B₂λ₂ⁿ + Q₂ e^{iwnh},

where B₁ and B₂ are constants and Q₂e^{iwnh} is a particular solution of the recurrence relation (taking t_n = nh), with Q₂ satisfying:

(r₂ e^{iwh} + r₁ + r₀ e^{-iwh}) Q₂ = h²v(b₂ e^{iwh} + b₁ + b₀ e^{-iwh}).   (17)
The analytical forced oscillation, v e^{iwt}/(c² - w²), has a real amplitude, so the numerical forced oscillation is in phase with its analytical counterpart if Q₂ is also real for all real wh[4,5]. Now we will prove that Q₂ is real. We can rewrite (17) in the form:

Q₂ = (C + iD)/(A + iB),

where:

A = (r₂ + r₀) cos wh + r₁, B = (r₂ - r₀) sin wh,

C = h²v[(b₂ + b₀) cos wh + b₁], D = h²v(b₂ - b₀) sin wh,

or

Q₂ = [(CA + DB) + i(DA - CB)]/(A² + B²),

so Q₂ is real if and only if DA - CB = 0 for all real values of wh.

Then

DA - CB = h²v sin wh {(b₂ - b₀)[(r₂ + r₀) cos wh + r₁] - (r₂ - r₀)[(b₂ + b₀) cos wh + b₁]}.

Thus DA - CB = 0 for all real values of wh if r₂ = r₀ and b₂ = b₀. So a symmetric method (16) for which

r₂ = r₀, b₂ = b₀   (18)

is certainly in phase.

When Newmark's method (4) is applied to the scalar test equation (15), we obtain

(1 + β₀H²) y_{n+1} - (2 - β₁H²) y_n + (1 + β₂H²) y_{n-1} = h²v(β₀ e^{iwh} + β₁ + β₂ e^{-iwh}) e^{iwt_n},

where:

H = ch,

which is of the form (16), for which

r₂ = 1 + β₀H², r₁ = -(2 - β₁H²), r₀ = 1 + β₂H², b₂ = β₀, b₁ = β₁, b₀ = β₂.

Hence, from equation (18), it is in phase whenever

β₀ = β₂,

which is also a necessary condition for P-stability (eq. 9).

Phase lag: The definition of phase lag is given by Brusa and Nigro[6]: it is the leading term in H - θ(H), where e^{±iθ(H)} are the roots of the characteristic equation of the method applied to the test equation with v = 0, and H = ch.

We can rewrite equation (16), with v = 0, in the form

y_{n+1} - 2Ψ₁ y_n + δ₁ y_{n-1} = 0,   (19)

where:

Ψ₁ = (1 - β₁H²/2)/(1 + β₀H²), δ₁ = (1 + β₂H²)/(1 + β₀H²).   (20)

For an in phase method, β₀ = β₂ and hence δ₁ = 1. Substituting y_n = λⁿ in (19) gives

λ² - 2Ψ₁λ + 1 = 0,

where ch = H, so that the roots are λ = e^{±εθ(H)} with ε = i and cos θ(H) = Ψ₁. Expanding in Taylor series

cos θ = 1 - θ²/2 + θ⁴/24 - ...,

we have

cos θ(H) = Ψ₁ = 1 - (β₀ + β₁/2)H² + β₀(β₀ + β₁/2)H⁴ + O(H⁶).   (21)

Let θ(H) = η₀H + ξH² + ζH³ + ... . Then this becomes

1 - (η₀²/2)H² - η₀ξH³ + [η₀⁴/24 - (ξ² + 2η₀ζ)/2]H⁴ + O(H⁵) = Ψ₁.

On equating coefficients of H^j, j = 2, 3, 4, we obtain

η₀²/2 = β₀ + β₁/2,   (22)

η₀ξ = 0,   (23)

η₀⁴/24 - (ξ² + 2η₀ζ)/2 = β₀(β₀ + β₁/2).   (24)

By substituting the values of Ψ₁ and δ₁ from (20) in (22), we have η₀² = 2β₀ + β₁ = α + β + γ. By the order condition (6), α + β + γ = α + β + δ = 1. Thus

η₀ = 1.

From equation (23), ξ = 0, since η₀ = 1. From (24),

1/24 - ζ = β₀(β₀ + β₁/2) = β/2,

implying that

ζ = 1/24 - β/2.

Thus

θ(H) = H + (1/24 - β/2)H³ + O(H⁵).

Thus the phase lag is

H - θ(H) = (1/2)(β - 1/12)H³ + O(H⁵);

for β = 1/4 (the trapezium rule) this is H³/12.

If β = 1/12 (in which case the method is not P-stable), then we take

θ(H) = H + ξ₁H⁴ + ζ₁H⁵ + ... .

Substituting in (21), with the expansion of Ψ₁ carried one term further (for β = 1/12, Ψ₁ = 1 - H²/2 + H⁴/24 - H⁶/288 + O(H⁸)), and equating the coefficients of H^j, j = 5, 6, to zero, we have

ξ₁ = 0, ζ₁ + 1/720 = 1/288,

then

ζ₁ = 1/480.

Hence the phase lag is

H - θ(H) = -H⁵/480 + O(H⁷).
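These expressions are easy to verify numerically. A small sketch (ours, not from the paper) compares H - arccos Ψ₁, with Ψ₁ from (20) for the second order P-stable family, against the leading term (1/2)(β - 1/12)H³:

    import numpy as np

    def phase_lag(H, beta):
        """Phase lag H - theta(H), where cos(theta) = psi_1 from (20)
        with beta0 = beta2 = beta and beta1 = 1 - 2*beta."""
        psi1 = (1.0 - (0.5 - beta)*H**2) / (1.0 + beta*H**2)
        return H - np.arccos(psi1)

    H = 0.1
    for beta in (0.25, 0.5):
        print(beta, phase_lag(H, beta), 0.5*(beta - 1.0/12.0)*H**3)
    # the two columns agree to O(H**5); beta = 1/4 reproduces the
    # familiar trapezium rule phase lag H**3/12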

IMPLEMENTATION

Iterative scheme: Consider the P-stable, in phase, second order one-step version of the Newmark method given by (13) and (14),

y_{n+1} = y_n + h y′_n + h²[(1/2 - β) f_n + β f_{n+1}],

y′_{n+1} = y′_n + (h/2)(f_n + f_{n+1}),

for any β ≥ 1/4, where f_j = f(t_j, y_j).

When this method is applied to a nonlinear differential system

y″ = f(t, y), y(t₀) = y₀, y′(t₀) = y′₀,   (25)

a nonlinear algebraic system must be solved at each step. This may be solved by using the Newton iteration scheme. Thus, on defining

F(y_{n+1}) = y_{n+1} - h²β f(t_{n+1}, y_{n+1}) - g_n, g_n = y_n + h y′_n + h²(1/2 - β) f_n,

where g_n is known, the Newton iteration scheme is

y^{(p)}_{n+1} = y^{(p-1)}_{n+1} - W(y^{(p-1)}_{n+1})^{-1} F(y^{(p-1)}_{n+1}), p = 1, 2, ...,   (26)

where:

W(y) = I - h²β ∂f/∂y(t_{n+1}, y)

and ∂f/∂y is the Jacobian matrix of f with respect to y. Suppose that J is an approximation for the Jacobian matrix; then

W(y) ≈ I - h²βJ.

Thus the Newton iteration scheme becomes

y^{(p)}_{n+1} = y^{(p-1)}_{n+1} - (I - h²βJ)^{-1} F(y^{(p-1)}_{n+1}),   (27)

where the starting value y^{(0)}_{n+1} is obtained from a predictor.

The formation of y′_n is discussed below, after the one step of the algorithm.
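A sketch of one modified Newton solve (27) for the one-step version, in Python with numpy and scipy (the function names and interfaces are our own, not the paper's), is:

    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def solve_stage(f, tn, yn, dyn, fn, h, beta, lu_piv, eps, pmax=5):
        """Modified Newton iteration (27) for y_{n+1} in the one-step method.
        lu_piv holds the LU factors of I - h**2*beta*J for an approximate J,
        e.g. lu_piv = lu_factor(np.eye(len(yn)) - h**2*beta*J)."""
        t1 = tn + h
        g = yn + h*dyn + h**2*(0.5 - beta)*fn   # known part of (13)
        yk = yn + h*dyn                         # Euler predictor (28)
        for _ in range(pmax):
            r = yk - h**2*beta*f(t1, yk) - g    # residual F(y)
            d = lu_solve(lu_piv, r)             # (I - h**2*beta*J) d = F(y)
            yk = yk - d
            if np.linalg.norm(d) <= eps:        # absolute error test
                return yk, True
        return yk, False                        # failed to converge

The same routine serves the two-step version if g is replaced by the corresponding known part of (12) (the quantity w_n defined below).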

Next, we consider the P-stable, in phase, second order two-step version of the Newmark method given by (12),

y_{n+1} - 2y_n + y_{n-1} = h²[β f_{n+1} + (1 - 2β) f_n + β f_{n-1}],

for any β ≥ 1/4, where f_j = f(t_j, y_j).

When this method is applied to a nonlinear differential system (25), the Newton iteration scheme is

y^{(p)}_{n+1} = y^{(p-1)}_{n+1} - (I - h²βJ)^{-1} F(y^{(p-1)}_{n+1}),

where,

F(y_{n+1}) = y_{n+1} - h²β f(t_{n+1}, y_{n+1}) - w_n, w_n = 2y_n - y_{n-1} + h²[(1 - 2β) f_n + β f_{n-1}].

Local error estimation and predictors: To estimate the local error, we consider an approach based on comparing the predicted and corrected values. As the corrector is a second order Newmark method, the order of the predictor must be one to enable us to compute a local error estimate. We use a predictor based on information available on the current step. We can use the predictor

y^{(0)}_{n+1} = y_n + h y′_n,   (28)

which is just Euler's method.

We can also use the predictor based on the Lagrange interpolant which interpolates to y_{n-1} and y_n; we obtain

y^{(0)}_{n+1} = 2y_n - y_{n-1}   (29)

(this predictor can be used only if two back values are known). For the two-step version of the Newmark method, we use this predictor.
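In code, the two predictors are one-liners (a sketch in our notation, for a constant stepsize):

    def predict_euler(yn, dyn, h):
        """(28): first order Euler predictor."""
        return yn + h*dyn

    def predict_lagrange(yn, ynm1):
        """(29): linear extrapolation through y_{n-1} and y_n."""
        return 2.0*yn - ynm1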

Suppose u(t) is the local solution, y^{(0)}_{n+1} is the predicted value of y_{n+1} obtained from (28) or (29) and y^{(Q)}_{n+1} is the corrected value obtained from the iteration (27). Then, provided the local error in y^{(0)}_{n+1} is O(h²), we have

u(t_{n+1}) - y^{(0)}_{n+1} = Ah² + O(h³), u(t_{n+1}) - y^{(Q)}_{n+1} = O(h³),   (30)

where A is independent of h. Subtracting these equations, we have

y^{(Q)}_{n+1} - y^{(0)}_{n+1} = Ah² + O(h³),

and hence the local error in y^{(0)}_{n+1} is given by

u(t_{n+1}) - y^{(0)}_{n+1} = y^{(Q)}_{n+1} - y^{(0)}_{n+1} + O(h³).

Thus the local error may be estimated from

EST = ||y^{(Q)}_{n+1} - y^{(0)}_{n+1}||.   (31)

One step of the algorithm: Suppose h is the stepsize to be used for the next step. If we use the one-step version of Newmark's method, we need y_n, y′_n and f_n, which are available from the previous computation. For the two-step version of Newmark's method, we need y_n, y_{n-1}, f_n and f_{n-1}, which are available from previous computations. Also an approximation J to ∂f/∂y is known. Suppose that the triangular factors of I - h²βJ are available and that a predicted value y^{(0)}_{n+1} has already been computed from (28) or (29). The iteration (27) for the Newmark method can be implemented as shown in the following algorithm.

Algorithm: (Second order P-stable Newmark's method)

Set p = 1 and repeat:

solve (I - h²βJ) Δ^{(p)} = -F(y^{(p-1)}_{n+1}) for Δ^{(p)}, using the stored triangular factors;
set y^{(p)}_{n+1} = y^{(p-1)}_{n+1} + Δ^{(p)} and p = p + 1;

until convergence (Q iterations, say).

(a) When we use the one-step version of Newmark's method, we take

y_{n+1} = y^{(Q)}_{n+1}.

Before proceeding to the next step, we set t_{n+1} = t_n + h. We wish to calculate y′_{n+1} without evaluating f(t_{n+1}, y_{n+1}). To do so, we use (13) to compute f_{n+1} and then (14) to compute y′_{n+1}. Thus

f_{n+1} = [y_{n+1} - y_n - h y′_n - h²(1/2 - β) f_n]/(h²β)

and

y′_{n+1} = y′_n + (h/2)(f_n + f_{n+1}).

(b) When we use the two-step version of Newmark's method, we take

y_{n+1} = y^{(Q)}_{n+1}.

Before proceeding to the next step, we set t_{n+1} = t_n + h. To calculate f_{n+1} without evaluating f(t_{n+1}, y_{n+1}), we use (12). Thus

f_{n+1} = [y_{n+1} - 2y_n + y_{n-1} - h²(1 - 2β) f_n - h²β f_{n-1}]/(h²β).

To test for convergence in the algorithm, we check the condition

||y^{(p)}_{n+1} - y^{(p-1)}_{n+1}|| ≤ ε.

If this absolute error test is satisfied, then the iteration has converged. We take ε = c·Tol, where Tol is the local error tolerance supplied by the user and c is an appropriate constant; we choose c = 1.0. If the test is not satisfied and p ≥ 2, we form an estimate of the rate of convergence,

Rate = ||y^{(p)}_{n+1} - y^{(p-1)}_{n+1}|| / ||y^{(p-1)}_{n+1} - y^{(p-2)}_{n+1}||.

To ensure that convergence is not too slow, we assume that the iteration fails to converge whenever Rate > 0.9. If Rate ≤ 0.9, then we perform another step of the iteration and check the above condition. We continue in this way until either the absolute error test is satisfied or the number of iterations exceeds some upper limit Pmax (to obtain our results, we choose Pmax = 5). Thus if p > Pmax the iteration is deemed to have failed to converge. If the iteration diverges, some combination of stepsize reduction and reassembly and refactorisation of the iteration matrix (I - βh²J), with or without reevaluation of the Jacobian matrix J, should be used. For the numerical results obtained below, we have tested two strategies.
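The convergence control just described might be sketched as follows (our own helper; update(y) is assumed to perform one iteration of (27) and return the new iterate):

    import numpy as np

    def iterate_with_rate(update, y0, eps, pmax=5, max_rate=0.9):
        """Absolute error test ||y(p) - y(p-1)|| <= eps with the
        rate-of-convergence safeguard Rate <= 0.9 and the cap p <= pmax."""
        yprev = y0
        y = update(yprev)
        dprev = np.linalg.norm(y - yprev)
        p = 1
        while dprev > eps:
            if p >= pmax:
                return y, False    # too many iterations: failure
            yprev, y = y, update(y)
            d = np.linalg.norm(y - yprev)
            if dprev > 0 and d/dprev > max_rate:
                return y, False    # converging too slowly: treat as divergence
            dprev = d
            p += 1
        return y, True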

Strategy 1: If the iteration diverges, then reevaluate the Jacobian (if it has not already been evaluated at this point), halve the stepsize, refactorise the iteration matrix and repeat the step. If it diverges again, then halve the stepsize, refactorise the iteration matrix and repeat the step. Continue halving the stepsize and refactorising the iteration matrix until the iteration converges.

If the iteration converges but the step has to be rejected because the local error test is not satisfied, then repeat the step with a smaller stepsize. In this case reevaluate the Jacobian (if it has not already been evaluated at this point) and refactorise the iteration matrix. If the local error test is satisfied and the stepsize is changed, then again reevaluate the Jacobian (since this is a new point, the Jacobian cannot have been evaluated here before) and refactorise the iteration matrix.

Strategy 2: The Jacobian is reevaluated the first time an iteration diverges and the iteration matrix is refactorised; then we repeat the integration step. If divergence occurs again, then we halve the stepsize and refactorise the iteration matrix, without reevaluating the Jacobian, and repeat the step. We continue halving the stepsize and refactorising the iteration matrix until the iteration converges.

If the iteration converges but the step has to be rejected because the local error test EST ≤ Tol is not satisfied, the step is repeated with a smaller stepsize and the iteration matrix is refactorised without reevaluating the Jacobian. If the local error test EST ≤ Tol is satisfied, we accept the step; for the next step, if the stepsize is changed, then we refactorise the iteration matrix without reevaluating the Jacobian, so that the iteration matrix is refactorised whenever the stepsize is changed.
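A schematic sketch of the resulting step control under Strategy 2 (the interfaces attempt_step, reeval_jacobian and refactorise are hypothetical, not the paper's code):

    def advance_strategy2(attempt_step, reeval_jacobian, refactorise, h, tol):
        """Retry loop for one integration step under Strategy 2.
        attempt_step(h) -> (converged, est); refactorise(h) factors
        the iteration matrix I - h**2*beta*J for the current J."""
        jacobian_is_fresh = False
        while True:
            converged, est = attempt_step(h)
            if not converged:
                if not jacobian_is_fresh:   # first divergence: new Jacobian, same h
                    reeval_jacobian()
                    jacobian_is_fresh = True
                else:                       # later divergences: halve the stepsize
                    h = 0.5*h
                refactorise(h)
            elif est > tol:                 # error test failed: smaller h via (32)
                h = (2.0**-0.5)*h*(tol/est)**0.5
                refactorise(h)
            else:
                return h, est               # step accepted

Strategy 1 differs in reevaluating the Jacobian, where possible, on every divergence and on every rejected or stepsize-changing step.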

Changing the stepsize: The stepsize to be used for the next (or repeated) step may be calculated from the local error estimate and the local error tolerance. A local error estimate for the second order Newmark method may be obtained from the relation (31):

EST = ||y^{(Q)}_{n+1} - y^{(0)}_{n+1}||,

where y^{(Q)}_{n+1} is the corrected value and y^{(0)}_{n+1} is the first order predicted value. Then, we have from (30) that

EST ≈ ||A||h².

This implies that a step taken with stepsize h_new would give an estimate of approximately ||A||h_new². If we choose the stepsize so that the error estimate on the next (or repeated) step is expected to equal the user-supplied local error tolerance Tol, we find that

||A||h_new² = Tol,

or,

(h_new/h)² = Tol/EST.

This implies that

h_new = h(Tol/EST)^{1/2}.

In practice, since the above analysis only holds asymptotically as h → 0, we take

h_new = ρh(Tol/EST)^{1/2},   (32)

where ρ is a safety factor, whose purpose is to avoid failed steps (ρ is often taken to be 2^{-1/(p+1)}, where p is the order of the predictor, or sometimes 0.8 or 0.9).

To avoid large fluctuations in the stepsize caused by local changes in the error estimate, we put a restriction on the amount by which the stepsize may be increased or decreased. We do not allow the stepsize to decrease by more than a factor ρ₁ or increase by more than a factor ρ₃. Also, to avoid the extra function evaluations, Jacobian evaluations and matrix factorisations involved in changing the stepsize, we do not increase the stepsize at all unless it can be increased by a factor of at least ρ₂, where ρ₂ ≤ ρ₃.

For the numerical results, we have taken ρ = 2^{-0.5}, ρ₁ = 0.2, ρ₂ = 2.0 and ρ₃ = 5.0.
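In code, the stepsize selection (32) with these safeguards might read (our sketch):

    def new_stepsize(h, est, tol, rho=2.0**-0.5, rho1=0.2, rho2=2.0, rho3=5.0):
        """Stepsize for the next (or repeated) step from (32): h may shrink
        by at most the factor rho1, grow by at most rho3, and is not grown
        at all unless it can grow by at least rho2."""
        r = rho*(tol/est)**0.5 if est > 0.0 else rho3
        r = min(max(r, rho1), rho3)
        if 1.0 < r < rho2:
            r = 1.0    # skip small increases: they are not worth a refactorisation
        return r*h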

When we apply the two-step version of Newmark's method, suppose y_{n+1} is accepted and a new stepsize h_new is predicted for the next step. We need to find approximations ȳ_n and f̄_n to y and f at the back point t̄_n = t_{n+1} - h_new. We can take ȳ_n = P_{1,n+1}(t̄_n) and evaluate the function f̄_n = f(t̄_n, ȳ_n), where P_{1,n+1}(t) is the interpolating polynomial of degree one satisfying the conditions P_{1,n+1}(t_{n+1}) = y_{n+1} and P_{1,n+1}(t_n) = y_n. With θ = h_new/h, we find that

ȳ_n = y_{n+1} - θ(y_{n+1} - y_n).   (33)

We can take

ȳ_n = P_{2,n+1}(t̄_n), f̄_n = P″_{2,n+1}(t̄_n),

where P_{2,n+1}(t) is the interpolating polynomial of degree two satisfying the conditions P_{2,n+1}(t_{n+1}) = y_{n+1} and P_{2,n+1}(t_n) = y_n and either (a) P_{2,n+1}(t_{n-1}) = y_{n-1} or (b) P″_{2,n+1}(t_{n+1}) = f_{n+1}. We find that

(a)
ȳ_n = y_{n+1} - θ(y_{n+1} - y_n) + (1/2)θ(θ - 1)(y_{n+1} - 2y_n + y_{n-1}),   (34)

(b)
ȳ_n = y_{n+1} - θ(y_{n+1} - y_n) - (1/2)θ(1 - θ)h²f_{n+1},   (35)

and

f̄_n = (y_{n+1} - 2y_n + y_{n-1})/h²,   (36)
f̄_n = f_{n+1},   (37)

respectively; no extra evaluation of f is then required.

We can take ȳ_n = P_{3,n+1}(t̄_n) and f̄_n = P″_{3,n+1}(t̄_n), where P_{3,n+1}(t) is the interpolating polynomial of degree three satisfying the conditions P_{3,n+1}(t_{n+1}) = y_{n+1}, P_{3,n+1}(t_n) = y_n, P″_{3,n+1}(t_{n+1}) = f_{n+1} and P″_{3,n+1}(t_n) = f_n.

We find that

ȳ_n = y_{n+1} - θ(y_{n+1} - y_n) - (h²/6)[θ(θ - 1)(θ - 2) f_{n+1} + θ(1 - θ)(1 + θ) f_n]   (38)

and

f̄_n = (1 - θ) f_{n+1} + θ f_n.   (39)

We can use any one of the interpolating polynomials discussed above to find the approximations ȳ_n and f̄_n. A similar approach can be adopted when y_{n+1} has to be rejected (because of an error test failure) and the step repeated with a new stepsize h, and also when the iteration diverges and the stepsize is halved.
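For instance, the degree-one choice (33) might be coded as follows (a sketch; the quadratic and cubic variants (34)-(39) follow the same pattern without the extra f evaluation):

    def back_values_linear(t_np1, y_np1, y_n, h, h_new, f):
        """Back values at t_bar = t_{n+1} - h_new via the linear
        interpolant (33); costs one f evaluation (counted as FP1)."""
        theta = h_new/h
        t_bar = t_np1 - h_new
        y_bar = y_np1 - theta*(y_np1 - y_n)   # (33)
        return t_bar, y_bar, f(t_bar, y_bar)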

Starting technique and numerical results: The two-step version of Newmark's method requires the starting values y₀ and y₁. Given y₀ and y′₀ (the initial conditions), we calculate y₁ with the one-step version before applying the two-step version of Newmark's method, and form the error estimate. If the error test is satisfied and y₁ is accepted, then we proceed with the two-step version of Newmark's method. For the one-step version of Newmark's method, we use the predictor given by (28), while for the two-step version we use (29).

On a change of stepsize, when we apply the two-step version of Newmark's method, we have considered the different choices of interpolating polynomial discussed above. For the numerical results, we have used the following notation for these choices.

NM1ST: One-step version of the Newmark method.

NM2P1: Two-step version of the Newmark method, applying the interpolating polynomial of degree one given by (33) to approximate the back values.

NM2P2: Two-step version of the Newmark method, applying the interpolating polynomial of degree two given by (34)-(37) to approximate the back values.

NM2P3: Two-step version of the Newmark method, applying the cubic Hermite interpolant given by (38) and (39) to approximate the back values.

For the numerical results, we have solved the following initial value problems.

Example 1: The scalar nonlinear problem.

The next two problems both use the same differential equation but with different initial conditions. These nonlinear differential equations are solved with:

Example 2: y₁(0) = 1, y′₁(0) = 0, y₂(0) = 10⁻⁴, y′₂(0) = 0.
Example 3: y₁(0) = 1, y′₁(0) = 0, y₂(0) = 10⁻⁸, y′₂(0) = 0.

The above problems have been solved for t ∈ [0, 6]. The error at the end point is obtained by comparing the computed solution with the solution obtained by using a fixed-step code with a small stepsize, for Example 1 and for the first equation of Examples 2 and 3. For the second equation of Examples 2 and 3, we have used the exact solution. We denote the error at the end point by MAXERR, where for the scalar equation it is |Error at t = 6| and for the system it is the maximum of the component errors at t = 6. For these examples the stepsize is chosen initially to be h = 1.0. The results are obtained with the maximum number of iterations permitted, Pmax, equal to 5.

Table 1: Methods are compared for Example 1 with Pmax = 5, Tol = 10⁻² and ε = 10⁻²

Table 2: Methods are compared for Example 1 with Pmax = 5, Tol = 10⁻⁴ and ε = 10⁻⁴

Table 3: Methods are compared for Example 2 with Pmax = 5, Tol = 10⁻² and ε = 10⁻²

Table 4: Methods are compared for Example 2 with Pmax = 5, Tol = 10⁻⁴ and ε = 10⁻⁴

Table 5: Methods are compared for Example 3 with Pmax = 5, Tol = 10⁻² and ε = 10⁻²

Table 6: Methods are compared for Example 3 with Pmax = 5, Tol = 10⁻⁴ and ε = 10⁻⁴

Table 7: Methods are compared for Example 1 with Pmax = 5, Tol = 10⁻² and ε = 10⁻²

Table 8: Methods are compared for Example 2 with Pmax = 5, Tol = 10⁻² and ε = 10⁻²

Table 9: Methods are compared for Example 3 with Pmax = 5, Tol = 10⁻² and ε = 10⁻²

In Tables 1-9, we give the results of solving these problems with Tol = 10⁻² and 10⁻⁴.

In each case, we present the following statistics:

Number of evaluations of the differential equation right-hand side f, FCN;
Number of evaluations of the Jacobian ∂f/∂y, JC;
Number of iterations overall, NIT;
Number of iterations in steps where the iteration converged, NST;
Number of steps overall, STP;
Number of successful steps to complete the integration, SST;
Number of steps where the stepsize is changed, CS;
Number of failed steps, FS;
Number of LU factorisations of the iteration matrix, FT;
Number of function evaluations incurred by using the interpolating polynomial of degree one given by (33) to find back values on change of stepsize, FP1.

The results are presented in Tables 1-6 for Strategy 1 and in Tables 7-9 for Strategy 2.

REFERENCES

  • Newmark, N.M., 1959. A method of computation for structural dynamics. J. Eng. Mech. Div. ASCE, 85: 67-94.


  • Lambert, J.D. and I.A. Watson, 1976. Symmetric multistep methods for periodic initial value problems. IMA J. Applied Math., 18: 189-202.


  • Coleman, J.P., 1989. Numerical methods for y″ = f(x, y) via rational approximation for the cosine. IMA J. Numer. Anal., 9: 145-165.


  • Gladwell, I. and R.M. Thomas, 1983. Damping and phase analysis for some methods for solving second order ordinary differential equations. Int. J. Numer. Meth. Eng., 19: 495-503.


  • Khiyal, M.S.H., 1991. Efficient algorithms based on direct hybrid methods for second order initial value problems. Ph.D. Thesis, UMIST, Manchester, UK.


  • Brusa, L. and L. Nigro, 1980. A one step method for direct integration of structural dynamics equations. Int. J. Numer. Methods Eng., 15: 685-699.
