INTRODUCTION
Consider Newmark’s method^{[1]}
where α, β, γ and δ are parameters (γ = 1/2).
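As a concrete illustration (not the paper's general α, β, γ, δ form), the familiar two-parameter statement of Newmark's method for y″ = f(t, y) advances (y_n, y′_n) by y_{n+1} = y_n + h y′_n + h²[(1/2 − β)f_n + β f_{n+1}] and y′_{n+1} = y′_n + h[(1 − γ)f_n + γ f_{n+1}]. A minimal sketch follows, with the implicit relation resolved by fixed-point iteration for brevity (the iteration scheme actually used later in the paper is a Newton iteration):

```python
def newmark_step(f, t, y, yd, h, beta=0.25, gamma=0.5):
    """One step of the two-parameter form of Newmark's method for y'' = f(t, y).

    The update is implicit in y_{n+1}; here it is resolved by fixed-point
    iteration, which converges when h^2 * beta * |df/dy| < 1.
    """
    a = f(t, y)                                # acceleration y''_n
    y_new = y + h * yd + 0.5 * h * h * a       # explicit initial guess
    for _ in range(50):
        a_new = f(t + h, y_new)
        y_next = y + h * yd + h * h * ((0.5 - beta) * a + beta * a_new)
        if abs(y_next - y_new) < 1e-14:
            y_new = y_next
            break
        y_new = y_next
    a_new = f(t + h, y_new)
    yd_new = yd + h * ((1.0 - gamma) * a + gamma * a_new)
    return y_new, yd_new
```

With β = 1/4, γ = 1/2 this reproduces the second order behaviour discussed below.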
We can write equation (1) in the form
then
From equation (2), we have
By substituting the value of from equation (3), we get
Now substituting in equation (1), we have
or, equivalently,
where, n_{1} = f(t_{n+1},y_{n+1}), n_{2} = f(t_{n},y_{n}), n_{3} = f(t_{n−1},y_{n−1}) or,
where:
ORDER CONDITIONS AND P-STABILITY
Order conditions: The local truncation error for the method (4) is given by
Now expanding the terms and
in
Taylor series about t_{n}, we have
By substituting the values of and
in equation (5), we have
The method (4) is consistent if the coefficients of h^{0} and h^{1}
in equation (5) are zero, so that the method is of order
greater than or equal to one. Since the coefficients of h^{0} and h^{1}
are zero, the method (4) is consistent.
For a second order method, the coefficients of h^{0}, h^{1},
h^{2} and h^{3} are zero while the coefficient of h^{4}
is non zero. Thus we have
1 − δ − γ = 0, γ − α − β = 0
and
1/12 − β/2 − (γ − α)/2 ≠ 0.
This implies that
δ = 1 − γ, α = γ − β and β ≠ 1/12.
Thus the necessary and sufficient conditions for second order accurate methods are
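The role of β ≠ 1/12 in the h^{4} coefficient can be checked numerically. Assuming the symmetric two-step form y_{n+1} − 2y_n + y_{n−1} = h²[β f_{n+1} + (1 − 2β) f_n + β f_{n−1}] to which the conditions (6) lead, the local truncation error evaluated on the exact solution of y″ = −y shrinks like h^{4} when β ≠ 1/12, and faster when β = 1/12:

```python
import math

def lte(beta, h, t=0.3):
    """Local truncation error of the symmetric two-step scheme
    y_{n+1} - 2 y_n + y_{n-1} = h^2 [beta f_{n+1} + (1-2 beta) f_n + beta f_{n-1}]
    evaluated on the exact solution y = cos t of y'' = -y (assumed form)."""
    y = math.cos
    def f(s):
        return -math.cos(s)   # y'' = -y on the exact solution
    return (y(t + h) - 2.0 * y(t) + y(t - h)
            - h * h * (beta * f(t + h) + (1.0 - 2.0 * beta) * f(t)
                       + beta * f(t - h)))
```

Halving h divides the error by roughly 2^4 = 16 for β ≠ 1/12, while at β = 1/12 the h^{4} term vanishes.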
P-stability: Lambert and Watson^{[2]} defined P-stability.
To analyse the stability properties of the method, we apply the method (4)
to the scalar test equation
with c real. This yields
or, equivalently,
Equation (7) is a difference equation of the form^{[3]}
where:
The general solution of this difference equation is
where, B_{1} and B_{2} are constants and λ_{1} and λ_{2} are the zeros of the stability polynomial
Making the Routh–Hurwitz transformation
becomes
Thus, the necessary and sufficient conditions for P-stability are
Since τ_{2} = τ_{0}, we find from (7) that
which is also a condition for second order accuracy. This is a necessary condition
for P-stability. When this condition is satisfied, the transformed stability
equation is:
and the necessary and sufficient conditions for P-stability are
where:
From the conditions for second order accuracy (6), we have
and
Hence the necessary and sufficient condition for P-stability, (10), becomes
for all real values of ch. Thus
or, equivalently,
RESULTS
The necessary and sufficient conditions for second order accuracy are
and for P-stability
By using the second order accuracy conditions (6), equation (4)
can be rewritten in the form
which is second order accurate and P-stable if β ≥ 1/4. If β = 1/4, then this is the Trapezium rule.
The method (12) is Newmark’s second order method written as an equivalent two-step method. Method (12) is equivalent to the following Newmark method (γ = 1/2):
which is second order accurate and P-stable if β ≥ 1/4.
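The condition β ≥ 1/4 can be verified by computing the roots of the stability polynomial directly. A sketch, assuming the symmetric two-step form applied to y″ = −c²y, which gives (1 + βH²)λ² − [2 − (1 − 2β)H²]λ + (1 + βH²) = 0 with H = ch:

```python
import cmath

def root_radius(beta, H):
    """Maximum modulus of the roots of the stability polynomial obtained by
    applying the symmetric two-step scheme to y'' = -c^2 y, H = c h
    (assumed form of the polynomial)."""
    a = 1.0 + beta * H * H          # coefficient of lambda^2 (and constant term)
    b = -(2.0 - (1.0 - 2.0 * beta) * H * H)
    disc = cmath.sqrt(b * b - 4.0 * a * a)
    return max(abs((-b + disc) / (2.0 * a)), abs((-b - disc) / (2.0 * a)))
```

For β = 1/4 both roots stay on the unit circle for every H, while for β < 1/4 the root radius exceeds one once H is large enough.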
PHASE PROPERTIES
Phase analysis: When Newmark’s method (4) is applied to the scalar test equation
for real c,v and w, we obtain a recurrence relation of the form
where, the {r_{i}} and {b_{i}} each depend on ch. The general
solution of this equation is:
where, B_{1} and B_{2} are constants and Q_{2}e^{iwnh} is a particular solution of the recurrence relation, with Q_{2} satisfying:
The numerical forced oscillation is in phase with its analytical counterpart
if Q_{2} is also real for all real wh^{[4,5]}. Now we will prove
that Q_{2} is real. We can rewrite (17) in the form:
where:
or
so Q_{2} is real if and only if DA − CB = 0 for all real values of wh.
Then
Thus DA − CB = 0 for all real values of wh if r_{2} = r_{0}, b_{2} = b_{0}. So a symmetric method (17) for which
is certainly in phase.
When Newmark’s method (4) is applied to the scalar test equation
(15), we obtain
where:
which is of the form (17), for which,
Hence, from equation (18), it is in phase, whenever
which is also a necessary condition for P-stability (Eq. 9).
Phase lag: The definition of phase lag is given by Brusa and Nigro^{[6]}.
We can rewrite equation (16), with v = 0 in the form
where:
Substituting
in (19) gives
where, ch = H. Expanding in Taylor series
we have
Let
Then this becomes
On equating coefficients of H^{j}, j =2, 3, 4 to zero, we obtain
By substituting the values of Ψ_{1} and δ_{1} from
(20) in (22) we have
By the order condition (6), α+β+δ = 1. Thus
From equation (23), 2εξ = 0, implying that ξ = 0 since ε = i. From (24),
implying that
Thus
Thus the phase lag is
If β = 1/12 (in which case the method is not P-stable), then we take
Substituting in (21) yields
On equating the coefficients of H^{j}, j = 5, 6 to zero, we have
then
Hence the phase lag is
IMPLEMENTATION
Iterative scheme: Consider the P-stable, in-phase, second order one-step
version of the Newmark method given by (13) and (14)
for any
where,
When this method is applied to a nonlinear differential system
a nonlinear algebraic system must be solved at each step. This may be solved
by using the Newton iteration scheme. Thus on defining
the Newton iteration scheme is
where:
and
is the Jacobian matrix of f with respect to y. Suppose that J is an approximation
for the Jacobian matrix, then
Thus the Newton iteration scheme becomes
where,
The formation of y’_{n} is discussed below, after the one-step algorithm.
Next, we consider the P-stable, in-phase, second order two-step version of the
Newmark method given by (12)
for any
where
When this method is applied to a nonlinear differential system (25), the Newton
iteration scheme is
where,
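A scalar sketch of this modified Newton iteration with a frozen Jacobian approximation J. The implicit relation is written as y = psi + h²β f(t_{n+1}, y), where psi collects the known terms of the step; the names psi and pmax are illustrative, not the paper's notation:

```python
def modified_newton(f, dfdy, t1, psi, h, beta, y0, tol=1e-12, pmax=25):
    """Solve y = psi + h^2 * beta * f(t1, y) by modified Newton iteration.

    dfdy is a (possibly out-of-date) Jacobian approximation J, held fixed,
    so the scalar iteration "matrix" 1 - h^2 * beta * J is formed once.
    """
    m = 1.0 - h * h * beta * dfdy    # analogue of the factored I - h^2 beta J
    y = y0
    for _ in range(pmax):
        g = y - psi - h * h * beta * f(t1, y)   # residual of the implicit relation
        dy = -g / m
        y += dy
        if abs(dy) < tol:
            break
    return y
```

In the vector case the division by m becomes a back-substitution with the stored triangular factors of I − h²βJ, which is the point of freezing J.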
Local error estimation and predictors: To estimate the local error, we will consider an approach based on comparing the predicted and corrected values. As the corrector is a second order Newmark’s method, the order of the predictor must be one to enable us to compute a local error estimate. We use a predictor based on information available on the current step. We can use the predictor
which is just Euler’s method.
We can also use the predictor based on the Lagrange interpolant which interpolates
to
we obtain
(this predictor can be used only if two back values are known). For the two-step version of the Newmark method, we will use this predictor.
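The two predictors can be sketched as follows; the variable-stepsize form of (29) is an assumption here (linear extrapolation of the Lagrange interpolant through the two back values):

```python
def euler_predictor(y_n, yd_n, h):
    """First order predictor (28): Euler's method."""
    return y_n + h * yd_n

def lagrange_predictor(y_nm1, y_n, h_old, h_new):
    """First order predictor (29): extrapolation of the degree-one Lagrange
    interpolant through (t_{n-1}, y_{n-1}) and (t_n, y_n) to t_n + h_new
    (assumed form under a variable stepsize)."""
    return y_n + (h_new / h_old) * (y_n - y_nm1)
```

Both are exact on linear solutions, which is all that is needed for a first order predictor.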
Suppose u(t) is the local solution, y^{(0)}_{n+1} is the predicted
value of y_{n+1} obtained from (28) or (29) and y^{(Q)}_{n+1}
is the corrected value obtained from the iteration (27). Then, provided the
local error in y^{(0)}_{n+1} is O(h^{2}), we have
where, A is independent of h. Subtracting these equations, we have
and hence the local error in y^{(0)}_{n+1} is given by
Thus the local error may be estimated from
One step of the algorithm: Suppose h is the stepsize to be used for
the next step. If we use the one-step version of Newmark’s method, we need
which are available from the previous computation. For the two-step version
of Newmark’s method, we need which are available from previous computations.
Also an approximation J to ∂f/∂y is known. Suppose that the triangular
factors of I − βh^{2}J are available and that a predicted value y^{(0)}_{n+1}
has already been computed from (28) or (29). The iteration (27) for the Newmark
method can be implemented as shown in the following algorithm.
Algorithm: (Second order P-stable Newmark’s method)
until convergence (Q iteration, say)
(a) When we use the one-step version of Newmark’s method, we take
Before proceeding to the next step, we set
We wish to calculate without
evaluating
To do so, we use (13) to compute
and then (14) to compute
Thus
and
(b) When we use the two-step version of Newmark’s method, we take
Before proceeding to the next step, we set
To calculate
without evaluating
we use (12). Thus
To test for convergence in the algorithm, we check the condition
If this absolute error test is satisfied then the iteration has converged. We
take ε = c*Tol, where, Tol is the local error tolerance supplied by the
user and c is an appropriate constant. We choose c = 1.0. If
and p≥2, we form an estimate of the rate of convergence,
To ensure that convergence is not too slow, we assume that the iteration fails to converge whenever Rate > 0.9. If Rate ≤ 0.9 then we perform another step of the iteration and check the above condition. Continue in this way until either the absolute error test is satisfied or the number of iterations exceeds some upper limit Pmax (to obtain our results, we choose Pmax = 5). Thus if p > Pmax the iteration is deemed to have failed to converge. If the iteration diverges, some combination of stepsize reduction and reassembly and refactorisation of the iteration matrix (I − βh^{2}J), with or without reevaluation of the Jacobian matrix J, should be used. For the numerical results obtained below, we have tested two strategies.
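The convergence control described above can be sketched as follows; the list-of-corrections interface and return labels are illustrative, and the rate estimate is taken as (‖Δ^{(p)}‖/‖Δ^{(1)}‖)^{1/(p−1)}, an assumed reading of the text:

```python
def check_iteration(deltas, eps, pmax=5, rate_max=0.9):
    """Decide the fate of the Newton iteration.

    deltas[p-1] is the norm of the correction at iteration p; eps = c * Tol.
    Returns "converged", "diverged", or "continue"."""
    p = len(deltas)
    if deltas[-1] < eps:                # absolute error test satisfied
        return "converged"
    if p >= 2:
        rate = (deltas[-1] / deltas[0]) ** (1.0 / (p - 1))
        if rate > rate_max:             # convergence too slow: treat as failure
            return "diverged"
    if p > pmax:                        # iteration limit exceeded
        return "diverged"
    return "continue"
```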
Strategy 1: If the iteration diverges then reevaluate the Jacobian (if it has not already been evaluated at this point), halve the stepsize, refactorise the iteration matrix and repeat the step. If the iteration diverges again, continue halving the stepsize and refactorising the iteration matrix until it converges.
If the iteration converges but the step has to be rejected because the local error test is not satisfied, then repeat the step with a smaller stepsize. In this case reevaluate the Jacobian (if it has not already been evaluated at this point) and refactorise the iteration matrix. If the local error test is satisfied and the stepsize is changed, then again reevaluate the Jacobian (since this is a new point, the Jacobian could not have been evaluated here before) and refactorise the iteration matrix.
Strategy 2: The Jacobian is reevaluated the first time an iteration diverges and also the iteration matrix is refactorised. Then we repeat the integration step. If divergence occurs again then we halve the stepsize and refactorise the iteration matrix, without reevaluating the Jacobian and repeat the step. We continue halving the stepsize and refactorising the iteration matrix until the iteration converges.
If the iteration converges but the step has to be rejected because the local
error test is not satisfied, that is Tol,
the step is repeated with a smaller stepsize and the iteration matrix is refactorised
without reevaluating the Jacobian. If the local error test
Tol is satisfied, we take but for the next step, if the stepsize is changed, then
we refactorise the iteration matrix without reevaluating the Jacobian; thus
we refactorise the iteration matrix whenever the stepsize is changed.
Changing the stepsize: The stepsize to be used for the next (or repeated)
step may be calculated from the local error estimate and the local error tolerance.
A local error estimate for the second order Newmark’s method may be obtained
from the relation (31):
where, is
the corrected value and is
the first order predictor. Then, we have from (30) that
This implies that
If we choose the stepsize so
that the error estimate on the next (or repeated) step is expected to equal
the user-supplied local error tolerance Tol, we find that
or,
This implies that
In practice, since the above analysis only holds asymptotically as h→0, we take
where, ρ is a safety factor, whose purpose is to avoid failed steps (ρ
is often taken to be
where p is the order of the predictor, or sometimes 0.8 or 0.9).
To avoid large fluctuations in the stepsize caused by local changes in the
error estimate, we put a restriction on the amount by which the stepsize may
be increased or decreased. We do not allow the stepsize to decrease by more
than a factor ρ_{1} or increase by more than a factor ρ_{3}. Also
to avoid the extra function evaluations, Jacobian evaluation and matrix factorisations
involved in changing the stepsize, we do not increase the stepsize at all unless
it can be increased by a factor of at least ρ_{2}, where ρ_{2}<ρ_{3}.
For the numerical results, we have considered ρ = 2^{−0.5} and
ρ_{1} = 0.2, ρ_{2} = 2.0 and ρ_{3} = 5.0.
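The stepsize selection with the safety factor and the limiters ρ₁, ρ₂, ρ₃ can be sketched as follows (exponent 1/(p + 1) with predictor order p = 1, as in the analysis above; the suppression of small increases is an assumed reading of the text):

```python
def new_stepsize(h, est, tol, rho=2.0 ** -0.5,
                 rho1=0.2, rho2=2.0, rho3=5.0, p=1):
    """Stepsize for the next (or repeated) step.

    h_new = rho * h * (tol / est)**(1/(p+1)), limited to [rho1*h, rho3*h];
    increases by less than a factor rho2 are suppressed to avoid the extra
    Jacobian work and refactorisations a stepsize change would incur."""
    h_new = rho * h * (tol / est) ** (1.0 / (p + 1))
    h_new = max(rho1 * h, min(rho3 * h, h_new))
    if h < h_new < rho2 * h:
        h_new = h                      # increase too small to be worthwhile
    return h_new
```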
When we apply the two-step version of Newmark’s method, suppose y_{n+1}
is accepted and a new stepsize
is predicted for the next step. We need to find the approximation to We
can take
and evaluate the function
is the interpolating polynomial of degree one satisfying the conditions
We find that
We can take
where P_{2,n+1}(t) is the interpolating polynomial of degree two satisfying
the conditions and
and either (a) or
(b)
we find that
and
We can take
is the interpolating polynomial of degree three satisfying the conditions
and
We find that
and
We can use any one of the interpolating polynomials discussed above to find the
approximation to
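For the degree-one choice, the back value at t_{n+1} − h_new can be sketched as follows (an assumed reading of (33), since the formula itself is not reproduced here):

```python
def back_value_linear(y_n, y_np1, h, h_new):
    """Degree-one interpolant through (t_n, y_n) and (t_{n+1}, y_{n+1})
    evaluated at t_{n+1} - h_new, giving the back value the two-step
    scheme needs after a stepsize change (assumed form of (33))."""
    theta = h_new / h                  # fraction of the old step to go back
    return theta * y_n + (1.0 - theta) * y_np1
```

The higher-degree choices replace this with the quadratic (34)–(35) or the cubic Hermite interpolant (38)–(39), at the cost of extra stored values.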
A similar approach can be adopted when y_{n+1} has to be rejected (because
of error test failure) and the step repeated with a new stepsize h and also
when the iteration diverges and the stepsize is halved.
Starting technique and numerical results: Since the two-step version of Newmark’s method requires the starting values y_{0} and y_{1}, given the initial conditions y_{0} and ý_{0} we need to calculate the starting value y_{1} before applying the two-step version of Newmark’s method and form the error estimate. If the error test is satisfied and y_{1} is accepted, then we proceed with the two-step version of Newmark’s method. For the one-step version of Newmark’s method we use the predictor given by (28), while for the two-step version we use (29).
On a change of stepsize, when we apply the two-step version of Newmark’s method, we have considered the different choices of interpolating polynomial discussed above. For the numerical results, we have used the following notation for these choices.
NM1ST: One-step version of the Newmark method.
NM2P1: Two-step version of the Newmark method, applying the interpolating
polynomial of degree one given by (33) to approximate
NM2P2: Two-step version of the Newmark method, applying the interpolating
polynomial of degree two given by (34) and (35) to approximate
NM2P3: Two-step version of the Newmark method, applying the cubic Hermite
interpolant given by (38) and (39) to approximate
For the numerical results, we have solved the following initial value problems
Example 1: The scalar nonlinear problem
The next two problems both use the same differential equation but with different
initial conditions. These nonlinear differential equations are
Example 2: y_{1}(0) = 1, ý_{1}(0) = 0, y_{2}(0)
= 10^{4}, ý_{2}(0) = 0.
Example 3: y_{1}(0) = 1, ý_{1}(0) = 0, y_{2}(0)
= 10^{8}, ý_{2}(0) = 0.
The above problems have been solved for t ∈ [0,6]. The error at the end
point is obtained by comparing the computed solution with the solution obtained
by using a fixed step code with a small stepsize for example 1 and for the first
equation of examples 2 and 3. For the second equation of examples 2 and 3 we
have used the exact solution. We denote the error at the end point by MAXERR,
where for the scalar equation it is the error at t = 6 and for the system
it is the maximum (∞-norm) error at t = 6.
For these examples the stepsize is chosen initially to be h = 1.0. The results
are obtained for the case where the maximum number of iterations permitted, Pmax,
is 5.
Table 1: Methods are compared for example 1 with Pmax = 5, Tol = 10^{−2} and ε = 10^{−2}
Table 2: Methods are compared for example 1 with Pmax = 5, Tol = 10^{−4} and ε = 10^{−4}
Table 3: Methods are compared for example 2 with Pmax = 5, Tol = 10^{−2} and ε = 10^{−2}
Table 4: Methods are compared for example 2 with Pmax = 5, Tol = 10^{−4} and ε = 10^{−4}
Table 5: Methods are compared for example 3 with Pmax = 5, Tol = 10^{−2} and ε = 10^{−2}
Table 6: Methods are compared for example 3 with Pmax = 5, Tol = 10^{−2} and ε = 10^{−2}
Table 7: Methods are compared for example 1 with Pmax = 5, Tol = 10^{−2} and ε = 10^{−2}
Table 8: Methods are compared for example 2 with Pmax = 5, Tol = 10^{−2} and ε = 10^{−2}
Table 9: Methods are compared for example 3 with Pmax = 5, Tol = 10^{−2} and ε = 10^{−2}
In the tables, we give the results of solving these problems with Tol = 10^{−2} and 10^{−4}.
In each case, we present the following statistics:
• Number of evaluations of the differential equation right hand side f, FCN;
• Number of evaluations of the Jacobian ∂f/∂y, JC;
• Number of iterations overall, NIT;
• Number of iterations in steps where the iteration converged, NST;
• Number of steps overall, STP;
• Number of successful steps to complete the integration, SST;
• Number of steps where the stepsize is changed, CS;
• Number of failed steps, FS;
• Number of LU factorisations of the iteration matrix, FT;
• Number of function evaluations using the interpolating polynomial of degree one given by (30) to find back values on change of stepsize, FP1.
The results are presented in Tables 1-6 for strategy 1 and in Tables 7-9 for strategy 2.