INTRODUCTION
We are concerned with the approximate numerical integration of the special second order initial value problem
where y, f, η and η’ are vectors of dimension m.
A second order system can be solved by converting it to a first order system and then applying a numerical method for first order systems. If we let x_{1}(t) = y(t), x_{2}(t) = y’(t), x_{1}(a) = η, x_{2}(a) = η’ and introduce the notation x = [x_{1},x_{2}]^{T}, g = [x_{2},f]^{T}, μ = [η, η’]^{T}, we have the first order system
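This reduction is mechanical; a minimal sketch (a hypothetical helper, not part of the paper's code) is:

```python
import numpy as np

def second_to_first_order(f):
    """Given f(t, y) from y'' = f(t, y), return g(t, x) for the
    equivalent first order system x' = g(t, x), where x = [x1, x2]
    with x1 = y and x2 = y'."""
    def g(t, x):
        m = x.size // 2
        x1, x2 = x[:m], x[m:]
        return np.concatenate([x2, f(t, x1)])
    return g

# Exercise g on y'' = -y, y(0) = 1, y'(0) = 0 (solution y = cos t),
# using forward Euler purely to illustrate the reduction.
g = second_to_first_order(lambda t, y: -y)
x, t, h = np.array([1.0, 0.0]), 0.0, 0.001
for _ in range(1000):
    x, t = x + h*g(t, x), t + h
print(x[0])   # close to cos(1)
```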
The s-stage Runge-Kutta method is expressed in the form
where the d_{r}’s are constants and the terms X_{r} on the right hand side of (3) are given by
We consider the s-stage (s = 2, 3, 4) implicit Runge-Kutta methods of order 2s proposed by Butcher^{[1]}. These methods are A-stable^{[2]} and are given in tableau form by
where:
Implicit Runge-Kutta methods are generally expensive to implement because a system of nonlinear equations must be solved at each step whenever F(X) is nonlinear. For example, if we have a system of n ODEs and the modified Newton method is used to solve the resulting nonlinear equations, then this involves the solution of a system of sn linear equations at each step of the iteration. The local error estimate may be obtained from the equation
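To make this cost concrete, the following sketch performs one step of a two-stage method of this family by modified Newton iteration, using the widely tabulated two-stage Gauss coefficients (assumed here to coincide with tableau (6)); the sn×sn matrix I - h(A ⊗ J) is formed and factorized once per step.

```python
import numpy as np

# Two-stage Gauss coefficients (order 4), assumed to match tableau (6).
r3 = np.sqrt(3.0)
A = np.array([[0.25, 0.25 - r3/6], [0.25 + r3/6, 0.25]])
b = np.array([0.5, 0.5])

def step(F, J, x, h, tol=1e-12, itmax=20):
    """One step of the two-stage method via modified Newton iteration.
    The sn x sn matrix I - h*(A kron J) is factorized once per step."""
    n, s = x.size, 2
    Jx = J(x)                                  # Jacobian held fixed
    M = np.eye(s*n) - h*np.kron(A, Jx)
    X = np.tile(x, s)                          # stage predictor X_i = x_n
    for _ in range(itmax):
        Fs = np.concatenate([F(X[i*n:(i+1)*n]) for i in range(s)])
        G = X - np.tile(x, s) - h*np.kron(A, np.eye(n)) @ Fs
        d = np.linalg.solve(M, -G)             # sn linear equations
        X += d
        if np.linalg.norm(d) < tol:
            break
    Fs = np.concatenate([F(X[i*n:(i+1)*n]) for i in range(s)])
    return x + h*np.kron(b, np.eye(n)) @ Fs

# Check on x' = -x (solution e^{-t}); the error is small since
# the method is fourth order.
F = lambda v: -v
Jf = lambda v: np.array([[-1.0]])
x = np.array([1.0])
for _ in range(10):
    x = step(F, Jf, x, 0.1)
print(abs(x[0] - np.exp(-1.0)))
```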
where y_{n+1}^{(0)} is the predicted value of y_{n+1} and y_{n+1}^{(Q)} is the corrected value obtained on convergence of the iteration scheme for the implicit Runge-Kutta method. If this local error is acceptably small, we may proceed. If the error test is not satisfied, we reduce the stepsize by applying the stepsize strategy and start again at the beginning. This approach for fourth order methods was also proposed by Thomas^{[3]}.
To apply the implicit RungeKutta method to the second order system (1), first we must convert the second order system to an equivalent first order system. However, to obtain an efficient algorithm, we exploit the structure of the resulting first order system.
On applying the implicit Runge-Kutta method to the first order system, there results a system of nonlinear algebraic equations. For the fourth order two-stage method, the resulting iteration matrix has the form and is of order s. We consider an iteration scheme in which this iteration matrix is approximated by a perfect square, so that at most one matrix of order n must be factorized at each step. Unfortunately, it has not so far proved possible to adopt a similar approach for the s-stage implicit Runge-Kutta methods of order 2s (s = 3 and 4). Moreover, the perfect square approach for the fourth order method can prove very unreliable^{[4]}. Instead, we use another iteration scheme, introduced by Cooper and Butcher^{[5]}. We consider in detail the application of this scheme to the s-stage implicit Runge-Kutta methods of order 2s, s = 2, 3, 4.
PERFECT SQUARE ITERATION SCHEME
Fourth order A-stable implicit Runge-Kutta method: The fourth order two-stage implicit Runge-Kutta method for first order systems is given by (6). We discussed how, for computational convenience, we can write the nonautonomous system (2) in the form of an autonomous system x’ = F(x). Then the fourth order method (6) has the form:
Eliminating F(X_{2}) from (10) and (11) gives:
Let,
Applying the modified Newton iteration gives
where, from equations (12) and (13),
and J is an approximation for the Jacobian matrix of F with respect to x. Thus
the modified Newton iteration yields
and
Since matrix products are expensive, especially for large systems, since any sparsity in J will be lost in J^{2}, and since any ill-conditioning in J will be magnified in its powers, we wish to avoid the calculation of J^{2}.
To achieve this we may write the iteration matrix as a product of two linear
(complex) factors, giving
where,
is the complex conjugate of r_{c}.
Gladwell and Thomas^{[4]} show how to solve this system efficiently using complex arithmetic. We do not consider this approach further; instead we consider an alternative approach, also discussed by Gladwell and Thomas^{[4]}, in which the iteration matrix is approximated by a perfect square:
where r_{R} is a real number. (The choice of r_{R} is discussed later in this section.) This can be solved as
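The point of the perfect square form is that the squared matrix never needs to be assembled: the system is solved by two successive solves with the same matrix. A small numpy sketch (np.linalg.solve standing in for a reused LU factorization):

```python
import numpy as np

def solve_perfect_square(J, h, rR, rhs):
    """Solve (I - rR*h*J)^2 d = rhs by two successive solves with the
    same matrix W = I - rR*h*J, so that J^2 is never formed.  In a
    production code the LU factors of W would be computed once and
    reused for both solves."""
    W = np.eye(J.shape[0]) - rR*h*J
    y = np.linalg.solve(W, rhs)     # first solve:  W y = rhs
    return np.linalg.solve(W, y)    # second solve: W d = y

# Consistency check against applying the square explicitly.
rng = np.random.default_rng(0)
J = rng.standard_normal((5, 5))
rhs = rng.standard_normal(5)
h, rR = 0.01, 0.25                  # rR = 1/4, the choice used in the text
W = np.eye(5) - rR*h*J
d = solve_perfect_square(J, h, rR, rhs)
print(np.allclose(W @ W @ d, rhs))  # True
```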
Now for the second order system (1),
Also, suppose that
so that (16) can be written in the form
while (17) gives
Premultiplying (18) by gives
so that
and
although, as we shall see, it is not necessary to compute Z_{2} explicitly.
Similarly, premultiplying (19) by gives
and so
From the first of equations (18), we have
Thus, after computing Z_{1} from (20), we must solve
Finally from (19), we have
Thus, one step of the iteration using the perfect square approximation is
We have to solve two systems of n linear equations at each step of the iteration. However, the matrix is the same for both systems so we do not need to perform more than one matrix factorization per step.
Now we need explicit forms for e_{1} and e_{2}. Note that
where,
On defining, and we find that
and
Now from (21), after a little manipulation, we have
Finally, consider
If
Hence
and
To avoid the function evaluations f(w_{1}) and f(v_{1}), we
proceed as follows. From equation (12), we have,
and
Thus
Now from equation (10), we have
Hence
and
By substituting the value of v_{2} from equation (27)
into (28), after a little manipulation, we have
Also, by substituting the value of v_{2} into equation
(29), we have
Substituting the value of v_{2} from equation (27)
into equation (25) gives, after a little manipulation,
Finally, on substituting for f(w_{1}) and f(v_{1}) from equations
(30) and (31) into (26), we obtain
Thus, on convergence of the iteration (after Q iterations, say), we set
As we have seen, we approximate the iteration matrix by a perfect square of the form (I - r_{R}hJ)^{2}, where r_{R} is a real number. The choice gives a discrepancy only in the linear term of the approximation, while the choice r_{R} = 1/4 gives a discrepancy only in the quadratic term. Gladwell and Thomas^{[4]} show that the latter choice gives desirable convergence properties for small h, as may be expected. Another possibility, considered by Thomas^{[3]}, is to take r_{R} = [β_{1}(u_{+}+u_{-})]^{1/4}, in which case the same iteration matrix can be used on transferring to the fourth order direct hybrid method, provided it is not necessary to change the stepsize. However, Thomas found that, in practice, this choice requires the use of a smaller initial stepsize and hence more function evaluations are needed overall. For this reason, we do not consider this choice further. Indeed, in presenting our test results, we only use the choice r_{R} = 1/4.
To obtain a local error estimate we use the approach in which the estimate
Le_{n+1} of the local error in the predicted value y^{(0)}_{n+1}
is obtained from equation (9)
Since the RungeKutta method is fourth order accurate, we require y^{(0)}_{n+1} to be a third order approximation to y(t_{n+1}) so that the estimate will be asymptotically correct.
The predictor we propose to use for the algorithm is based on Hermite interpolation. The Hermite interpolant of degree three to, say, may be evaluated at t = t_{n+1} to give
(This formula is derived in Appendix 4 by Khiyal^{[6]}.) The use of (34) implies that, initially, we must carry out two steps before attempting to form the error estimate. To start the iteration in the algorithm, we must predict values of w_{1} and hw_{2}. We use the predictor proposed by Gladwell and Thomas^{[4]}, based on the Hermite interpolant.
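Formula (34) itself is not reproduced above, but the underlying construction is the standard one: fit a cubic Hermite interpolant to (y, y’) at t_{n-1} and t_{n}, then evaluate it beyond the data at t_{n+1}. A sketch with a hypothetical helper, for a constant stepsize h (θ = 2 is the extrapolation point):

```python
def hermite_extrapolate(y0, dy0, y1, dy1, h, theta):
    """Evaluate the cubic Hermite interpolant matching (y, y') at
    t_{n-1} (theta = 0) and t_n (theta = 1), at t = t_{n-1} + theta*h.
    theta = 2 gives the predicted value at t_{n+1} for a constant step."""
    h00 = 2*theta**3 - 3*theta**2 + 1    # standard Hermite basis
    h10 = theta**3 - 2*theta**2 + theta
    h01 = -2*theta**3 + 3*theta**2
    h11 = theta**3 - theta**2
    return h00*y0 + h*h10*dy0 + h01*y1 + h*h11*dy1

# The extrapolant reproduces any cubic exactly: test on y = t^3.
h = 0.5
t0, t1, t2 = 1.0, 1.5, 2.0
pred = hermite_extrapolate(t0**3, 3*t0**2, t1**3, 3*t1**2, h, 2.0)
print(abs(pred - t2**3))   # zero up to rounding
```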
We take (these values can be obtained from equations (A4.9) and (A4.10) of Appendix 4 of Khiyal^{[6]}, respectively). They cannot be used on the first step and so, on this step, we use:
After carrying out two steps with the implicit Runge-Kutta method, we form an estimate of the local error. If the local error test is not satisfied, we reduce the stepsize and start again at the beginning. If the error test is satisfied, then the estimated value of the stepsize obtained from
can be used for the next step. In equation (35), R is the order of the predictor used. Suppose that h is the current stepsize. If ,
we continue with the current stepsize h, and if
we increase the stepsize to
However, if
we reject the estimate and decrease the stepsize to , but not less than ρ_{1}h.
This strategy is the same as that discussed by Khiyal^{[6]}. Note that the factors ρ_{1}, ρ_{2} and ρ_{3} are taken to be 0.2, 2.0 and 5.0, respectively.
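The inequalities governing this strategy were lost from the source, so the acceptance thresholds in the sketch below are assumptions; only the factors ρ_{1} = 0.2, ρ_{2} = 2.0 and ρ_{3} = 5.0 are taken from the text.

```python
def new_stepsize(h, h_est, rho1=0.2, rho2=2.0, rho3=5.0):
    """Hypothetical reconstruction of the stepsize policy: keep h when
    the estimate is only modestly larger, increase (by the fixed factor
    rho2) only when the estimate exceeds rho3*h, and on a decrease never
    go below rho1*h."""
    if h_est >= rho3 * h:
        return rho2 * h              # cautious increase
    if h_est >= h:
        return h                     # continue with the current stepsize
    return max(h_est, rho1 * h)      # decrease, bounded below by rho1*h
```

With these defaults, an estimate ten times the current stepsize yields a doubling, while a collapsing estimate is clipped at one fifth of the current stepsize.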
COOPER AND BUTCHER ITERATION SCHEME
s-stage, order 2s implicit Runge-Kutta methods: We have obtained a perfect square iteration scheme for the two-stage fourth order A-stable Runge-Kutta method applied to second order systems. However, we were not able to find a corresponding perfect cube iteration scheme for the three-stage sixth order implicit Runge-Kutta method. Moreover, the perfect square iteration scheme for the fourth order method is not always reliable. For these reasons, in this section we consider an iteration scheme proposed by Cooper and Butcher^{[5]} which sacrifices the superlinear convergence of the modified Newton method for reduced linear algebra costs. This iteration scheme requires the solution of s systems of n linear equations at each step of the iteration, and the coefficient matrix is the same for each system.
We have shown that, for computational convenience, the nonautonomous system (2) can be written in the form of an autonomous system, and then we write the Runge-Kutta method in autonomous form. Suppose the first order autonomous initial value problem is
Then the s-stage Runge-Kutta method is given by
where X_{1}, X_{2}, ..., X_{s} satisfy the sn equations
We may simplify our notation by using direct sums and direct products. Let
and
be column vectors and let
Then equation (38) may be represented by
where (A ⊗ I) is the direct product of A with the n×n identity matrix and, in general,
where A = (a_{ij}). Equation (39) may be solved by the modified Newton method but, because the Jacobian of F is expensive to evaluate, it is computed only occasionally. Suppose that is an approximation for this Jacobian. Then the modified Newton iteration scheme is
where p = 1, 2, 3, … Note that, in practice, the Jacobian is kept constant for several time steps. At each iteration, we need to solve a system of linear equations.
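The direct product notation is what keeps this matrix manageable; in particular, the mixed-product identity (A ⊗ I)(I ⊗ J) = A ⊗ J means the modified Newton matrix can be written compactly as I - h(A ⊗ J). A quick numpy check of the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
s, n = 3, 4
A = rng.standard_normal((s, s))   # Runge-Kutta coefficient matrix
J = rng.standard_normal((n, n))   # Jacobian approximation

# Mixed-product property: (A kron I_n)(I_s kron J) = A kron J.
lhs = np.kron(A, np.eye(n)) @ np.kron(np.eye(s), J)
print(np.allclose(lhs, np.kron(A, J)))   # True
```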
Let B and S be real invertible matrices, let L be a strictly lower triangular matrix and let r be a real constant. Cooper and Butcher^{[5]} propose the following iteration scheme for solving (39):
where,
and is an approximation for the Jacobian matrix of F with respect to x. Note that is block diagonal and that (L ⊗ I) is strictly lower triangular. This means that each step of the iteration requires the solution of s sets of n linear equations and, when the solution of the jth set of equations is being found, the solutions of the previous j-1 sets of equations may be used. For large s, the linear algebra costs of the Cooper and Butcher iteration scheme are much lower per iteration than those of the modified Newton method, for which a system of sn linear equations has to be solved at each step of the iteration. There are three matrices L, B and S and the parameter r which may be chosen in many ways; w is a relaxation parameter. Cooper and Butcher^{[5]} choose w to give the best rate of convergence for the iteration scheme when the method is applied to the scalar linear test problem x’ = cx, where c is real and negative.
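The matrices in (40) are not reproduced above, so the sketch below shows only the structural point: a block lower triangular system whose diagonal blocks are all I - rhJ can be solved as s systems of n equations, each reusing the same factorization together with the previous j-1 block solutions. The particular matrix assembled here is an illustration of that structure, not Cooper and Butcher's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
s, n, h, r = 3, 4, 0.1, 0.3
J = rng.standard_normal((n, n))
L = np.tril(rng.standard_normal((s, s)), k=-1)   # strictly lower triangular
R = rng.standard_normal(s*n)

# Block lower triangular sweep: every diagonal block is the SAME matrix
# W = I_n - r*h*J, so one factorization of W serves all s block solves.
W = np.eye(n) - r*h*J
X = np.zeros(s*n)
for j in range(s):
    rhs = R[j*n:(j+1)*n].copy()
    for k in range(j):                    # reuse the previous j-1 solutions
        rhs += h*L[j, k]*(J @ X[k*n:(k+1)*n])
    X[j*n:(j+1)*n] = np.linalg.solve(W, rhs)

# Verify against the assembled sn x sn block lower triangular matrix.
M = np.eye(s*n) - r*h*np.kron(np.eye(s), J) - h*np.kron(L, J)
print(np.allclose(M @ X, R))   # True
```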
For each sstage method of order 2s there is a matrix S such that:
a real block diagonal matrix. Here M is an integer defined by
The submatrices
are chosen to have the form
except that, when s is odd, .
Also have
the same block diagonal form. When s is odd, L_{M} = [0] and B_{M}
= [β].
In the introduction, the s-stage implicit Runge-Kutta methods of order 2s (s = 2, 3, 4) were given, in tableau form, by (6), (7) and (8), respectively. We take the matrices L and B to be the same as those given by Cooper and Butcher^{[5]}. That is, we take
When s = 2 the method of order 2s has the matrix of coefficients
We may take S = I as suggested by Cooper and Butcher^{[5]}. Then
Following Cooper and Butcher, we take .
The implementation of these methods is discussed later.
When s = 3 the method of order 2s has the matrix of coefficients:
The matrix S is chosen so that
The elements of may be obtained by noting that
and
Then we have
and
By numerical evaluation these give
The matrix S may be obtained by noting that its columns are a linearly independent
set of eigenvectors of the matrix .
These eigenvectors are obtained by numerical calculation. Since s = 3 is odd, we have L_{2} = [0] and B_{2} = [β], where
These values are given by Cooper and Butcher^{[5]}. Again we take
when we consider the implementation of this method later in this section.
When s = 4 the method of order 2s has the matrix of coefficients:
where:
The matrix S is chosen so that
The elements of the matrix
may be obtained by
noting that where:
and
Thus we have
By numerical evaluation, we obtain
Then the matrix S is obtained by noting that its first two columns are a linearly
independent set of eigenvectors of and
the third and the fourth columns are a linearly independent set of eigenvectors
of .
Again these eigenvectors are obtained by numerical calculation. As suggested
by Cooper and Butcher, we take
to obtain a better rate of convergence.
Implementation aspects: Now we consider the implementation of the s-stage methods with this iteration scheme for second order systems of the form (1). (When s = 2, the implementation of the Cooper and Butcher iteration scheme was discussed by Gladwell and Thomas^{[4]}, who gave an algorithm.) Note that, in (40),
for the first order system (36). Then from (40), we have
Let
where
and
Then equation (43) becomes
where l_{i,i-1} = 0 when i is odd. For a second order system, we have
where J is an approximation for the Jacobian of f with respect to y, and (44) can be written in the form
be written in the form
where
thus,
Now premultiplying (46) by
we obtain
From the first of these equations, we have
Now from the first of (46), we have
and so we may set
Next we calculate .
First consider the residual given
by (42)
Written in component form, this becomes
or, equivalently,
Now making the substitutions
R = 1, 2, 3, ..., s. Then, from (49), we get
Substituting the values of from
(50)
into (45), we have
where l_{i,i-1} = 0 when i is odd. These equations may be rewritten as
and
Finally we calculate . Consider equation (41):
Suppose S has the coefficients {S_{ij}}, i, j = 1,2 , ..., s. Then
we have
Thus,
By substituting R = 1,2,…,s, we obtain
Hence one step of the iteration is
where ,
are given by (51) and (52), respectively.
On convergence of the iteration, y_{n+1} and y’_{n+1} are obtained by substituting R = 1, 2, …, s into (46), giving
Note that, to calculate y’_{n+1}, we need to evaluate the functions f(w_{R}), R = 1, 2, ..., s. We would like to avoid having to evaluate these functions. This and other computational aspects are discussed separately below for each of the cases s = 2, 3, 4.
Case s = 2: Two-stage fourth order method: The two-stage fourth order implicit Runge-Kutta method is given by (6). For this method, equations (55) become
Now we form f(w_{1}) and f(w_{2}) from (38). From equation
(38) with R = 1, we have
or, equivalently,
Similarly from equation (38) with R = 2, we obtain
By subtracting (60) from (62), we obtain
and substituting into (58), gives
Thus after convergence of the iteration (after Q iterations, say), we form
To start the iteration, we must predict the values of w_{1}, w_{2},w’_{1}
and w’_{2}. We predict w_{1}^{(0)} and w_{2}^{(0)}
by using the interpolating polynomial Q_{3,n} (t) of Khiyal^{[6]}.
That is, we take . This predictor cannot be used on the first step and so, on this step, we propose to use
Having predicted w_{1}^{(0)} and w_{2}^{(0)}, we may use these values in (59) and (61) to predict w’_{1}^{(0)} and w’_{2}^{(0)}. Now, by eliminating w’_{1} from (59) and (61), we get w’_{2}, while by eliminating w’_{2}, we obtain w’_{1}. This gives
In our codes, we use the same convergence strategy and the same action on divergence of the iteration as discussed for the fourth order implicit Runge-Kutta method with the perfect square iteration scheme. To form the error estimate we use the formula given by (9), while the predictor is given by (34). Note that the cost of the Cooper and Butcher iteration scheme is the same as the cost of the perfect square iteration scheme (assuming both schemes require the same number of iterations), because we have to solve two systems of n linear equations at each step of the iteration for both schemes. Moreover, the matrix is the same for both systems in both schemes, so we do not need to perform more than one matrix factorization per step.
Case s = 3: Three-stage sixth order method: The three-stage sixth order implicit Runge-Kutta method is given by (7). For this method, equations (55) and (56) become
Now we form f(w_{R}), R = 1,2,3 from equation (38).
By substituting the coefficients of method (7) in equation (38)
for R = 1,2,3 , we obtain
To find f(w_{1}) we eliminate f(w_{2}) and f(w_{3})
from the set of three equations (70), (72)
and (74). Similarly we can find f(w_{2}) or f(w_{3})
by eliminating the other two values from the same set of equations. After a
little manipulation, we obtain
Finally we substitute the values of hf(w_{1}), hf(w_{2}) and
hf(w_{3}) from equations (75), (76)
and (77) into (68) and simplify to give
Thus after convergence of the iteration (after Q iterations, say), we form
To start the iteration, we must predict the values of w_{1}, w_{2}, w_{3}, w’_{1}, w’_{2} and w’_{3}. We predict w_{1}^{(0)}, w_{2}^{(0)} and w_{3}^{(0)} by using the interpolating polynomial Q_{5,n}(t) given by (A4.12) in Appendix 4 of Khiyal^{[6]}. That is, we take and . However, this predictor cannot be used on the first two steps. Instead, on the first step, we propose to use
On the second step, we take
These can be obtained from equation (A4.9) in Appendix 4 of Khiyal^{[6]}.
Having predicted w_{1}^{(0)}, w_{2}^{(0)} and
w_{3}^{(0)}, we may use these values in the equations obtained
from (69), (71) and (73). To find w’_{1}, we eliminate w’_{2}
and w’_{3} from the set of three equations (69),
(71) and (73). Similarly we can find w’_{2}
or w’_{3} by eliminating the other two values from the same set
of equations. After a little manipulation, we obtain
To form the error estimate we use the formula given by (9). This requires y_{n+1}^{(0)}
to be a fifth order approximation to y(t_{n+1}). For this purpose, we
use the interpolating polynomial of degree five, Q_{5,n}(t) say, which
passes through the points . This polynomial may be evaluated at t = t_{n+1} to give
(This can be derived from equation (A4.12) of Appendix 4 of Khiyal^{[6]}.) The use of (86) implies that we must carry out three steps before attempting to form the error estimate.
Case s = 4: Four-stage eighth order method: The four-stage eighth order implicit Runge-Kutta method is given by (8). For this method, equations (55) and (56) become
where:
Now we form f(w_{R}), R = 1,2,3,4 from equations (38).
By substituting the coefficients of method (8) in equations (38)
for R = 1,2,3,4, we obtain
Here the values of η’_{1}, η_{1} are those given
for method (8). From equations (90), (92),
(94) and (96), after a little manipulation,
we obtain
and
where,
.
Now we substitute the values of f(w_{2})+f(w_{3}) and f(w_{4})+f(w_{1})
from (97) and (98) into (88) and using the values of η_{i}, η’_{i}
given in (8), we have
where,
and
Thus after convergence of the iteration (after Q iterations, say), we have
where,
To start the iteration, we must predict the values of w_{1}, w_{2},
w_{3}, w_{4}, w’_{1}, w’_{2}, w’_{3}
and w’_{4}. We predict w_{1}^{(0)}, w_{2}^{(0)},
w_{3}^{(0)}, w_{4}^{(0)} by using the interpolating
polynomial Q_{7,n}(t) given by equation (A4.13) of Appendix 4 of Khiyal^{[6]}.
That is, we take
However this predictor cannot be used on the first three steps. Instead, on the first step, we propose to use
On the second step, we take
These can be obtained from equation (A4.9) of Appendix 4 of Khiyal^{[6]}.
On the third step, we take
These can be obtained from equation (A4.12) of Appendix 4 of Khiyal^{[6]}.
Having predicted w_{1}^{(0)}, w_{2}^{(0)}, w_{3}^{(0)}, w_{4}^{(0)}, we may use these values in the equations obtained from (89), (91), (93) and (95). After a little manipulation, we obtain the following equations
where,
Thus we first calculate hw’_{4} from (105) and then hw’_{3} from (106). Finally we calculate hw’_{2} and hw’_{1} from (107) and (108), respectively.
At the end of the fourth step, we form an estimate of the error. To do so, we use the formula given by (9). In this case, we require y^{(0)}_{n+1} to be a seventh order approximation to y(t_{n+1}). For this purpose, we use the interpolating polynomial of degree seven, Q_{7,n}(t) say, interpolating to
This polynomial may be evaluated at t = t_{n+1} to give
(This can be derived from equation (A4.13) of Appendix 4 of Khiyal^{[6]}.) The use of (109) implies that we must carry out four steps with the implicit Runge-Kutta method before attempting to form the error estimate.
Numerical results: We are mainly concerned with solving oscillatory stiff initial value problems. We have tried a number of explicit scalar (non-stiff) test problems of the form (1). They give similar results and so we restrict our attention to one oscillatory example.
Example 1: 
y"+sinh(y) = 0, y(0) = 1, y’(0) = 0. 
This is a pure oscillation problem whose solution has maximum amplitude unity
and period approximately six.
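These properties are easy to confirm numerically. The sketch below integrates the first order form with a classical fourth order Runge-Kutta step (used only as a checking device, not one of the methods under study) and estimates the period as four times the first zero crossing of y, which is legitimate because the potential cosh(y) - 1 is even.

```python
import numpy as np

def rhs(x):                       # first order form of y'' + sinh(y) = 0
    return np.array([x[1], -np.sinh(x[0])])

def rk4(x, h):                    # classical RK4, adequate for a check
    k1 = rhs(x); k2 = rhs(x + h/2*k1)
    k3 = rhs(x + h/2*k2); k4 = rhs(x + h*k3)
    return x + h/6*(k1 + 2*k2 + 2*k3 + k4)

h, x, t = 0.001, np.array([1.0, 0.0]), 0.0
t_quarter = None
while t < 10.0:
    x_new = rk4(x, h)
    if t_quarter is None and x[0] > 0.0 and x_new[0] <= 0.0:
        t_quarter = t + h*x[0]/(x[0] - x_new[0])  # linear interpolation
    x, t = x_new, t + h
print(4*t_quarter)   # the period, roughly six
```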
To verify that our techniques work for systems, we use as a test problem a moderately stiff system of two equations.
Example 2: 
y"_{1}+sinh(y_{1}+y_{2}) = 0, y_{1}(0)
= 1, y’_{1}(0) = 0
y"_{2}+10^{4}y_{2} = 0, y_{2}(0) = 10^{-8}, y’_{2}(0) = 0
For this example, we have deliberately introduced coupling from the stiff (linear) equation. Our intention here is that the stiff oscillatory component y_{2} should be present only at the noise level, as otherwise we would expect to have to choose the stepsize to resolve y_{2}. We have also tried other test problems. For example, we have tried the initial conditions y_{2}(0) = 10^{-8}, y_{2}(0) = 10^{-6}, y_{2}(0) = 10^{-4} and y_{2}(0) = 10^{-2}. However, in such problems, y_{2} is not insignificant and, in general, a small stepsize has to be used to resolve accurately the rapidly oscillating solution component. These problems are then not really stiff, since the small stepsize is required for accuracy, rather than stability, considerations. Thus we restrict ourselves to solving example 2.
Both examples 1 and 2 have been solved for t∈[0,6]. The stepsize is chosen initially to be h = 1, as this is more or less on scale for these problems. However, the stepsize is increased or reduced automatically by the code, depending on the local error test and on convergence or divergence of the iteration.
The error at the end point is obtained by comparing the computed solution with the solution obtained by using a fixed step code with a small stepsize for example 1 and for the first equation of example 2. For the second equation of example 2, we have used the exact solution. We denote the error at the end point by MAXERR; for the scalar equation it is the error at t = 6 and for the system it is the norm of the error at t = 6. It should be noted that the global error at the end points may give only a rough indication of the accuracy of each solution since, for oscillatory problems, a large global error may be caused by a small phase error while the approximate solution remains a good one. However, corresponding sets of results have also been obtained for the ranges [0,1], [0,2], ..., [0,5]. For given values of TOL and PMAX, the errors in the computed solution at each of the end points are of similar size to those obtained with t∈[0,6], and the relative performance of the methods is similar.
We present the results for the case where the maximum number of iterations permitted for the methods, PMAX, is 5.
We present some statistics on the performance of the methods. The notation used in the Tables throughout this section is as follows:
• Number of evaluations of the differential equation right hand side f, FCN;
• Number of evaluations of the Jacobian, JAC;
• Number of iterations overall, NIT;
• Number of iterations on steps where the iteration converges, NSIT;
• Number of steps overall, NST;
• Number of successful steps to complete the integration, NSST;
• Number of LU factorizations of the iteration matrix, FACT.
The mnemonics used for the methods in the tables are as follows:
• Fourth order implicit Runge-Kutta method given by (6) with the perfect square iteration scheme, O4PS;
• Fourth order implicit Runge-Kutta method given by (6) with the Cooper and Butcher iteration scheme, O4CB;
• Sixth order implicit Runge-Kutta method given by (7) with the Cooper and Butcher iteration scheme, O6CB;
• Eighth order implicit Runge-Kutta method given by (8) with the Cooper and Butcher iteration scheme, O8CB.
For the fourth order method we can use two types of interpolating polynomial, namely the cubic polynomial P_{3,n}(t) and the quartic polynomial P_{4,n}(t), to find the back values on a change of stepsize. The results with both interpolants are almost the same. Thus we present the results using the cubic interpolating polynomial.
For the sixth order method we can use two types of interpolating polynomial, namely the fifth degree interpolating polynomial P_{5,n}(t) and the sixth degree interpolating polynomial P_{6,n}(t), to find the back values on a change of stepsize. Again, the results with both interpolants are almost the same. Thus we present the results using the fifth degree interpolating polynomial.
For the eighth order method we can use two types of interpolating polynomial, namely the seventh degree interpolating polynomial P_{7,n}(t) and the eighth degree interpolating polynomial P_{8,n}(t), to find the back values on a change of stepsize. Again, the results with both interpolants are almost the same. Thus we present the results using the seventh degree interpolating polynomial.
Results for example 1 are given in Tables 1, 2, 3 and 4 with Tol = 10^{-4}, 10^{-6}, 10^{-8} and 10^{-10}, respectively. Results for example 2 are given in Tables 5, 6, 7 and 8 with Tol = 10^{-4}, 10^{-6}, 10^{-8} and 10^{-10}, respectively.
We have also tested the example given by Kramarz^{[7]}.
Example 3: 
y" = 2498y+4998z, y(0) = 2, y’(0) = 0,
z" = -2499y-4999z, z(0) = -1, z’(0) = 0.
Table 1: Comparison of methods for example 1 with Tol = 10^{-4}
Table 2: Comparison of methods for example 1 with Tol = 10^{-6}
Table 3: Comparison of methods for example 1 with Tol = 10^{-8}
Table 4: Comparison of methods for example 1 with Tol = 10^{-10}
Table 5: Comparison of methods for example 2 with Tol = 10^{-4}
Table 6: Comparison of methods for example 2 with Tol = 10^{-6}
Table 7: Comparison of methods for example 2 with Tol = 10^{-8}
Table 8: Comparison of methods for example 2 with Tol = 10^{-10}
Table 9: Comparison of methods for example 3 with Tol = 10^{-4}
Table 10: Comparison of methods for example 3 with Tol = 10^{-6}
Table 11: Comparison of methods for example 3 with Tol = 10^{-8}
Table 12: Comparison of methods for example 3 with Tol = 10^{-10}
The exact solution of this problem is y(x) = 2cosx, z(x) = -cosx. We have found the solution for x∈[0, 4π], with initial stepsize h = π/4. Results for example 3 are given in Tables 9, 10, 11 and 12 with Tol = 10^{-4}, 10^{-6}, 10^{-8} and 10^{-10}, respectively.
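Minus signs are easily lost in reproduction, so it is worth recording the sign pattern under which the stated solution actually satisfies the system. Assuming Kramarz's original form y" = 2498y + 4998z, z" = -2499y - 4999z, the pair y = 2cos x, z = -cos x checks out:

```python
import numpy as np

# Residual check of the assumed sign pattern (Kramarz's problem):
#   y'' =  2498*y + 4998*z,   z'' = -2499*y - 4999*z,
# with y = 2*cos(x), z = -cos(x), hence y'' = -2*cos(x), z'' = cos(x).
for x in np.linspace(0.0, 4*np.pi, 9):
    y, z = 2*np.cos(x), -np.cos(x)
    ypp, zpp = -2*np.cos(x), np.cos(x)
    assert abs(ypp - (2498*y + 4998*z)) < 1e-8
    assert abs(zpp - (-2499*y - 4999*z)) < 1e-8
print("exact solution verified")
```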