
Journal of Applied Sciences

Year: 2005 | Volume: 5 | Issue: 3 | Page No.: 411-427
DOI: 10.3923/jas.2005.411.427
Implementation of s-stage Implicit Runge-Kutta Method of Order 2s for Second Order Initial Value Problems
M. Sikander Hayat Khiyal and Khalid Rashid

Abstract: This study is concerned with the approximate solution of the special second order initial value problem. For this purpose we use the s-stage (s = 2, 3, 4) implicit Runge-Kutta methods of order 2s. The fourth order method is implemented with two iterative techniques, the perfect square iteration scheme and the Cooper and Butcher iteration scheme, while the sixth and eighth order methods are implemented using the Cooper and Butcher iteration scheme. We have arranged the schemes in such a way that only two function evaluations are required per iteration for the fourth order method, three for the sixth order method and four for the eighth order method. Finally we present the numerical results.



Keywords: iterative technique, Implicit Runge-Kutta method and initial value problems

INTRODUCTION

We are concerned with the approximate numerical integration of the special second order initial value problem

y" = f(t, y),   y(a) = η,   y’(a) = η’                (1)

where y, f, η and η’ are vectors of dimension m; f does not depend on y’, which is what makes the second order problem "special".

A second order system can be solved by converting it to a first order system and then applying a numerical method for first order systems. If we let x1(t) = y(t), x2(t) = y’(t), x1(a) = η, x2(a) = η’ and introduce the notation x = [x1, x2]T, g = [x2, f]T, μ = [η, η’]T, we have the first order system

x’ = g(t, x),   x(a) = μ                (2)
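For illustration, this reduction can be written out directly in code. The following is a minimal sketch (the function and variable names are ours, not part of the paper):

```python
import numpy as np

def to_first_order(f):
    """Turn y'' = f(t, y) into x' = g(t, x) with x = [x1, x2] = [y, y']."""
    def g(t, x):
        m = x.size // 2
        x1, x2 = x[:m], x[m:]                  # x1 = y, x2 = y'
        return np.concatenate([x2, f(t, x1)])  # g = [x2, f]^T
    return g

# Example 1 of the results section: y'' + sinh(y) = 0, y(0) = 1, y'(0) = 0.
g = to_first_order(lambda t, y: -np.sinh(y))
mu = np.array([1.0, 0.0])                      # mu = [eta, eta']^T
```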

The s-stage Runge-Kutta method is expressed in the form

(3)

where the dr are constants and the terms Xr on the right-hand side of (3) are given by

(4)

(5)

We consider the s-stage (s = 2, 3, 4) implicit Runge-Kutta methods of order 2s proposed by Butcher[1]. These methods are A-stable[2] and are given in tableau form by

1/2 − √3/6 | 1/4          1/4 − √3/6
1/2 + √3/6 | 1/4 + √3/6   1/4
           | 1/2          1/2                  (6)

1/2 − √15/10 | 5/36            2/9 − √15/15   5/36 − √15/30
1/2          | 5/36 + √15/24   2/9            5/36 − √15/24
1/2 + √15/10 | 5/36 + √15/30   2/9 + √15/15   5/36
             | 5/18            4/9            5/18            (7)

(8)

Where:


Implicit Runge-Kutta methods are generally expensive to implement because a system of nonlinear equations must be solved at each step whenever F(X) is nonlinear. For example, if we have a system of n ODEs and the modified Newton method is used to solve the resulting nonlinear equations, this involves the solution of a system of sn linear equations at each step of the iteration. The local error estimate may be obtained from the equation,

Len+1 = yn+1(Q) − yn+1(0)                (9)

where yn+1(0) is the predicted value of yn+1 and yn+1(Q) is the corrected value obtained on convergence of the iteration scheme for the implicit Runge-Kutta method. If this local error is acceptably small, we may proceed. If the error test is not satisfied, we reduce the stepsize by applying the stepsize strategy and start again at the beginning. This approach for fourth order methods was also proposed by Thomas[3].
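In code, the estimate (9) and the accompanying test reduce to a few lines; a minimal sketch (the max-norm and the names are our assumptions):

```python
import numpy as np

def local_error_test(y_pred, y_corr, tol):
    """Estimate (9): the converged corrector value y_{n+1}^(Q) minus the
    predicted value y_{n+1}^(0). Returns the estimate and whether the
    step passes the local error test."""
    le = np.linalg.norm(y_corr - y_pred, ord=np.inf)  # norm choice assumed
    return le, le <= tol
```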

To apply the implicit Runge-Kutta method to the second order system (1), first we must convert the second order system to an equivalent first order system. However, to obtain an efficient algorithm, we exploit the structure of the resulting first order system.

On applying the implicit Runge-Kutta method to the first order system, there results a system of nonlinear algebraic equations. For the fourth order two-stage method, the resulting iteration matrix is of order 2n. We consider an iteration scheme in which this iteration matrix is approximated by a perfect square, so that at most one matrix of order n must be factorized at each step. Unfortunately, it has so far not proved possible to adopt a similar approach for the s-stage implicit Runge-Kutta methods of order 2s with s = 3 and 4. Moreover, the perfect square approach for the fourth order method can prove very unreliable[4]. Instead, we use another iteration scheme, introduced by Cooper and Butcher[5]. We consider in detail the application of this scheme to the s-stage implicit Runge-Kutta methods of order 2s, s = 2, 3, 4.

PERFECT SQUARE ITERATION SCHEME

Fourth order A-stable implicit Runge-Kutta method: The fourth order two-stage implicit Runge-Kutta method for first order systems is given by (6). We discussed how, for computational convenience, we can write the non-autonomous system (2) in the form of an autonomous system x’ = F(x). Then the fourth order method (6) has the form:


(10)

(11)

Eliminating F(X2) from (10) and (11) gives:

(12)

Let,

(13)

Applying the modified Newton iteration gives

(14)

where, from equations (12) and (13),

and J is an approximation for the Jacobian matrix of F with respect to x. Thus the modified Newton iteration yields

and

Matrix products are expensive, especially for large systems; any sparsity in J is lost in J2 and any ill-conditioning in J is magnified in its powers. We therefore wish to avoid the calculation of J2. To achieve this we may write the iteration matrix as a product of two linear (complex) factors, giving

where rc = 1/4 + i√3/12 and r̄c is the complex conjugate of rc.

Gladwell and Thomas[4] show how to solve this system efficiently using complex arithmetic. We do not consider this approach further, but instead we consider an alternative approach, also discussed by Gladwell and Thomas[4], in which the iteration matrix is approximated by a perfect square:

(I − rRhJ)2                (15)

where, rR is a real number. (The choice of rR is discussed later in this section.) This can be solved as

(16)

(17)
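The right-hand sides in (16) and (17) are specific to the method, but the linear-algebra pattern they describe, one factorization of I − rRhJ followed by two back-substitutions, can be sketched as follows (SciPy routines; the right-hand side e is assumed to have been formed already):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def perfect_square_solve(J, h, rR, e):
    """Solve (I - rR*h*J)^2 * delta = e as the pair (16)-(17):
    two solves sharing one LU factorization of a matrix of order n."""
    n = J.shape[0]
    lu_piv = lu_factor(np.eye(n) - rR * h * J)  # one factorization per step
    z = lu_solve(lu_piv, e)                     # (I - rR h J) z = e       (16)
    return lu_solve(lu_piv, z)                  # (I - rR h J) delta = z   (17)
```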

Now, for the second order system (1), the approximate Jacobian of F has a 2 × 2 block structure involving J, an approximation to the Jacobian of f with respect to y. Partitioning the unknowns accordingly, (16) can be written in the form

(18)

while (17) gives

(19)

Premultiplying (18) by gives

so that

(20)

and

although, as we shall see, it is not necessary to compute Z2 explicitly.

Similarly, premultiplying (19) by gives

and so

From the first of equations (18), we have

Thus, after computing Z1 from (20), we must solve

Finally from (19), we have

Thus, one step of the iteration using the perfect square approximation is

We have to solve two systems of n linear equations at each step of the iteration. However, the matrix is the same for both systems, so we do not need to perform more than one matrix factorization per step.

Now we need explicit forms for e1 and e2. Note that

(21)

where,

On defining, and we find that

and

(22)

Now from (21), after a little manipulation, we have

(23)

(24)

Finally, consider

If Hence

(25)

and

(26)

To avoid the function evaluations f(w1) and f(v1), we proceed as follows. From equation (12), we have,

and

Thus

(27)

Now from equation (10), we have

Hence

(28)

and

(29)

By substituting the value of v2 from equation (27) into (28), after a little manipulation, we have

(30)

Also, by substituting the value of v2 into equation (29), we have

(31)

Substituting the value of v2 from equation (27) into equation (25) gives, after a little manipulation,

(32)

Finally, on substituting for f(w1) and f(v1) from equations (30) and (31) into (26), we obtain

(33)

Thus, on convergence of the iteration (after Q iterations, say), we set

As we have seen, we approximate the iteration matrix by a perfect square of the form (I − rRhJ)2, where rR is a real number. The choice rR = √3/6 gives a discrepancy only in the linear term of the approximation, while the choice rR = 1/4 gives a discrepancy only in the quadratic term. Gladwell and Thomas[4] show that the latter choice gives desirable convergence properties for small h, as may be expected. Another possibility, considered by Thomas[3], is to take rR = [−β1(u+ + u−)]1/4, in which case the same iteration matrix can be used on transferring to the fourth order direct hybrid method, provided it is not necessary to change the stepsize. However, Thomas found that in practice this choice requires a smaller initial stepsize and hence more function evaluations overall, so we do not consider it further. Indeed, in producing our test results, we use only the choice rR = 1/4.
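These two choices can be checked numerically: for the scalar test equation the true iteration matrix contributes the factor det(I − hλA) = 1 − tr(A)hλ + det(A)h²λ², while the perfect square contributes 1 − 2rRhλ + rR²h²λ². A small check, using the standard Gauss values for the tableau (6):

```python
import numpy as np

# Coefficient matrix A of the two-stage method (6) (standard Gauss values).
A = np.array([[1/4,                1/4 - np.sqrt(3)/6],
              [1/4 + np.sqrt(3)/6, 1/4]])
tr, det = np.trace(A), np.linalg.det(A)  # 1/2 and 1/12

print(abs(2 * (1/4) - tr))               # 0: rR = 1/4 matches the linear term
print(abs((np.sqrt(3)/6)**2 - det))      # ~0: rR = sqrt(3)/6 matches the quadratic term
```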

To obtain a local error estimate we use the approach in which the estimate Len+1 of the local error in the predicted value yn+1(0) is obtained from equation (9).

Since the Runge-Kutta method is fourth order accurate, we require y(0)n+1 to be a third order approximation to y(tn+1) so that the estimate will be asymptotically correct.

The predictor we propose to use for the algorithm is based on Hermite interpolation. The Hermite interpolant of degree three, Q3,n(t) say, may be evaluated at t = tn+1 to give

(34)

(This formula is derived in Appendix 4 of Khiyal[6].) The use of (34) implies that, initially, we must carry out two steps before attempting to form the error estimate. To start the iteration in the algorithm, we must predict values of w1 and hw2. We use the predictor proposed by Gladwell and Thomas[4], based on the Hermite interpolant.
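The computation behind such a predictor is plain cubic Hermite extrapolation: fit values and derivatives at tn−1 and tn, then evaluate one step ahead. For a constant stepsize h the weights below follow from evaluating the Hermite basis at s = 2 (our derivation; the exact coefficients of (34) did not survive reproduction here):

```python
def hermite_predict(y_nm1, yp_nm1, y_n, yp_n, h):
    """Evaluate at t_{n+1} the cubic Hermite interpolant matching y and y'
    at t_{n-1} and t_n (constant stepsize h assumed)."""
    return 5.0 * y_nm1 + 2.0 * h * yp_nm1 - 4.0 * y_n + 4.0 * h * yp_n
```

As a sanity check, for y(t) = t3 with tn−1 = 0, tn = 1 and h = 1 the formula returns 5·0 + 2·0 − 4·1 + 4·3 = 8 = 23, exact as it should be for a cubic.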

We take the values obtained from equations (A4.9) and (A4.10) of Appendix 4 of Khiyal[6], respectively. They cannot be used on the first step and so, on this step, we use:

After carrying out two steps with the implicit Runge-Kutta method, we form an estimate of the local error. If the local error test is not satisfied, we reduce the stepsize and start again at the beginning. If the error test is satisfied, then the estimated stepsize obtained from

hest = h(TOL/||Len+1||)^(1/(R+1))                (35)

can be used for the next step. In equation (35), R is the order of the predictor used. Suppose that h is the current stepsize. If the estimate is smaller than ρ2h, we continue with the current stepsize h; if it is at least ρ2h, we increase the stepsize, but by no more than a factor ρ3. However, if the error test fails, we reject the step and decrease the stepsize, but not below ρ1h. This strategy is the same as that discussed by Khiyal[6]. Note that the factors ρ1, ρ2 and ρ3 are taken to be 0.2, 2.0 and 5.0, respectively.
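A sketch of this strategy in code follows. Some inequalities in the paragraph above did not survive reproduction, so the thresholds here are our reading of a standard controller, with (35) assumed to take the usual form h(TOL/||Len+1||)^(1/(R+1)):

```python
def next_stepsize(h, err, tol, R, rho1=0.2, rho2=2.0, rho3=5.0):
    """Stepsize strategy around (35); err is the local error estimate and
    R the order of the predictor. Assumed reading of the text."""
    h_est = h * (tol / err) ** (1.0 / (R + 1))  # assumed form of (35)
    if err > tol:                               # error test failed:
        return max(h_est, rho1 * h)             # shrink, but not below rho1*h
    if h_est >= rho2 * h:                       # step clearly too cautious:
        return min(h_est, rho3 * h)             # grow, but at most to rho3*h
    return h                                    # otherwise keep the stepsize
```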

COOPER AND BUTCHER ITERATION SCHEME

s-stage 2s-order implicit Runge-Kutta methods: We have obtained a perfect square iteration scheme for the two-stage fourth order A-stable Runge-Kutta method applied to second order systems. However, we were not able to find a corresponding perfect cube iteration scheme for the three-stage sixth order implicit Runge-Kutta method. Moreover, the perfect square iteration scheme for the fourth order method is not always reliable. For these reasons, in this section, we consider an iteration scheme proposed by Cooper and Butcher[5] which sacrifices the superlinear convergence of the modified Newton method for reduced linear algebra costs. This iteration scheme requires the solution of s systems of n linear equations at each step of the iteration and the coefficient matrix is the same for each system.

We have shown that for computational convenience the nonautonomous system (2) can be written in the form of an autonomous system and then we write the Runge-Kutta method in autonomous form. Suppose the first order autonomous initial value problem is

x’ = F(x),   x(a) = μ                (36)

Then the s-stage Runge-Kutta method is given by

xn+1 = xn + h[d1F(X1) + d2F(X2) + … + dsF(Xs)]                (37)

where X1, X2, ..., Xs satisfy the sn equations

Xr = xn + h[ar1F(X1) + ar2F(X2) + … + arsF(Xs)],   r = 1, 2, …, s                (38)

We may simplify our notation by using direct sums and direct products. Let X = [X1, X2, ..., Xs]T and F(X) = [F(X1), F(X2), ..., F(Xs)]T be column vectors and let e = [1, 1, ..., 1]T. Then equation (38) may be represented by

X = e ⊗ xn + h(A ⊗ I)F(X)                (39)

where (A ⊗ I) denotes the direct product of A with the n × n identity matrix; in general, A ⊗ I is the sn × sn block matrix whose (i, j) block is aijI, where A = (aij). Equation (39) may be solved by the modified Newton method but, because the Jacobian of F is expensive to evaluate, it is computed only occasionally. Suppose that a fixed approximation to this Jacobian is used. Then the modified Newton iteration scheme is

where:

p = 1, 2, 3, … Note that in practice the Jacobian is kept constant for several time steps. At each iteration, we need to solve a system of sn linear equations.
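For comparison with the scheme introduced next, the modified Newton iteration matrix for (39) can be assembled directly; its order sn is what makes it expensive (numpy's kron stands in for the direct product):

```python
import numpy as np

def newton_matrix(A, J, h):
    """I - h (A (x) J): the sn x sn matrix that the modified Newton
    method must factorize when solving the stage equations (39)."""
    s, n = A.shape[0], J.shape[0]
    return np.eye(s * n) - h * np.kron(A, J)
```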

Let B and S be real invertible matrices, let L be a strictly lower triangular matrix and let r be a real constant. Cooper and Butcher[5] propose the following iteration scheme for solving (39)

(40)

where,

(41)


(42)

and an approximation for the Jacobian matrix of F with respect to x is used. Note that the iteration matrix is block diagonal and that (L ⊗ I) is strictly lower triangular. This means that each step of the iteration requires the solution of s sets of n linear equations and, when the jth set is being solved, the solutions of the previous j − 1 sets may be used. For large s, the linear algebra costs of the Cooper and Butcher iteration scheme are much lower per iteration than those of the modified Newton method, for which a system of sn linear equations has to be solved at each step of the iteration. The three matrices L, B and S and the parameter r may be chosen in many ways; w is a relaxation parameter. Cooper and Butcher[5] choose w to give the best rate of convergence for the iteration scheme when the method is applied to the scalar linear test problem x’ = cx, where c is real and negative.
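The right-hand sides of (40)-(42) are method specific, but the cost structure just described can be sketched: one sweep performs s solves of order n, all sharing one LU factorization of I − rhJ, with stage j free to use the corrections of stages i < j through the strictly lower triangular (L ⊗ I). The coupling term below is an assumed form, for illustration only:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def cooper_butcher_sweep(J, h, r, L, resid):
    """One sweep with the structure of (40): s sequential n x n solves
    sharing one LU factorization. `resid` holds the s stage residuals
    (from (42)), assumed already computed."""
    s, n = len(resid), J.shape[0]
    lu_piv = lu_factor(np.eye(n) - r * h * J)    # one factorization
    delta = []
    for j in range(s):
        rhs = resid[j].copy()
        for i in range(j):                       # strictly lower triangular:
            rhs += h * L[j, i] * (J @ delta[i])  # earlier stages only (assumed form)
        delta.append(lu_solve(lu_piv, rhs))
    return delta
```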

For each s-stage method of order 2s there is a matrix S such that:

a real block diagonal matrix. Here M is an integer defined by

The submatrices are chosen to have the form

except that, when s is odd, the final submatrix is 1 × 1. The matrices L and B also have the same block diagonal form. When s is odd, LM = [0] and BM = [β].

In the introduction, the s-stage implicit Runge-Kutta methods of order 2s (s = 2, 3, 4) were given, in tableau form, by (6), (7) and (8), respectively. We take the matrices L and B to be the same as those given by Cooper and Butcher[5]. That is, we take


When s = 2 the method of order 2s has the matrix of coefficients

We may take S = I, as suggested by Cooper and Butcher[5]. Following Cooper and Butcher, we also take their recommended values of r and w. The implementation of these methods is discussed later.

When s = 3 the method of order 2s has the matrix of coefficients:

The matrix S is chosen so that

The elements of the matrix may be obtained by noting that

and

Then, by numerical evaluation, these give

The matrix S may be obtained by noting that its columns are a linearly independent set of eigenvectors of the matrix in question; these eigenvectors are obtained by numerical calculation. Since s = 3 is odd, we have L2 = [0] and B2 = [β], where the values are those given by Cooper and Butcher[5]. Again we take their suggested parameters when we consider the implementation of this method later in this section.
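That numerical calculation is a small eigenvalue computation. A sketch for s = 3, using the standard tableau values of the three-stage Gauss method for (7); taking the eigenvectors of A−1 is our assumption, since that detail did not survive reproduction:

```python
import numpy as np

s15 = np.sqrt(15.0)
# Coefficient matrix of the three-stage method (7) (standard Gauss values).
A = np.array([[5/36,          2/9 - s15/15, 5/36 - s15/30],
              [5/36 + s15/24, 2/9,          5/36 - s15/24],
              [5/36 + s15/30, 2/9 + s15/15, 5/36]])

w, V = np.linalg.eig(np.linalg.inv(A))  # one real eigenvalue, one complex pair
k = int(np.argmin(np.abs(w.imag)))      # index of the real eigenvalue
i = [j for j in range(3) if j != k][0]  # one member of the complex pair
S = np.column_stack([V[:, i].real, V[:, i].imag, V[:, k].real])
# S is real and invertible; S^{-1} A^{-1} S is then real block diagonal:
# a 2x2 block for the complex pair plus a 1x1 block, since s is odd.
print(np.linalg.inv(S) @ np.linalg.inv(A) @ S)
```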

When s = 4 the method of order 2s has the matrix of coefficients:

where:

The matrix S is chosen so that

The elements of the matrix may be obtained by

noting that where:

and

Thus, by numerical evaluation, we obtain

Then the matrix S is obtained by noting that its first two columns are a linearly independent set of eigenvectors of one diagonal block and its third and fourth columns a linearly independent set of eigenvectors of the other. Again these eigenvectors are obtained by numerical calculation. As suggested by Cooper and Butcher, we take their recommended parameter values to obtain a better rate of convergence.

Implementation aspects: Now we consider the implementation of the s-stage methods with this iteration scheme for second order systems of the form (1). (When s = 2, the implementation of the Cooper and Butcher iteration scheme was discussed by Gladwell and Thomas[4], who gave an algorithm.) Note that in (40)

for the first order system (36). Then from (40), we have

(43)

Let where and

Then equation (43) becomes

(44)

where li,i−1 = 0 when i is odd. For a second order system, the approximate Jacobian has a block structure in which J is an approximation for the Jacobian of f with respect to y, and (44) can be written in the form

(45)

where thus,

(46)

Now premultiplying (46) by we obtain

From the first of these equations, we have

(47)

Now from the first of (46), we have

and so we may set

(48)

Next we calculate the right-hand side. First consider the residual given by (42)

Written in component form, this becomes

or, equivalently,

(49)

Now making the substitutions

R = 1, 2, 3, ..., s. Then from (49), we get

(50)

Substituting the values from (50)

into (45), we have

where li,i−1 = 0 when i is odd. These equations may be rewritten as

(51)

and

(52)

Finally we calculate the remaining quantities. Consider equation (41)

Suppose S has the coefficients {Sij}, i, j = 1,2 , ..., s. Then we have

Thus,

By substituting R = 1,2,…,s, we obtain

(53)

(54)

Hence one step of the iteration is

where the right-hand sides are given by (51) and (52), respectively.

On convergence of the iteration, yn+1 and y’n+1 are obtained by substituting

R = 1, 2, ..., s into (46), giving

(55)

(56)

Note that, to calculate y’n+1, we need to evaluate the functions f(wR), R = 1, 2, ..., s. We would like to avoid having to evaluate these functions. This and other computational aspects are discussed separately below for each of the cases s = 2, 3, 4.

Case s = 2. Two-stage fourth order method: The two-stage fourth order implicit Runge-Kutta method is given by (6). For this method equations (55) and (56) become

(57)

(58)

Now we form f(w1) and f(w2) from (38). From equation (38) with R = 1, we have

or, equivalently,

(59)

(60)

Similarly from equation (38) with R = 2, we obtain

(61)

(62)

By subtracting (60) from (62), we obtain

and substituting into (58) gives

Thus after convergence of the iteration (after Q iterations, say), we form

To start the iteration, we must predict the values of w1, w2, w’1 and w’2. We predict w1(0) and w2(0) by using the interpolating polynomial Q3,n(t) of Khiyal[6]. This predictor cannot be used on the first step and so, on this step, we propose to use

(63)

(64)

Having predicted w1(0) and w2(0), we may use these values in (59) and (61) to predict w’1(0) and w’2(0). By eliminating w’1 from (59) and (61) we get w’2, while by eliminating w’2 we obtain w’1. This gives

(65)

(66)

In our codes, we use the same convergence strategy and the same action on divergence of the iteration as discussed for the fourth order implicit Runge-Kutta method with the perfect square iteration scheme. To form the error estimate we use the formula given by (9), while the predictor is given by (34). Note that the cost of the Cooper and Butcher iteration scheme is the same as that of the perfect square iteration scheme (assuming both schemes require the same number of iterations), because we have to solve two systems of n linear equations at each step of the iteration for both schemes. Moreover, the matrix is the same for both systems in both schemes, so we do not need to perform more than one matrix factorization per step.

Case s = 3. Three-stage sixth order method: The three-stage sixth order implicit Runge-Kutta method is given by (7). For this method equations (55) and (56) become

(67)

(68)

Now we form f(wR), R = 1,2,3 from equation (38). By substituting the coefficients of method (7) in equation (38) for R = 1,2,3 , we obtain

(69)

(70)

(71)

(72)

(73)

(74)

To find f(w1) we eliminate f(w2) and f(w3) from the set of three equations (70), (72) and (74). Similarly we can find f(w2) or f(w3) by eliminating the other two values from the same set of equations. After a little manipulation, we obtain

(75)

(76)

(77)

Finally we substitute the values of hf(w1), hf(w2) and hf(w3) from equations (75), (76) and (77) into (68) and simplify to give

Thus after convergence of the iteration (after Q iterations, say), we form

(78)

(79)

To start the iteration, we must predict the values of w1, w2, w3, w’1, w’2 and w’3. We predict w1(0), w2(0) and w3(0) by using the interpolating polynomial Q5,n(t) given by (A4.12) in Appendix 4 of Khiyal[6]. However, this predictor cannot be used on the first two steps. Instead, on the first step, we propose to use

(80)

(81)

(82)

On the second step, we take the values obtained from equation (A4.9) in Appendix 4 of Khiyal[6].

Having predicted w1(0), w2(0) and w3(0), we may use these values in the equations obtained from (69), (71) and (73). To find w’1, we eliminate w’2 and w’3 from the set of three equations (69), (71) and (73). Similarly we can find w’2 or w’3 by eliminating the other two values from the same set of equations. After a little manipulation, we obtain

(83)

(84)

(85)

To form the error estimate we use the formula given by (9). This requires yn+1(0) to be a fifth order approximation to y(tn+1). For this purpose, we use the interpolating polynomial of degree five, Q5,n(t) say, which passes through the appropriate back values. This polynomial may be evaluated at t = tn+1 to give

(86)

(This can be derived from equation (A4.12) of Appendix 4 of Khiyal[6].) The use of (86) implies that we must carry out three steps before attempting to form the error estimate.

Case s = 4. Four-stage eighth order method: The four-stage eighth order implicit Runge-Kutta method is given by (8). For this method, equations (55) and (56) become

(87)

(88)

Where:

Now we form f(wR), R = 1,2,3,4 from equations (38). By substituting the coefficients of method (8) in equations (38) for R = 1,2,3,4, we obtain

(89)

(90)

(91)

(92)

(93)

(94)

(95)

(96)

Here the values of η’1, η1 are those given for method (8). From equations (90), (92), (94) and (96), after a little manipulation, we obtain

(97)

and

(98)

Now we substitute the values of f(w2) + f(w3) and f(w4) + f(w1) from (97) and (98) into (88) and, using the values of ηi, η’i given in (8), we have

where, and

Thus after convergence of the iteration (after Q iterations, say), we have

(99)

(100)

where,

To start the iteration, we must predict the values of w1, w2, w3, w4, w’1, w’2, w’3 and w’4. We predict w1(0), w2(0), w3(0), w4(0) by using the interpolating polynomial Q7,n(t) given by equation (A4.13) of Appendix 4 of Khiyal[6]. That is, we take

However this predictor cannot be used on the first three steps. Instead, on the first step, we propose to use

(101)

(102)

(103)

(104)

On the second step, we take

These can be obtained from equation (A4.9) of Appendix 4 of Khiyal[6]. On the third step, we take

These can be obtained from equation (A4.12) of Appendix 4 of Khiyal[6].

Having predicted w1(0), w2(0), w3(0), w4(0), we may use these values in the equations obtained from (89), (91), (93) and (95). After a little manipulation, we obtain the following equations

(105)

(106)

(107)

(108)

where,

Thus we first calculate hw’4 from (105) and then hw’3 from (106). Finally we calculate hw’2 and hw’1 from (107) and (108), respectively.

At the end of the fourth step, we form an estimate of the error. To do so, we use the formula given by (9). In this case, we require yn+1(0) to be a seventh order approximation to y(tn+1). For this purpose, we use the interpolating polynomial of degree seven, Q7,n(t) say, interpolating to

This polynomial may be evaluated at t = tn+1 to give

(109)

(This can be derived from equation (A4.13) of Appendix 4 of Khiyal[6].) The use of (109) implies that we must carry out four steps with the implicit Runge-Kutta method before attempting to form the error estimate.

Numerical results: We are mainly concerned with solving oscillatory stiff initial value problems. We have tried a number of explicit scalar (nonstiff) test problems of the form (1). They give similar results and so we restrict our attention to one oscillatory example.

Example 1: y"+sinh(y) = 0, y(0) = 1, y’(0) = 0.

This is a pure oscillation problem whose solution has maximum amplitude unity and period approximately six.
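The reference values used below for the error measurement can be reproduced with any sufficiently accurate integrator; for instance (a stand-in for the fixed step, small stepsize code actually used):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Example 1 in first order form: x1' = x2, x2' = -sinh(x1).
sol = solve_ivp(lambda t, x: [x[1], -np.sinh(x[0])],
                (0.0, 6.0), [1.0, 0.0],
                method="DOP853", rtol=1e-12, atol=1e-12)
print("y(6) =", sol.y[0, -1])
```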

To verify that our techniques work for systems, we use as a test problem a moderately stiff system of two equations.

Example 2: y"1+sinh(y1+y2) = 0, y1(0) = 1, y’1(0) = 0
y"2+104y2 = 0, y2(0) = 10-8, y’2(0) = 0

For this example we have deliberately introduced coupling from the stiff (linear) equation. Our intention here is that the stiff oscillatory component y2 should be present only at the noise level, as otherwise we would expect to choose the stepsize to resolve y2. We have also tried other test problems; for example, we have tried the initial conditions y2(0) = 10^-6, y2(0) = 10^-4 and y2(0) = 10^-2. However, in such problems, y2 is not insignificant and, in general, a small stepsize has to be used to resolve the rapidly oscillating solution component accurately. These problems are then not really stiff, since the small stepsize is required for accuracy, rather than stability, considerations. Thus we restrict ourselves to solving example 2.

Both examples 1 and 2 have been solved for t∈[0,6]. The stepsize is chosen initially to be h = 1, as this is more or less on scale for these problems. However, the stepsize is increased or reduced automatically by the code, depending on the local error test and on the convergence or divergence of the iteration.

The error at the end point is obtained by comparing the computed solution with the solution obtained by using a fixed step code with a small stepsize for example 1 and for the first equation of example 2. For the second equation of example 2 we have used the exact solution. We denote the error at the end point by MAXERR; for the scalar equation it is |Error at t = 6| and for the system it is the norm of the error at t = 6. It should be noted that the global error at the end point may give only a rough indication of the accuracy of each solution since, for oscillatory problems, a large global error may be caused by a small phase error while the approximate solution remains a good one. However, corresponding sets of results have also been obtained for the ranges [0,1], [0,2], ..., [0,5]. For given values of TOL and PMAX, the errors in the computed solution at each of the end points are of similar size to those obtained with t∈[0,6] and the relative performance of the methods is similar.

We present the results for the case where the maximum number of iterations permitted for the methods, PMAX, is 5.

We present some statistics on the performance of the methods. The notation used in the Tables throughout this section is as follows.

Number of evaluations of the differential equation right-hand side f, FCN;
Number of evaluations of the Jacobian, JAC;
Number of iterations overall, NIT;
Number of iterations on steps where the iteration converges, NSIT;
Number of steps overall, NST;
Number of successful steps to complete the integration, NSST;
Number of LU factorizations of the iteration matrix, FACT.

The mnemonics used for the methods in the tables are as follows:

Fourth order Implicit Runge-Kutta method given by (6) with perfect square iteration scheme, O4PS;
Fourth order Implicit Runge-Kutta method given by (6) with Cooper and Butcher iteration scheme, O4CB;
Sixth order Implicit Runge-Kutta method given by (7) with Cooper and Butcher iteration scheme, O6CB;
Eighth order Implicit Runge-Kutta method given by (8) with Cooper and Butcher iteration scheme, O8CB;

For the fourth order method we can use two types of interpolating polynomial, namely the cubic polynomial P3,n(t) and the quartic polynomial P4,n(t), to find the back values on a change of stepsize. The results with both interpolants are almost the same, so we present the results obtained using the cubic interpolating polynomial.

For the sixth order method we can use the fifth degree interpolating polynomial P5,n(t) or the sixth degree interpolating polynomial P6,n(t) to find the back values on a change of stepsize. Again the results with both interpolants are almost the same, so we present the results obtained using the fifth degree interpolating polynomial.

For the eighth order method we can use the seventh degree interpolating polynomial P7,n(t) or the eighth degree interpolating polynomial P8,n(t) to find the back values on a change of stepsize. Again the results with both interpolants are almost the same, so we present the results obtained using the seventh degree interpolating polynomial. Results for example 1 are given in Tables 1, 2, 3 and 4 with Tol = 10^-4, 10^-6, 10^-8 and 10^-10, respectively. Results for example 2 are given in Tables 5, 6, 7 and 8 with Tol = 10^-4, 10^-6, 10^-8 and 10^-10, respectively.

We have also tested the example given by Kramarz[7].

Example 3: y" = 2498y+4998z, y(0) = 2, y’(0) = 0.
z" = -2499y-4999z, z(0) = -1, z’ (0) = 1.

Table 1: Comparison of methods for example 1 with Tol = 10^-4

Table 2: Comparison of methods for example 1 with Tol = 10^-6

Table 3: Comparison of methods for example 1 with Tol = 10^-8

Table 4: Comparison of methods for example 1 with Tol = 10^-10

Table 5: Comparison of methods for example 2 with Tol = 10^-4

Table 6: Comparison of methods for example 2 with Tol = 10^-6

Table 7: Comparison of methods for example 2 with Tol = 10^-8

Table 8: Comparison of methods for example 2 with Tol = 10^-10

Table 9: Comparison of methods for example 3 with Tol = 10^-4

Table 10: Comparison of methods for example 3 with Tol = 10^-6

Table 11: Comparison of methods for example 3 with Tol = 10^-8

Table 12: Comparison of methods for example 3 with Tol = 10^-10

The exact solution of this problem is y(x) = 2cos x, z(x) = -cos x. We have found the solution at x = 4π, with initial stepsize h = π/4. Results for example 3 are given in Tables 9, 10, 11 and 12 with Tol = 10^-4, 10^-6, 10^-8 and 10^-10, respectively.
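The stated exact solution, and in particular the initial values above with z’(0) = 0, can be verified directly, since y" = -2cos x and z" = cos x:

```python
import numpy as np

x = np.linspace(0.0, 4 * np.pi, 101)
y, z = 2 * np.cos(x), -np.cos(x)
print(np.allclose(-2 * np.cos(x), 2498 * y + 4998 * z))  # y'' = 2498y + 4998z
print(np.allclose(np.cos(x), -2499 * y - 4999 * z))      # z'' = -2499y - 4999z
```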

REFERENCES

  • Butcher, J.C., 1964. Implicit Runge-Kutta processes. Math. Comput., 18: 50-64.

  • Dahlquist, G., 1963. A special stability problem for linear multistep methods. BIT Numer. Math., 3: 27-43.

  • Thomas, R.M., 1987. Efficient fourth order P-stable formulae. BIT, 24: 599-614.

  • Gladwell, I. and R.M. Thomas, 1990. Efficiency of methods for second order problems. IMA J. Numer. Anal., 10: 181-207.

  • Cooper, G.J. and J.C. Butcher, 1983. An iteration scheme for implicit Runge-Kutta methods. IMA J. Numer. Anal., 3: 127-140.

  • Khiyal, M.S.H., 1991. Efficient algorithms based on direct hybrid methods for second order initial value problems. Ph.D. Thesis, UMIST, Manchester, UK.

  • Kramarz, L., 1980. Stability of collocation methods for the numerical solution of y" = f(x, y). BIT Numer. Math., 20: 215-222.
