ABSTRACT
This study proposes two classes of derivative-free iterative methods without memory for solving nonlinear equations. The first class uses three function evaluations per full iteration to reach local fourth-order convergence, while the second class carries out one additional function evaluation to achieve seventh-order convergence. The suggested schemes are useful when the derivatives of the function are expensive to compute. To demonstrate their easy implementation and efficiency, they are applied to several numerical examples and compared with existing derivative-free methods in the literature.
DOI: 10.3923/jas.2011.3442.3446
URL: https://scialert.net/abstract/?doi=jas.2011.3442.3446
INTRODUCTION
Many complicated problems in science and engineering involve functions of nonlinear and transcendental nature in an equation of the general form f(x) = 0 (Chowdhury, 2011). Numerical iterative methods such as Newton's iteration:

x_{n+1} = x_n − f(x_n)/f'(x_n), n = 0, 1, 2, …,

also known as the Newton-Raphson method, are often used to obtain an approximate solution of such problems, since it is not always possible to obtain the exact solution. Many methods have been developed to improve on the quadratically convergent Newton's method in order to attain a superior convergence rate. For this reason, variants of Newton's formula have been discussed by Soleymani (2011a, b), whereas Sargolzaei and Soleymani (2011) and Soleymani and Sharifi (2011) suggested high-order multi-step iterative methods to approximate the simple roots of such nonlinear equations. Multi-point iterations carry out several steps per cycle, which makes them harder to construct but efficient in practice. The usual expectation from such an iteration is to attain fast convergence using derivatives of the lowest possible order and as few evaluations as possible, which is particularly beneficial when higher derivatives are difficult to evaluate.
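As a minimal sketch of how such an iteration is typically implemented, the following Python code applies the Newton-Raphson formula; the test function, starting point and tolerance are illustrative assumptions, not taken from this study:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson iteration: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:          # stop once the residual is small enough
            return x
        x = x - fx / fprime(x)      # one step: two evaluations (f and f')
    return x

# Illustrative problem: f(x) = x^3 - 2x - 5, simple root near 2.0945514815
root = newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, x0=2.0)
```

Each step consumes one evaluation of f and one of f', which is exactly the cost structure the derivative-free methods below are designed to avoid.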
Steffensen (1933) approximated f'(x_n) in Newton's iteration by a first-order forward finite difference to obtain its derivative-free form:

x_{n+1} = x_n − f(x_n)^2 / (f(x_n + f(x_n)) − f(x_n)), n = 0, 1, 2, ….

This scheme is a tough competitor of Newton's method, since both methods are of second order and require two evaluations per step; in contrast to Newton's iteration, however, Steffensen's method is derivative-free.
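A minimal Python sketch of Steffensen's scheme follows; the test function and starting point are illustrative assumptions:

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's method: Newton's iteration with f'(x_n) replaced by
    the forward difference (f(x_n + f(x_n)) - f(x_n)) / f(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:
            return x
        denom = f(x + fx) - fx      # forward-difference slope times f(x_n)
        if denom == 0:              # guard against breakdown
            return x
        x = x - fx * fx / denom     # two evaluations per step, no derivative
    return x

# f(x) = x^2 - 2 has the simple root sqrt(2) ≈ 1.41421356
root = steffensen(lambda x: x*x - 2, x0=1.5)
```

Note that each cycle evaluates f twice (at x_n and at x_n + f(x_n)), matching Newton's per-step cost while avoiding the derivative.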
The cost of an algorithm is another important factor in selecting a method for a particular type of problem. The effective cost is directly affected by the total number of evaluations required to achieve a given order of convergence. This measure is called the efficiency index, defined as p^(1/n), where p is the order of convergence and n is the total number of evaluations per computing step. Kung and Traub (1974) conjectured that the maximum convergence order of a multi-point iteration without memory using n evaluations per step is 2^(n−1); methods attaining this bound are called optimal root solvers. The reader may consult Soleymani and Mousavi (2011), Soleymani and Sharifi (2010) and Soleymani (2011d) for further information on this topic.
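These definitions are easy to check numerically. The short sketch below computes p^(1/n) for the orders and evaluation counts mentioned in this study, together with the Kung-Traub bound 2^(n−1):

```python
def efficiency_index(p, n):
    """Efficiency index p**(1/n): p = order, n = evaluations per step."""
    return p ** (1.0 / n)

def kung_traub_optimal_order(n):
    """Conjectured maximal order of an n-evaluation method without memory."""
    return 2 ** (n - 1)

# Newton and Steffensen: order 2 with two evaluations -> 2^(1/2) ≈ 1.414
print(efficiency_index(2, 2))
# Order 4 with three evaluations -> 4^(1/3) ≈ 1.587; optimal since 2^(3-1) = 4
print(efficiency_index(4, 3), kung_traub_optimal_order(3))
# Order 7 with four evaluations -> 7^(1/4) ≈ 1.626; the optimal order would be 8
print(efficiency_index(7, 4), kung_traub_optimal_order(4))
```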
This study meets these expectations (efficiency, order and so on) by converting the given nonlinear problems into well-defined numerical iterative schemes without memory, using a derivative approximation in the last step together with the weight-function approach. This yields efficient fourth- and seventh-order classes of highly convergent methods that not only run faster than conventional methods but also require fewer iteration steps than recently proposed iterative methods without memory. To support this claim, the convergence analysis, alongside some numerical examples including comparisons with other formulas, is furnished in the rest of the study.
CONSTRUCTION OF NOVEL DERIVATIVE-FREE CLASSES
It is easy to check that the following Steffensen-Newton scheme:
y_n = x_n − f(x_n)/f[x_n, w_n],
x_{n+1} = y_n − f(y_n)/f'(y_n) (1)
is of fourth-order convergence and carries out four evaluations of the function per full cycle. Note that w_n = x_n + βf(x_n), β ∈ R−{0}, and that f[a, b] = (f(b) − f(a))/(b − a) denotes the first-order divided difference; this notation will be used throughout the study. In order to avoid the evaluation of the first derivative in the last step, we consider an approximation of it. First, we replace f'(y_n) by the same estimation as done by Steffensen in the first step and, second, we build in a weight function at the end of the second step to avoid a drop in the convergence rate. Therefore, by using divided differences, we take into account:
(2)
where t = f(y)/f(x), written without the index n. Theorem 1 shows how the weight function G(t) should be chosen in order to obtain fourth-order convergence while consuming only three evaluations of the function.
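The divided difference f[x_n, w_n] with w_n = x_n + βf(x_n) acts as a substitute for the derivative: the smaller |βf(x_n)| is, the closer f[x_n, w_n] lies to f'(x_n). A brief numerical sketch of this behavior, with an illustrative function and point:

```python
def divided_difference(f, a, b):
    """First-order divided difference f[a, b] = (f(b) - f(a)) / (b - a)."""
    return (f(b) - f(a)) / (b - a)

f = lambda x: x**3 - x - 2          # illustrative function, f'(x) = 3x^2 - 1
x = 2.0                             # here f(2) = 4 and f'(2) = 11
for beta in (1.0, 0.1, 0.01):
    w = x + beta * f(x)             # w_n = x_n + beta * f(x_n)
    # the slope estimate approaches f'(2) = 11 as beta shrinks
    print(beta, divided_difference(f, x, w))
```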
Theorem 1: Let α ∈ D be a simple zero of a sufficiently differentiable function f: D ⊆ R → R and let c_j = f^(j)(α)/j!, j ≥ 1. If x_0 is sufficiently close to α, then (i): the order of convergence of the two-step class of methods without memory defined in (2) is four when G(0) = 1:
and |G''(0)| < ∞; and (ii): the solution satisfies the error equation:
(3)
Proof: We expand each term of Eq. 2 around the solution α at the nth iterate. Thus, we write:
(4)
and also:
(5)
Accordingly, we attain:
(6)
Now we expand f(y_n) around the simple root by using Eq. 6. We obtain:
(7)
Using Eq. 7 and the second step of Eq. 2, we attain:
(8)
Additionally, we attain:
(9)
which shows that G(0) = 1,
and |G''(0)| < ∞ must be imposed so that the order reaches four. Thus, Eq. 2 satisfies the following error equation:
(10)
This completes the proof and shows that our class of two-step derivative-free methods reaches the optimal convergence order four with three evaluations of the function per full cycle.
Clearly, any method of our uni-parametric class (2) possesses the optimal efficiency index 4^(1/3) ≈ 1.587, which is greater than that of Newton's and Steffensen's methods. Some possible cases of the function G(t) that make (2) optimal are given in Table 1.
Now we aim to build a higher-order class of derivative-free methods without memory, based on Eq. 2. Toward this end, we perform a Newton step as the third step of a three-step cycle whose first two steps are those of Eq. 2. We have:
(11)
where the total number of evaluations is five, including one derivative evaluation at the end of the third step. To construct a high-order class of derivative-free methods, we first approximate f'(z_n) by an interpolation polynomial of degree one, p(x) = a_0 + a_1(x − y_n), through the nodes y_n and z_n, satisfying the interpolation conditions p(y_n) = f(y_n) and p(z_n) = f(z_n). Thus, we attain f'(z_n) ≈ f[y_n, z_n]. Now, by reconsidering the weight-function approach, we have:
(12)
where, again, t = f(y)/f(x). Theorem 2 shows how the weight function H(t) should be chosen in order to obtain seventh-order convergence while consuming only four evaluations of the function.
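The degree-one interpolation step above can be checked directly: fitting p(x) = a_0 + a_1(x − y_n) to the two interpolation conditions forces a_1 to equal the divided difference f[y_n, z_n], which is the slope substituted for f'(z_n). A short sketch with illustrative nodes:

```python
f = lambda x: x**3 - x - 2          # illustrative function for the check
y, z = 1.5, 1.52                    # two illustrative nodes near a root

# Interpolation conditions p(y) = f(y) and p(z) = f(z) determine a0 and a1
a0 = f(y)
a1 = (f(z) - f(y)) / (z - y)        # = f[y, z], the slope of p(x)

p = lambda x: a0 + a1 * (x - y)
print(abs(p(y) - f(y)), abs(p(z) - f(z)))   # both conditions hold exactly
print(a1, 3 * z**2 - 1)             # f[y, z] is close to f'(z) = 3z^2 - 1
```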
Theorem 2: Let α ∈ D be a simple zero of a sufficiently differentiable function f: D ⊆ R → R and let c_j = f^(j)(α)/j!, j ≥ 1. If x_0 is sufficiently close to α, then (i): the order of convergence of the three-step class of methods without memory defined in (12) is seven when G(0) = 1:
and |G''(0)| < ∞, H(0) = 1, H'(0) = 0,
and |H^(3)(0)| < ∞; and (ii): the solution satisfies the error equation:
(13)
Proof: Notations and symbolic calculations similar to those in the proof of Theorem 1 lead us to Eq. 4-10 again. At this stage, the Taylor series expansion of f(z_n) is required. We have:
(14)
Additionally, we obtain:
(15)
which leads us to choose H(0) = 1, H'(0) = 0,
and |H^(3)(0)| < ∞. Now, using the last step of Eq. 12 and Eq. 15, we attain:
(16)
This shows that our class of derivative-free methods (12) attains seventh-order convergence by using four evaluations of the function per full iteration, which ends the proof.
Some particular cases of the weight function H(t) which make Eq. 12 seventh-order are given in Table 2. Clearly, any method of the class (12) reaches 7^(1/4) ≈ 1.626 as its efficiency index, which is greater than that of Eq. 2 and also greater than those of the methods of Peng et al. (2011), Soleimani and Soleymani (2011) and Soleymani (2011c).
Table 1: Some forms of the weight function G(t), β ∈ R−{0}
Table 2: Some forms of the weight function H(t)
Table 3: Test nonlinear functions, their roots and the starting points
Table 4: Comparison of some derivative-free methods
Note that Eq. 12 is not optimal; in order to make it optimal, a more complicated weight function should be constructed. This can be considered in future studies in this field.
Choosing the simplest case of the weight functions in Eq. 2 and 12 results in the following fourth-order method (β = 1):
(17)
and, similarly, in the following seventh-order scheme (β = 1):
(18)
NUMERICAL EXAMPLES
In order to demonstrate the accuracy of a method, it is necessary to compare the numerical results of the presented schemes with those of schemes available in the literature. Khattri and Argyros (2011) presented a sixth-order three-step family (KA6), and Soleymani and Hosseniabadi (2011) gave another sixth-order method (SH6). Therefore, herein we compare Eq. 17 and 18 with KA6 (α = β = η = 0, κ = 1), SH6 and Steffensen's method under fair conditions. The test functions, their simple roots and the starting approximations are listed in Table 3. Clearly, Eq. 18 performs markedly better than its competitors in the literature, while it is also totally free from derivative evaluations per computing step.
All computations were performed in MATLAB 7.6 using variable precision arithmetic (VPA) to increase the number of significant digits. Herein, we accept an approximate solution rather than the exact root, depending on the precision of the computer; thus, we have considered the stopping criterion |f(x_n)| ≤ 10^-500. The results of the comparisons are given in Table 4 in terms of the number of significant digits for each test function after the specified number of iterations; for example, 0.1e-156 shows that the absolute value of the given nonlinear function (f1) after three iterations is zero up to 156 decimal places. We should also remark that the accuracy of iterative root solvers relies completely on the distance between the starting point and the sought zero, i.e., a bad starting approximation may lead to divergence.
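A residual-based stopping criterion of this kind can be mimicked in plain Python with the standard decimal module in place of MATLAB's VPA. The sketch below runs Steffensen's method at 60 significant digits with a tolerance of 10^-50; the precision, tolerance and test function are illustrative choices, far smaller than the 500-digit setting used in the study:

```python
from decimal import Decimal, getcontext

getcontext().prec = 60              # work with 60 significant digits

def f(x):
    return x * x - Decimal(2)       # illustrative test function, root sqrt(2)

def steffensen_hp(x0, tol=Decimal("1e-50"), max_iter=50):
    """Steffensen's method with the stopping criterion |f(x_n)| <= tol."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) <= tol:          # residual-based stopping criterion
            return x, n
        x = x - fx * fx / (f(x + fx) - fx)
    return x, max_iter

root, iters = steffensen_hp(Decimal("1.5"))
print(iters, abs(root * root - 2))  # residual is below the tolerance
```

Because the iteration is quadratically convergent, the number of correct digits roughly doubles per step, so even a 10^-50 tolerance is met in a handful of iterations from a good starting point.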
CONCLUSION
In order to avoid calculating derivatives of the function, which is often difficult or expensive in real-world problems, we have developed two classes of derivative-free iterations without memory. The two-step class consists of three evaluations of the function to achieve the optimal convergence order four and the efficiency index 1.587. The three-step class uses four pieces of information per full cycle to reach seventh-order convergence, with an efficiency index of 1.626. The applicability of two methods from our classes was examined by solving the numerical examples above, and the attained results have shown the reliability of the proposed classes.
REFERENCES
- Chowdhury, S.H., 2011. A comparison between the modified homotopy perturbation method and adomian decomposition method for solving nonlinear heat transfer equations. J. Applied Sci., 11: 1416-1420.
- Khattri, S.K. and I.K. Argyros, 2011. Sixth order derivative free family of iterative methods. Applied Math. Comput., 217: 5500-5507.
- Kung, H.T. and J.F. Traub, 1974. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach., 21: 643-651.
- Peng, Y., H. Feng, Q. Li and X. Zhang, 2011. A fourth-order derivative-free algorithm for nonlinear equations. J. Comput. Applied Math., 8: 2551-2559.
- Sargolzaei, P. and F. Soleymani, 2011. Accurate fourteenth-order methods for solving nonlinear equations. Numer. Algorithms, (In press).
- Soleimani, F. and F. Soleymani, 2011. Computing simple roots by a sixth-order iterative method. Int. J. Pure Applied Math., 69: 41-48.
- Soleymani, F., 2011. Revisit of Jarratt method for solving nonlinear equations. Numer. Algorithms, 57: 377-388.
- Soleymani, F., 2011. Regarding the accuracy of optimal eighth-order methods. Math. Comput. Modell., 53: 1351-1357.
- Soleymani, F. and V. Hosseniabadi, 2011. New third- and sixth-order derivative-free techniques for nonlinear equations. J. Math. Res., 3: 107-112.
- Soleymani, F. and B.S. Mousavi, 2011. A novel computational technique for finding simple roots of nonlinear equations. Int. J. Math. Anal., 5: 1813-1819.