
Journal of Applied Sciences

Year: 2006 | Volume: 6 | Issue: 8 | Page No.: 1794-1797
DOI: 10.3923/jas.2006.1794.1797
Statistically-Oriented Iterative Algorithm for Improved Approximation by Modified Baskakov Operator
Ashok Sahai, M. Raghunadh Acharya and A. Shirley

Abstract: In 1885, the celebrated Weierstrass Approximation Theorem heralded interest in polynomial approximation, which continues unabated to this day. In 1957, Baskakov proposed an infinite polynomial approximation of functions. In 1967, Lupas found it of particular use in approximating functions believed to be convex. In 1984, Lehnhoff proposed and studied, for good practical reasons, a finite series curtailment of a similar infinite polynomial approximation by the Szasz-Mirakjan operator. Motivated by the same practical reasons, we propose a like finite series curtailment of the Baskakov polynomial approximation operator. Beyond that, this paper constructs an iterative algorithm intended to improve the degree of approximation by this Modified Baskakov Operator/Polynomial Approximator. The perspective motivating the proposed algorithm is Statistical: it makes fuller use of the information about the unknown function f being approximated that is available through its known values at equidistant knots in, say, [0, 1], without any loss of generality. This information is used a posteriori to improve upon the approximation by the proposed Modified Baskakov Operator/Polynomial Approximator. The improvement turns out to be available iteratively, which is seminal to the iterative nature of the proposed algorithm for improving the degree of approximation by the Modified Baskakov Operator. The potential of this improvement algorithm is brought forth and illustrated through an Empirical Simulation Study, for which the function is assumed to be known in the sense of Simulation.


How to cite this article
Ashok Sahai, M. Raghunadh Acharya and A. Shirley, 2006. Statistically-Oriented Iterative Algorithm for Improved Approximation by Modified Baskakov Operator. Journal of Applied Sciences, 6: 1794-1797.

Keywords: simulated empirical study, approximation, Baskakov operator

INTRODUCTION

The problem of approximation arises in many contexts of Numerical Analysis and Computing. Weierstrass (1885) proved his celebrated approximation theorem: if f ∈ C[a, b], then for every δ > 0 there is a polynomial p such that ‖f − p‖ < δ. In other words, the result established the existence of an algebraic polynomial in the concerned variable capable of approximating the unknown function in that variable as closely as we please! This result marked the beginning of mathematicians' interest in Polynomial Approximation of an unknown function using its values, generated experimentally or otherwise, at certain chosen knots of the relevant variable. The great Russian mathematician S. N. Bernstein proved Weierstrass' theorem in a manner that was stimulating and interesting in many ways. He first noted a simple but important fact: if Weierstrass' theorem holds for C[0, 1], it also holds for C[a, b], and conversely. Essentially, C[0, 1] and C[a, b] are identical for all practical purposes; they are linearly isometric as normed spaces, order isomorphic as lattices and isomorphic as algebras (rings). The most important contribution of Bernstein's proof of this theorem is that Bernstein actually displayed a sequence of polynomials that approximate a given function f ∈ C[0, 1]. If f(x) is any bounded function on [0, 1], the sequence of Bernstein Polynomials for f(x) is defined by:

B_n(f)(x) = Σ_{k=0}^{n} f(k/n) C(n, k) x^k (1 − x)^(n−k)
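The Bernstein polynomial B_n(f)(x) = Σ_{k=0}^{n} f(k/n) C(n, k) x^k (1 − x)^(n−k) is straightforward to evaluate from the knot values alone; a minimal Python sketch (function and variable names are ours, not the paper's):

```python
from math import comb

def bernstein(f, n, x):
    """Evaluate the Bernstein polynomial B_n(f)(x) on [0, 1].

    B_n(f)(x) = sum_{k=0}^{n} f(k/n) * C(n, k) * x**k * (1 - x)**(n - k)
    """
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))
```

Since the binomial weights sum to one, B_n reproduces constants exactly, and it is a classical fact that it also reproduces linear functions exactly while incurring a bias of x(1 − x)/n on f(t) = t².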

For readers not very familiar with polynomial approximation, this study is best read along with the references Carothers (1998, 2000), Hedrick (1927), Lorentz (1986), Shields (1987) and Weierstrass (1885). For a similar orientation in the numerical analysis and computing context, helpful references are Cheney and Kincaid (1994), Hartley and Wynn-Evans (1979) and Polybon (1992). The references Lehnhoff (1984) and Lupas (1967), on the other hand, will acquaint the reader with Baskakov (1957), which is the context of this study.

Of the many variations and generalizations of the Bernstein Polynomial, one very significant generalization, particularly for functions believed to be convex (Lupas, 1967), has been the Baskakov Operator (Baskakov, 1957):

V_n(f)(x) = Σ_{k=0}^{∞} f(k/n) C(n + k − 1, k) x^k (1 + x)^(−n−k)

Lehnhoff (1984) proposed and studied, for good practical reasons, a finite series curtailment of a similar infinite polynomial approximation by the Szasz-Mirakjan operator. Motivated by the same practical reasons, we have proposed a like finite series curtailment of the Baskakov polynomial approximation operator, say:

MBV_n(f)(x) = (1/S_n(x)) Σ_{k=0}^{n} f(k/n) v_{n,k}(x), with v_{n,k}(x) = C(n + k − 1, k) x^k (1 + x)^(−n−k)

where,

S_n(x) = Σ_{k=0}^{n} v_{n,k}(x)

is the Partial Sum of the positive weights in the Baskakov Operator in its finite series curtailment.
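Assuming the curtailed operator keeps only the terms k = 0..n of the Baskakov series and renormalizes by the partial sum S_n(x) of the retained positive weights, a minimal Python sketch is (names are ours, not the paper's):

```python
from math import comb

def v(n, k, x):
    # Baskakov basis weight v_{n,k}(x) = C(n+k-1, k) x^k (1+x)^(-n-k)
    return comb(n + k - 1, k) * x**k / (1 + x)**(n + k)

def mbv(f, n, x):
    # Finite-series curtailment of the Baskakov operator on [0, 1]:
    # keep only k = 0..n and divide by the partial sum S_n(x) so that
    # the retained weights again sum to one.
    weights = [v(n, k, x) for k in range(n + 1)]
    s_n = sum(weights)  # partial sum of the positive weights
    return sum(f(k / n) * w for k, w in zip(range(n + 1), weights)) / s_n
```

Because of the renormalization, the retained weights still form a convex combination, so MBV_n reproduces constants exactly and always returns a weighted average of the knot values f(0), f(1/n), ..., f(1).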

It may be remarked that Lehnhoff (1984) did prove the CONVERGENCE of the similar Modified Szasz-Mirakjan Operator, an analogous Finite Series Curtailment of the well-known Szasz-Mirakjan Operator, and that one could likewise proceed to prove the CONVERGENCE of the proposed Modified Baskakov Operator.

Motivating observation and the iterative algorithm: Before detailing the Iterative Algorithm for improved approximation by the Modified Baskakov Operator/approximating polynomial MBV_n(f)(x), we observe the motivating fact seminal to its proposition. Whereas all approximating polynomials are concerned with the knots and the weight functions defined over these knots, none of them uses the information available about the unknown function (targeted for the Approximation) through the known values of the function at these knots. Such information could well be used, possibly gainfully, in constructing/modifying the weight function.

In fact, as per the Statistical Perspective, such information should be used gainfully in all Estimation Problems, and the Approximation Problem is an estimation problem per this perspective, as we are essentially estimating the unknown function through our weight function, defined at the Knots chosen for the approximation operator at hand.

In fact, if we particularly consider polynomial approximation by Positive Linear Operators, we observe that the weights are interpretable as probabilities. In the context of using such an operator, say On(f)(x), the desirable and well-known Statistical Property of Asymptotic Unbiasedness ensures that the Mathematical Expectation (the value on an average) of our approximating polynomial, namely the estimate On(f)(x), must approach the function as the number of knots used, namely n, becomes very large: E{On(f)(x)} → f(x), as n → ∞.
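The probabilistic reading is concrete for the untruncated Baskakov operator: its weights v_{n,k}(x) = C(n + k − 1, k) x^k (1 + x)^(−n−k) are exactly negative-binomial probabilities, summing to one with mean k-value nx, so the random knot k/n has expectation x. A quick numerical check (our own sketch; the cutoff K merely truncates a negligible tail):

```python
from math import comb

def v(n, k, x):
    # Baskakov weight = negative-binomial probability of the count k
    return comb(n + k - 1, k) * x**k / (1 + x)**(n + k)

n, x, K = 10, 0.4, 400  # K chosen large enough that the tail is negligible
weights = [v(n, k, x) for k in range(K + 1)]
total = sum(weights)                                  # ~1: weights are probabilities
mean_knot = sum((k / n) * w for k, w in enumerate(weights))  # ~x: E[k/n] = x
```

This is precisely the sense in which the operator value is an unbiased "average" of knot values centered at x; the finite curtailment disturbs this only slightly, by the discarded tail mass.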

In the above context and using the aforesaid Statistical Perspective of the approximation being an estimation problem, the estimated error could well be interpreted as the estimated Bias.

As such, therefore, if we adjust for this bias to make the estimate better, we are not only ensuring the asymptotic unbiasedness of the estimate, but eventually accelerating the asymptotic convergence of the approximating polynomial. This very desirable idea has been seminal to the proposition of our Iterative Algorithm, targeted at improving the approximating polynomial.

This will be feasible, inasmuch as we would be reducing the Error in approximation at each iteration, using the currently available estimate of the error to bring the approximating polynomial closer to the (unknown) function.

Now if we denote the Error by E, we have: E(x) ≡ On(f)(x) − f(x). However, E(x) is unknown inasmuch as f(x) is so. Therefore, we essentially have to estimate it, and we do so by using the same Modified Baskakov Operator polynomial MBV_n(f)(x). The only difference, apparently, is that we have E in place of f; analogously, the values of this unknown Error Function E(x) are readily available as the differences between the Estimated and the Known values of the function at the knots k/n.

Hence, if we define the resultant estimating polynomial (apparently of degree n, at the most) by E_n(f)(x) (keeping in mind implicitly that MBV_n(f)(x) is the approximating polynomial, without complicating the notation by explicit incorporation of this fact), we have:

E_n(f)(x) = MBV_n(E)(x), with E(k/n) = MBV_n(f)(k/n) − f(k/n); k = 0(1)n.

Also, using this polynomial as an Estimated Bias and proceeding with the correction, the resultant Improved Modified Baskakov Approximating polynomial at the first go/iteration, say I(1)MBV_n(f)(x), will be:

I(1)MBV_n(f)(x) = MBV_n(f)(x) − E_n(f)(x)		(1)

Therefore, I(1)MBV_n(f)(x) = [I − (I − MBV_n)²](f)(x)

Apparently, if we proceed exactly analogously for the Improved Modified Baskakov Approximating polynomial at the second iteration, we will be led to:

I(2)MBV_n(f)(x) = I(1)MBV_n(f)(x) − MBV_n(I(1)MBV_n(f) − f)(x)

Therefore,

I(2)MBV_n(f)(x) = [I − (I − MBV_n)³](f)(x)		(2)

Thus, in general, if we proceed exactly analogously for the Improved Modified Baskakov Approximating polynomial at the kth iteration, we will be led to:

I(k)MBV_n(f)(x) = [I − (I − MBV_n)^(k+1)](f)(x)		(3)
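The iteration can be organized entirely in terms of knot values: at each step, the current error at the knots is smoothed by the same operator and subtracted, which realizes I(k)MBV_n = [I − (I − MBV_n)^(k+1)] without ever forming operator powers explicitly. A sketch in Python, under our reading of the curtailed operator (all names are ours):

```python
from math import comb

def v(n, k, x):
    # Baskakov basis weight v_{n,k}(x) = C(n+k-1, k) x^k (1+x)^(-n-k)
    return comb(n + k - 1, k) * x**k / (1 + x)**(n + k)

def apply_mbv(vals, n, x):
    # Modified Baskakov operator applied to knot values vals[k] ~ g(k/n)
    w = [v(n, k, x) for k in range(n + 1)]
    s = sum(w)
    return sum(g * wk for g, wk in zip(vals, w)) / s

def improved_mbv(f, n, x, iterations):
    # Iterative bias correction: A_0 = MBV_n(f); then
    # A_{m+1} = A_m + MBV_n(f - A_m), so f - A_m = (I - MBV_n)^(m+1) f.
    knots = [k / n for k in range(n + 1)]
    fvals = [f(t) for t in knots]
    a_x = apply_mbv(fvals, n, x)                       # A_0 at x
    a_knots = [apply_mbv(fvals, n, t) for t in knots]  # A_0 at the knots
    for _ in range(iterations):
        err = [fv - av for fv, av in zip(fvals, a_knots)]  # f - A_m at knots
        a_x += apply_mbv(err, n, x)
        a_knots = [av + apply_mbv(err, n, t)
                   for av, t in zip(a_knots, knots)]
    return a_x
```

With iterations = 0 this is plain MBV_n(f)(x); one iteration gives [2MBV_n − MBV_n²](f)(x), matching Eq. 1, and each further iteration raises the power of (I − MBV_n) by one.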

Nevertheless, we have no idea of the potential of the improvement in the approximating polynomials’ power for a closer approximation by the Iterative Improvement Algorithm, at any iteration. Consequently, we are led to the following section of Empirical Simulation Study aimed at bringing forth the improvement potential of our proposed Iterative Algorithm.

The empirical simulation study: To illustrate the gain in efficiency from using our proposed Iterative Algorithm of Improvement of the Modified Baskakov Operator Polynomial Approximation, we have carried out an Empirical Study. We have taken the example-cases of n = 2, 4, 7 and 10 (i.e., n + 1 = 3, 5, 8 and 11 knots) to numerically illustrate the relative gain in efficiency in using the Algorithm vis-à-vis the Original Modified Baskakov Polynomial in each example-case of the n-value.

Essentially, the empirical study is a Simulation Empirical Study, wherein we assume that the function to be approximated, namely f(x), is known to us.

Furthermore, we have confined ourselves to illustrating the relative gain in efficiency by the Iterative Improvement for the following four illustrative functions (Tables 1-4):

f(x) = exp(x), f(x) = ln(2 + x), f(x) = sin(2 + x) and f(x) = 10x

To illustrate the POTENTIAL of improvement with our proposed Iterative Algorithm, we have considered THREE Iterations and the numerical values of seven quantities: the Percentage Relative Error for the Original Modified Baskakov Polynomial (PRE_MBV[n]), the three Percentage Relative Errors (PREs) corresponding to our Improvement Iterations (# = 1, 2 or 3) (PRE_I(#)MBV[n]), and the corresponding three Percentage Relative Gains (PRGs) in using our Iterative Algorithmic Modified Baskakov Polynomial in place of the Original Modified Baskakov Polynomial (PRG_I(#)MBV[n]; # = 1(1)3). These quantities are defined as follows.

PRE_MBV[n] is the Percentage Relative Error using the (original) Modified Baskakov Operator (polynomial) with n intervals in [0, 1], i.e., [(k − 1)/n, k/n]; k = 1(1)n. Similarly, PRE_I(#)MBV[n] is the Percentage Relative Error using Improvement Iteration # (= 1, 2 or 3) on the Modified Baskakov Polynomial with the same n intervals, and PRG_I(#)MBV[n] is the Percentage Relative Gain from using the proposed Algorithmic Improvement Iteration Polynomial over the (Original) Modified Baskakov Polynomial for the approximation of the (targeted) function f. These quantities, for each of the example functions and each number of approximation Knots/Intervals, are tabulated in Tables 1-4.

Table 1: Algorithmic improvement efficiency [f(x) = exp(x)]

Table 2: Algorithmic improvement efficiency [f(x) = ln(2+x)]

Table 3: Algorithmic improvement efficiency [f(x) = sin(2+x)]

Table 4: Algorithmic improvement efficiency [f(x) = 10x]

CONCLUSIONS

These aforesaid SEVEN numerical quantities have been computed using Maple V Release 3, for all four illustrative functions, for n = 2, 4, 7, 10. Tables 1-4 contain these quantities when the function f(x) has been taken as exp(x), ln(2 + x), sin(2 + x) and 10x, respectively.

The Percentage Relative Errors: (PRE’s) for our Algorithmic Iterative Polynomial Approximations are PROGRESSIVELY lower with each subsequent iteration, as compared to that for the Original Modified Baskakov’s Polynomial Approximation, for all the illustrative functions.

The Percentage Relative Gains: (PRG’s) due to the use of our proposed Algorithmic Iterative Polynomial Approximations vis-à-vis the Original Modified Baskakov’s Polynomial Approximation are also found to be PROGRESSIVELY increasing with each subsequent iteration, for all the illustrative functions.

Lastly, it is very heartening to note that when we use TEN (n = 10) intervals, i.e., ELEVEN KNOTS, for the polynomial approximation, the Percentage Relative Gain (PRG) becomes almost 100% at the third iteration. Otherwise also, the speed of convergence is highly accelerated by the Iterative Algorithmic improvement of the Modified Baskakov Polynomial, using the Statistical perspective of reducing the Bias in the Estimator Approximating Polynomial, which, it is worth noting again, is nothing but a weighted average of the data: the known values of the unknown function f at the n + 1 knots. It could also be noted that this perspective of iterative Improvement could be applied to any Polynomial Approximator other than the Modified Baskakov Polynomial Approximation Operator, more particularly to those belonging to the class of Positive Linear Operators, as they admit the Probabilistic/Statistical perspective more readily.

REFERENCES

  • Baskakov, V.A., 1957. An instance of a sequence of linear positive operators in the space of continuous functions. Dokl. Akad. Nauk SSSR, 113: 249-251.


  • Carothers, N.L., 1998. A Short Course on Approximation Theory. Bowling Green State University, USA


  • Carothers, N.L., 2000. Real Analysis. Cambridge University Press, Cambridge


  • Cheney, W. and D. Kincaid, 1994. Numerical Mathematics and Computing. Brooks/Cole Publishing Company, USA


  • Hartley, P.J. and A. Wynn-Evans, 1979. A Structured Introduction to Numerical Mathematics. Stanley Thornes, UK


  • Hedrick, E.R., 1927. The significance of Weierstrass' theorem. Am. Math. Monthly, 20: 211-213.


  • Lehnhoff, D., 1984. On a modified Szasz-Mirakjan operator. J. Approx. Theory, 42: 278-282.


  • Lupas, A., 1967. Some properties of the linear positive operators (II). Mathematica, 9: 295-298.


  • Polybon, B.F., 1992. Applied Numerical Analysis. PWS-Kent Publishing Company, Boston


  • Shields, A., 1987. Polynomial approximation. Math. Intel., 9: 5-7.


  • Weierstrass, K., 1885. Über die analytische Darstellbarkeit sogenannter willkürlicher Functionen einer reellen Veränderlichen. Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin, pp: 633-639.
