Research Article
Approximation of Fixed Points of Certain Linear Pseudocontractive Map by a Stochastic Iterative Method
B.O. Osu
Department of Mathematics, Abia State University, Uturu
Let Rn be n-dimensional Euclidean space. If T and T* are mappings with domains and values in Rn, then T is called strongly pseudocontractive if there exists α < 1 such that the inequality
〈Tx − Ty, x − y〉 ≤ α||x − y||2   (1)
holds for all x,y∈D(T) while T* is said to be strongly accretive if
〈T*x − T*y, x − y〉 ≥ k||x − y||2 for some k > 0   (2)
Let M be an n × n matrix with positive elements.
We assume that the iterates Mt, t = 1, 2, …,
are defined and finite and that for each pair of indices i, j there exists an integer t > 0, depending in general on i and j, such that (Mt)ij > 0.
We define
(3)
where M is considered as an operator acting on column vectors having positive entries.
Then elementary calculation yields
(4)
so that clearly, T is a pseudocontractive mapping. Hence, by the fixed point theorem of Browder (1967), there is a unique fixed point x0 ∈ S with x0 = Tx0. A number of results recurrently used in economics concern theorems about sI − T, s > 0, where I is the identity matrix (Kemeny and Snell, 1960; Solow, 1952). For instance, a well-known result relevant to the direct discussion of linear models of production in econometrics is that the equation (sI − T)x = y has a unique positive solution, given by x = (sI − T)-1y, for every y whenever s > r, where r is the dominant characteristic root of T (Solow, 1952).
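This result can be checked numerically. The sketch below solves (sI − T)x = y both directly and through the Neumann series (sI − T)-1 = s-1 Σk (T/s)k, which converges whenever s exceeds the dominant characteristic root; the small positive matrix standing in for T and the vector y are illustrative assumptions, not data from the text.

```python
import numpy as np

# Small positive matrix standing in for T and an illustrative y;
# both are assumptions for this sketch, not data from the text.
T = np.array([[0.2, 0.3],
              [0.1, 0.4]])
y = np.array([1.0, 2.0])

r = max(abs(np.linalg.eigvals(T)))   # dominant characteristic root (= 0.5 here)
s = 1.0                              # any s > r works

# Direct solution of (sI - T)x = y
x = np.linalg.solve(s * np.eye(2) - T, y)

# Neumann-series form: x = (1/s) * sum_k (T/s)^k y, convergent since r/s < 1
x_series = sum(np.linalg.matrix_power(T / s, k) @ y for k in range(200)) / s

print(x)                         # all entries positive
print(np.allclose(x, x_series))  # the two agree
```

The positivity of the solution reflects the positivity of the series terms, which is the content of the cited result.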
Sometimes the condition Tx ≤ x is imposed as a fundamental assumption, so that there exists a c ≥ 0 such that Tx = x − c. This assumption is relevant to productivity in linear models (Gale, 1960). It is shown in Pruitt (1964) that if M is irreducible and sMx ≤ x for some s > 0 and nontrivial x, then xj > 0 ∀ j.
Thus if s = 1, the problem of finding x* such that
(I − T)x* = 0   (5)
is equivalent to finding x*>0 such that Tx* = x*.
The firm connection between the pseudocontractive mappings and the accretive operators is that a mapping T is pseudocontractive if and only if I-T is accretive (Browder, 1967; Chidume, 1987) so that any solution of the equation T*x = 0 is also a solution of Tx = x. This leads us to consider a stochastic sequence {xn} defined by
xn+1 = xn − ρndn, n ≥ 0   (6)
whose limit is the solution of the equation T*x = 0, where T* = I − T.
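As a noise-free sketch of such an iteration, the update xn+1 = xn − ρn(xn − Txn) = xn − ρnT*xn drives T*xn to zero; the affine map T(x) = Mx + b used here is an illustrative assumption, not the operator of the paper.

```python
import numpy as np

# Illustrative affine map T(x) = Mx + b with a small positive matrix M;
# M, b and the step sizes rho_n = 1/n are assumptions for this sketch.
M = np.array([[0.2, 0.3],
              [0.1, 0.4]])
b = np.array([1.0, 1.0])
T = lambda x: M @ x + b

x_star = np.linalg.solve(np.eye(2) - M, b)   # fixed point, here (2, 2)

x = np.zeros(2)
for n in range(1, 5001):
    rho = 1.0 / n                  # rho_n -> 0, sum rho_n = infinity
    x = x - rho * (x - T(x))       # x_{n+1} = x_n - rho_n * T* x_n

print(np.linalg.norm(x - x_star))  # small residual error
```

The diminishing steps ρn = 1/n satisfy the usual conditions Σρn = ∞ and Σρn2 < ∞ that reappear in the convergence theorem below.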
Chidume (1987) has proved the following.
Theorem 1: Suppose S is a nonempty, closed, bounded and convex subset of Lp, p ≥ 2, and T: S → S is a Lipschitz strongly pseudocontractive mapping of S into itself. Let {ρk} be a real sequence satisfying ρk > 0, Σρk = ∞ and Σρk2 < ∞. Then the iteration xk+1 = xk − ρk(xk − Txk), x0 ∈ S, converges strongly to the unique fixed point of T.
The main concern of this study is to prove strong convergence with probability one of the stochastic sequence to the fixed point of T.
PRELIMINARIES
We proceed now with some background in notation and definitions. Here, as in the sequel, || x || denotes the Euclidean norm of x∈Rn and given the points x and y in Rn, 〈x, y〉 is the usual inner product in Rn.
For the purpose of this study, a random variable will be a numerically valued observable X on a probability space (Ω, F, P). The probability measure P is characterized by the singleton probabilities
p(x) = P(X = x), x ∈ Ω   (7)
Let E denote the expectation operator. The expectation of the observation is by definition,
EX = Σx x p(x)   (8)
If E||z|| < ∞, then Ez is defined by the requirement that E〈a, z〉 = 〈a, Ez〉 for the random element z and each fixed a in Rn, where ||z||, 〈z1, z2〉 and 〈a, z〉 are real-valued observable random variables in the usual sense.
In order to solve the problem by successive approximation it is convenient to transform the equation to the form
f(x) = ½〈T*x, x〉   (9)
so that ∂f(x)/∂x = T*x and, since T* is positive definite, there exists a minimum of f in Rn and every minimizing sequence converges to the unique minimum of f at
x = x* with Tx* = x*   (10)
We shall now obtain the estimate of the gradient mapping
(11)
using the following approximation theory
APPROXIMATION THEORY
We assume that, for each set of values x1, x2, …, xn, using the function values y(x1), y(x2), …, y(xn) as observations, there are uncertainties associated with these observations
(12)
This can be modelled by adding a white noise process
ε1, ε2, …, εn   (13)
which are independently distributed about zero with common variance σ2.
The resulting model for the stochastic process is then given by
(14)
for a given tj ∈ Rn. From this it follows that, if f is transformed to a linear function of x with coefficients
the most efficient method of fitting is the method of least squares, which yields an unbiased estimate
having the least possible variance.
The natural Taylor expansion of the quadratic function f about a point xk is given by
f(x) = f(xk) + 〈∇f(xk), x − xk〉 + ½〈x − xk, H(xc)(x − xk)〉   (15)
where xc is on the line segment between x and xk and H(xc) is the Hessian of f at xc, so that, for a fixed k and j = 1, …, m,
if y(x1), y(x2), …, y(xm) are real-valued independent observable random variables, the observations being performed at points x1, x2, …, xm chosen in the neighborhood of a fixed xk, then
y(xj) = f(xj) + εj, j = 1, …, m   (16)
is identifiable with (15).
(17)
and
(18)
Then for a fixed tj in Rn satisfying
(19)
An easy calculation yields
(20)
Thus, by condition (19), tj linearizes f, so that the relationship between yj and tj, where tj = xj − xk, is adequately approximated by the model
yj = f(xk) + 〈∇f(xk), tj〉 + εj   (21)
and the least squares approximation
(22)
exists and is adequate for approximating the gradient ∇f(xk) for each k, yielding a minimum Euclidean distance between the true and the estimated gradient vectors.
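The least squares gradient estimate can be sketched as follows; the test function f, the displacement scale h and the noise level σ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative test function with known gradient; f, h, sigma are assumptions.
f = lambda x: x[0] ** 2 + 2.0 * x[1] ** 2          # true gradient (2x1, 4x2)

def ls_gradient(f, xk, m=50, h=1e-2, sigma=1e-4):
    """Fit y_j = f(x_k) + <d, t_j> + e_j by least squares; return the slope d."""
    t = h * rng.standard_normal((m, xk.size))       # small displacements t_j
    y = np.array([f(xk + tj) for tj in t]) + sigma * rng.standard_normal(m)
    A = np.hstack([np.ones((m, 1)), t])             # column of ones models f(x_k)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[1:]

xk = np.array([1.0, -1.0])
d = ls_gradient(f, xk)
print(d)   # close to the true gradient [2, -4]
```

Keeping the displacements tj small is exactly the linearization condition (19): the quadratic remainder in (15) then contributes negligibly to the fitted slope.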
The stochastic gradient-type recursive sequence
xk+1 = xk − ρkdk
is suggested, where {ρk} is a sequence of positive scalars to be specified and {dk} is a sequence of independent and identically distributed random vectors.
This procedure is a way of stochastically solving the equation T*x = 0.
The stochastic algorithm is thus given as follows. Let xk be a given estimate of x*:
(a) compute the estimate dk ≅ ∇f(xk) as in (22) earlier;
(b) compute ρk;
(c) compute xk+1 = xk − ρkdk;
(d) has the process converged, i.e., is ||xk+1 − xk|| ≤ δ for a preassigned δ > 0? Yes: take xk+1 as the approximation. No: set k = k + 1 and go back to (a) above.
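Steps (a)-(d) can be sketched end to end; the quadratic objective, the step-size rule ρk = 1/k and the tolerance δ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative quadratic objective with minimizer (1, -2); the objective,
# step-size rule and tolerance below are assumptions for this sketch.
f = lambda x: 0.5 * ((x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2)

def estimate_gradient(f, xk, m=40, h=1e-2, sigma=1e-4):
    """Least squares gradient estimate from noisy function values."""
    t = h * rng.standard_normal((m, xk.size))                # displacements t_j
    y = np.array([f(xk + tj) for tj in t]) + sigma * rng.standard_normal(m)
    A = np.hstack([np.ones((m, 1)), t])                      # intercept + slopes
    return np.linalg.lstsq(A, y, rcond=None)[0][1:]

delta = 1e-4                             # preassigned tolerance
x = np.zeros(2)
for k in range(1, 500):
    d = estimate_gradient(f, x)          # (a) d_k ~ grad f(x_k)
    rho = 1.0 / k                        # (b) rho_k -> 0, sum rho_k = infinity
    x_new = x - rho * d                  # (c) x_{k+1} = x_k - rho_k d_k
    if np.linalg.norm(x_new - x) <= delta:   # (d) converged?
        x = x_new
        break                            # yes: stop
    x = x_new                            # no: k = k + 1, back to (a)

print(x)   # near the minimizer (1, -2)
```

The stopping rule in (d) halts once successive iterates move less than δ, which here happens after the diminishing steps have averaged out the observation noise.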
CONVERGENCE THEOREM
Theorem 2: Let {ρk} be a real sequence satisfying ρk > 0, Σρk = ∞ and Σρk2 < ∞.
Then the stochastic sequence defined by x0 ∈ D(T), xk+1 = xk − ρkdk remains in D(T) and converges strongly to the fixed point of T almost surely.
Proof: Set qj = dj − Edj,
so that {qj} is a sequence of independent, identically distributed random variables with Eqj = 0 for each j.
Thus the sequence of partial sums
Sn = ρ1q1 + ρ2q2 + … + ρnqn
is a martingale. But
E||Sn||2 = Σj ρj2E||qj||2 ≤ E||q1||2 Σj ρj2.
Thus, by the hypothesis Σρj2 < ∞, supn E||Sn||2 < ∞. Hence, by the martingale convergence theorem (Whittle, 1976),
we have that Sn converges almost surely, so that ρjqj → 0 almost surely as j → ∞.
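The martingale bound can be illustrated numerically: with the illustrative choice ρj = 1/j (so that Σρj2 < ∞) and iid standard normal qj, the partial sums settle down and their tail fluctuation is small.

```python
import numpy as np

rng = np.random.default_rng(2)

# rho_j = 1/j and standard normal q_j are illustrative assumptions; then
# S_n = rho_1 q_1 + ... + rho_n q_n has E S_n^2 <= sum rho_j^2 < infinity.
N = 100_000
q = rng.standard_normal(N)
rho = 1.0 / np.arange(1, N + 1)
S = np.cumsum(rho * q)

print(abs(S[-1] - S[N // 2]))   # tail fluctuation of the partial sums is tiny
```

The variance of the tail S_N − S_{N/2} is Σ 1/j2 over the second half, of order 1/N, which is why the printed difference is close to zero.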
Hence, by Theorem 1, the unique fixed point x* such that Tx* = x* is the limit of the stochastic sequence xj+1 = xj − ρjdj as j → ∞. Since xj ∈ D(T) ∀ j and S is compact, S contains the unique fixed point satisfying Tx* = x*. Thus, the unique fixed point x* of T is the limit of every successive approximation xj+1 = xj − ρjdj for an arbitrary non-zero starting x0 ∈ S.