INTRODUCTION
According to Granger and Andersen (1978), bilinear models
are formed by adding a bilinear form to the autoregressive/moving average
(ARMA) models, leading to

X_{t} = Σ_{i=1}^{p} a_{i}X_{t-i} + Σ_{j=1}^{q} c_{j}e_{t-j} + Σ_{i=1}^{m}Σ_{j=1}^{k} b_{ij}X_{t-i}e_{t-j} + e_{t}, 
(1)

where {e_{t}} is a sequence of i.i.d. random variables
with zero mean and finite variance,
and e_{t} is independent of X_{s}, s < t. The formal
difference between a bilinear time series model and an ARMA model is the
bilinear term X_{t-i}e_{t-j}.
Bilinear models were first studied in the context of
nonlinear control systems, but their application as time series models
was investigated principally by Granger and Andersen (1978) and Subba
Rao (1981). Following Subba Rao (1981), we denote (1) by BL(p, q, m,
k), where BL is an abbreviation for bilinear. Subba Rao et al. (1984)
also give a comprehensive account of this class of models. Sesay and
Subba Rao (1988, 1991), Akamanam et al. (1986), Gabr (1988), Subba
Rao and Silva (1993) and many other authors have examined various simple
forms of (1) in the context of stationarity, invertibility and estimation.
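A simple member of this class is easy to simulate. The following Python sketch generates a BL(1, 0, 1, 1) series; the coefficients a = 0.4 and b = 0.3, the N(0, 1) noise, and the burn-in length are illustrative assumptions, not values taken from this study:

```python
import random

def simulate_bl(n, a=0.4, b=0.3, seed=0, burn_in=200):
    """Simulate the BL(1, 0, 1, 1) model
    X_t = a*X_{t-1} + b*X_{t-1}*e_{t-1} + e_t, with e_t ~ N(0, 1)."""
    rng = random.Random(seed)
    x_prev = 0.0
    e_prev = 0.0
    xs = []
    for t in range(n + burn_in):
        e = rng.gauss(0.0, 1.0)
        x = a * x_prev + b * x_prev * e_prev + e
        if t >= burn_in:          # discard the start-up transient
            xs.append(x)
        x_prev, e_prev = x, e
    return xs

series = simulate_bl(500)
print(len(series))
```

The recursion makes clear why the model is "bilinear": it is linear in X for fixed e, and linear in e for fixed X, but not jointly linear.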
The motivation for using squared data values to detect nonlinearity
is provided by a result inherent in the work of Granger and Newbold (1976).
They showed that for a series {X_{t}} which is normal (and therefore
linear),

σ_{k}(X_{t}^{2}) = [σ_{k}(X_{t})]^{2},

where σ_{k}(·) denotes the lag-k autocorrelation.
Any departure from this result would presumably indicate a degree of
nonlinearity, a fact pointed out by Granger and Andersen (1978).
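This property is easy to check by simulation. The Python sketch below compares the lag-1 autocorrelation of X_{t}^{2} with the square of the lag-1 autocorrelation of X_{t} for a Gaussian AR(1) series; the AR coefficient 0.6, the sample size and the seed are illustrative assumptions:

```python
import random

def acf(x, k):
    """Biased sample autocorrelation at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n
    return ck / c0

rng = random.Random(1)
x = [0.0]
for _ in range(20000):
    x.append(0.6 * x[-1] + rng.gauss(0.0, 1.0))
x = x[1:]
sq = [v * v for v in x]

# For a Gaussian (hence linear) series the two quantities should agree.
print(round(acf(x, 1) ** 2, 3), round(acf(sq, 1), 3))
```

For a bilinear series the two printed numbers would differ systematically, which is exactly the diagnostic this section develops.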
Granger and Andersen (1978) have also shown that the
single-term bilinear time series {X_{t}} satisfying

X_{t} = bX_{t-1}e_{t-k} + e_{t} 
(2)

has the same covariance structure as an ARMA(1, k) process.
We show below that the square of the pure diagonal bilinear process
has the same covariance structure as an ARMA(p, p) process.
Properties of Squares of PDBL Model
Now consider the pure diagonal bilinear model satisfying

X_{t} = Σ_{j=1}^{p} b_{j}X_{t-j}e_{t-j} + e_{t}, 
(3)

where {e_{t}} is a sequence of i.i.d. random variables with zero mean
and constant variance σ^{2}.
Let W_{t} = X_{t}^{2}.
We are going to consider three cases, namely k < p,
k = p and k > p, where k is the lag of the autocovariance coefficient.
Case 1: k<p
It can be shown that
But the autocovariance function of a stationary process
{X_{t}} is given by

R(k) = E[(X_{t} - μ)(X_{t+k} - μ)] = E(X_{t}X_{t+k}) - μ^{2}.

Therefore,

R_{w}(k) = E(W_{t}W_{t+k}) - μ_{w}^{2}, 

where R_{w}(k) is the autocovariance function
of W_{t} = X_{t}^{2} at lag k and μ_{w} = E(X_{t}^{2}). Therefore,
Case 2: k = p
It can easily be shown that:
and
Therefore,
Substituting for
in (4) and simplifying we obtain
Observe that this is a Yule–Walker type difference equation.
Case 3: k>p
In this case, it can easily be shown that
Substituting for
in (4) and simplifying, we obtain
This is the Yule–Walker equation for an ARMA(p, p) model.
The present study of the squares of X_{t} satisfying (1) leads to the
following theorem, needed for identification purposes.
Theorem 1
Let {e_{t}} be a sequence of independent and identically distributed
random variables with E(e_{t}) = 0 and finite variance.
Suppose there exists a stationary and invertible process {X_{t}}
satisfying

X_{t} = Σ_{j=1}^{p} b_{j}X_{t-j}e_{t-j} + e_{t}

for some constants b_{1}, b_{2}, ..., b_{p},
p > 0. Then {X_{t}^{2}}
will be an ARMA(p, p) model.
COMPARISON WITH A LINEAR MODEL
Here it is shown that if {X_{t}} is MA(p), then
{X_{t}^{2}} is also MA(p). We proceed as follows.
For the MA(p) model

X_{t} = e_{t} + Σ_{j=1}^{p} θ_{j}e_{t-j}.
Then
It is easy to show that the following are true:
and
We proceed to treat the autocovariance as follows
CASE 1: k ≤ p
For k ≤ p, it can be shown that
Taking expectation, we have
Substituting in Eq. 5, we have that
CASE 2: k>p
We recall that
and
Thus
Hence {X_{t}^{2}}
is also an MA(p) process.
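This result can be illustrated numerically. The minimal Python sketch below assumes an MA(1) with illustrative coefficient 0.5 and Gaussian noise (values not taken from the source), and shows that the autocorrelation of the squared series also cuts off after lag 1:

```python
import random

def acf(x, k):
    """Biased sample autocorrelation at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x) / n
    ck = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / n
    return ck / c0

rng = random.Random(2)
e = [rng.gauss(0.0, 1.0) for _ in range(20001)]
y = [e[t + 1] + 0.5 * e[t] for t in range(20000)]   # MA(1) series
sq = [v * v for v in y]

# Both y_t and y_t^2 have autocorrelations that vanish beyond lag 1.
print(round(acf(sq, 1), 3), round(acf(sq, 2), 3))
```

The lag-1 value is clearly nonzero while the lag-2 value is indistinguishable from sampling noise, matching the MA(1) signature of the squared series.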
SIMULATION RESULTS
Here we present some simulations to illustrate the results
obtained in this study. In what follows, the random variables {e_{t}}
are mutually independent and identically distributed as N(0, σ^{2}).
The processes considered are:

X_{t} = 0.7X_{t-1}e_{t-1} + e_{t} 
(10)

Y_{t} = 0.7 + e_{t} + 0.146e_{t-1} 
(11)
The simulation and estimation were done using MINITAB.
For purposes of illustration we have, without loss of generality, taken
σ^{2} = 1 for (10) and (11). We generated for each process
200 observations (X_{1}, X_{2}, ..., X_{200}). The
autocorrelations for X_{t}, X_{t}^{2}, Y_{t} and Y_{t}^{2} were estimated.
The estimator

r_{k} = R̂(k)/R̂(0), k = 1, 2, 3, ...

was used to estimate the autocorrelation, where

R̂(k) = n^{-1} Σ_{t=1}^{n-k} (X_{t} - X̄)(X_{t+k} - X̄)

is the estimate of the autocovariance R(k) and X̄ = n^{-1} Σ_{t=1}^{n} X_{t}
is the estimate of the mean.
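The same experiment can be sketched in Python rather than MINITAB. The seed below is arbitrary, so the estimates will not reproduce Table 1 exactly, but the qualitative pattern, with the squared bilinear series showing richer autocorrelation than the squared linear one, should persist:

```python
import random

def r(x, k):
    """Sample autocorrelation r_k = R(k)/R(0), with the biased
    autocovariance estimate R(k) = n^{-1} sum (x_t - xbar)(x_{t+k} - xbar)."""
    n = len(x)
    xbar = sum(x) / n
    c0 = sum((v - xbar) ** 2 for v in x) / n
    ck = sum((x[t] - xbar) * (x[t + k] - xbar) for t in range(n - k)) / n
    return ck / c0

rng = random.Random(3)
n = 200
e = [rng.gauss(0.0, 1.0) for _ in range(n + 1)]

X, Y = [], []
x_prev = 0.0
for t in range(1, n + 1):
    x = 0.7 * x_prev * e[t - 1] + e[t]         # bilinear model (10)
    y = 0.7 + e[t] + 0.146 * e[t - 1]          # linear model (11)
    X.append(x)
    Y.append(y)
    x_prev = x

X2 = [v * v for v in X]
Y2 = [v * v for v in Y]
for k in (1, 2, 3):
    print(k, round(r(X, k), 3), round(r(X2, k), 3),
          round(r(Y, k), 3), round(r(Y2, k), 3))
```

Each printed row corresponds to one lag, in the same column order as Table 1.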
Table 1: Estimated autocorrelations for X_{t}, X_{t}^{2}, Y_{t} and Y_{t}^{2}
As these estimators have been discussed in detail by Chatfield (1980), they
have just been stated here. The parameters have been carefully chosen to ensure
the invertibility and stationarity of the processes. Table 1
gives the estimated autocorrelations of the models (10) and (11) and of their squares.
X_{t} is seen to identify as an MA(1) under
covariance analysis, while X_{t}^{2} identifies at least as an ARMA(1, 1), as the theory predicted.
Both Y_{t} and Y_{t}^{2} would identify as no more than MA(1). Therefore,
looking at the square of a series is a useful way of distinguishing between
a linear and a bilinear model having the same covariance properties.
DISCUSSION AND CONCLUSION
One way of distinguishing between linear and nonlinear
models is to perform a second-order analysis on the squares of the series.
Some authors have shown that for a series {X_{t}} which is normal
(and therefore linear),

σ_{k}(X_{t}^{2}) = [σ_{k}(X_{t})]^{2}, 
(12)

where σ_{k}(·) denotes the lag-k
autocorrelation. Any departure from this result would presumably indicate
a degree of nonlinearity, a fact pointed out by Granger and Andersen
(1978).
We have, however, shown in this paper that this result
(12) does not hold for the pure diagonal bilinear model. We have also shown
that the covariance structure of the square of a moving average time
series is the same as the covariance structure of the original series.
This result can be used to distinguish between a pure diagonal bilinear model and
a linear model.