**ABSTRACT**

Because the cost function of CMA blind equalization does not take a second-norm form, the RLS algorithm cannot be applied to it directly. A cascade filtering method is proposed to solve this problem: the cost function is simplified into second-norm form, a Wavelet Neural Network (WNN) is used as the blind equalizer and the RLS algorithm then updates the network parameters to implement blind equalization. Meanwhile, the forgetting factor of the RLS algorithm is analyzed and an adaptive forgetting factor is proposed to improve performance: the output error is used to construct an attenuation function to which a nonlinear transform is applied, adaptively adjusting the value of the forgetting factor. Compared with BP neural network and WNN blind equalization based on the gradient descent algorithm and with WNN blind equalization based on RLS with a fixed forgetting factor, the method proposed in this study achieves a faster convergence rate and higher convergence precision. Acoustic channel simulations and a pool experiment show that the method performs well in underwater communication.


**Received:** June 28, 2011

**Accepted:** August 09, 2011

**Published:** September 30, 2011

#### **How to cite this article**

*Information Technology Journal, 10: 2440-2445.*

**DOI:** 10.3923/itj.2011.2440.2445

**URL:** https://scialert.net/abstract/?doi=itj.2011.2440.2445

**INTRODUCTION**

Multi-path propagation and limited bandwidth are the two main factors restricting the development of high-speed acoustic communication; the resulting signal distortion leads to Inter-Symbol Interference (ISI) at the receiver (Santamaria *et al*., 2004). Adaptive equalization requires a regularly transmitted known training sequence between sender and receiver to capture and track the channel characteristics, which wastes the limited bandwidth of acoustic communication. The blind equalization technique, first proposed by Godard (1980), eliminates ISI from the received signal without any training sequence, saving bandwidth and improving communication quality. Blind equalization is therefore important for future high-speed, high-quality underwater acoustic communication.

Neural network blind equalization applies not only to minimum-phase channels but also to nonminimum-phase channels, including nonlinear channels (Ying *et al*., 2005). Blind equalization by a WNN with the CMA cost function and the gradient descent algorithm suffers from a slow convergence rate and large steady-state residual error. Compared with the gradient descent algorithm, the RLS algorithm has better performance (Bin *et al*., 2009), but it cannot be applied to blind equalization directly because the cost function does not take a second-norm form. This study proposes a cascade filtering blind equalization system. A transversal filter is cascaded before the WNN and the product of the cascade filter output and the observed signal acts as the WNN input; the cost function can then be simplified into second-norm form and the RLS algorithm can be used. The cascade filter takes the WNN output as its desired signal and updates its weights at each iteration of the blind equalization. We also analyze the influence of the forgetting factor on the performance of the RLS algorithm and propose an adaptive forgetting factor, controlled by a nonlinear transform of the error at the WNN blind equalizer output. The adaptive forgetting factor further improves the convergence rate and yields higher convergence precision.

**CMA BLIND EQUALIZATION**

The principle of CMA blind equalization (Yin-Bing *et al*., 2010) can be shown as Fig. 1.

x(n) is the input sequence of the unknown channel, h(n) is the channel impulse response sequence, y(n) is the channel output and n(n) is Gaussian white noise added to y(n); the noisy y(n) is received as the blind equalizer input sequence. Blind equalization is a technology that recovers the transmitted signal x(n) from the observation sequence y(n) alone.

Fig. 1: Block diagram of blind equalization by wavelet neural network

The signal transmission process can be expressed as (Guimaraes *et al*., 2010):

y(n) = h(n)⊗x(n)+n(n)  (1)

y(n) = Σ_{i}h(i)x(n-i)+n(n)  (2)

The cost function of CMA is (Ying and Yuhua, 2009):

J(n) = E[(|x̂(n)|^{2}-R_{CM})^{2}]  (3)

where, R_{CM} is a constant value defined as:

R_{CM} = E[|x(n)|^{4}]/E[|x(n)|^{2}]  (4)

Let W(n) be the impulse response of the blind equalizer; the equalizer output x̂(n) can then be obtained as:

x̂(n) = W^{T}(n)y(n)  (5)

Then the cost function of CMA can be rewritten as:

J(n) = E[(|W^{T}(n)y(n)|^{2}-R_{CM})^{2}]  (6)

The traditional CMA blind equalization updates W(n) by the gradient descent algorithm; the gradient of the cost function with respect to the weights is:

∂J(n)/∂W^{*}(n) = 2E[(|x̂(n)|^{2}-R_{CM})x̂(n)y^{*}(n)]  (7)

Then the weight update according to the gradient descent algorithm (Yan and Fan, 2000) can be written as:

e(n) = x̂(n)(|x̂(n)|^{2}-R_{CM})  (8)

W(n+1) = W(n)-μe(n)y^{*}(n)  (9)
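The gradient-descent CMA update of Eqs. 5-9 can be sketched in a few lines of NumPy. This is a minimal illustration on a toy QPSK source and an assumed mild-ISI channel (the taps below are illustrative, not the paper's simulation channel):

```python
import numpy as np

rng = np.random.default_rng(0)

# --- toy setup: QPSK source through an assumed 3-tap FIR channel ----------
N = 5000
x = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
h = np.array([1.0, 0.4, 0.2])            # illustrative channel taps
y = np.convolve(x, h)[:N]                # received sequence y(n), noiseless here

R_cm = np.mean(np.abs(x) ** 4) / np.mean(np.abs(x) ** 2)   # Eq. 4

L = 11
W = np.zeros(L, dtype=complex)
W[L // 2] = 1.0                          # centre-tap initialisation
mu = 1e-3                                # step size in Eq. 9

for n in range(L - 1, N):
    y_vec = y[n - L + 1:n + 1][::-1]     # regressor [y(n), ..., y(n-L+1)]
    x_hat = W @ y_vec                    # Eq. 5: equalizer output
    e = x_hat * (np.abs(x_hat) ** 2 - R_cm)   # CMA error term (Eq. 8)
    W = W - mu * e * np.conj(y_vec)      # stochastic-gradient update (Eq. 9)

# dispersion of |.|^2 around R_cm before and after equalization (last 500 samples)
eq_out = np.array([W @ y[n - L + 1:n + 1][::-1] for n in range(N - 500, N)])
disp_before = np.mean(np.abs(np.abs(y[N - 500:N]) ** 2 - R_cm))
disp_after = np.mean(np.abs(np.abs(eq_out) ** 2 - R_cm))
```

After convergence the output modulus clusters near the constant R_{CM}, which is exactly the property the cost function of Eq. 6 penalizes.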

**BLIND EQUALIZATION BY WNN BASED ON RLS**

The RLS algorithm has better performance than the gradient descent algorithm at the cost of computational complexity. The underwater acoustic communication environment is so complicated that a blind equalization algorithm needs a fast convergence rate to track the channel variation and high precision to ensure the quality of the received signal.

Fig. 2: Tri-level wavelet neural network blind equalizer

Blind equalization based on the RLS algorithm is therefore well suited to acoustic channels.

**Basic principle of blind equalization by WNN:** Blind equalization by WNN takes a wavelet **neural network** as the equalizer. A three-layer WNN is shown as an example in Fig. 2, where w_{ij} are the weights between the input layer and the hidden layer and w_{j} are the weights between the hidden layer and the output layer. Let the hidden-layer input be u_{j}(n), its output I_{j}(n) and the output-layer input v(n); the WNN state equations are (Xiao-Qin *et al*., 2006):

u_{j}(n) = Σ_{i}w_{ij}(n)y(n-i+1)  (10)

I_{j}(n) = Ψ_{a,b}(u_{j}(n))  (11)

v(n) = Σ_{j}w_{j}(n)I_{j}(n)  (12)

x̂(n) = f(v(n))  (13)

where f(.) denotes the transfer function between the input and output of the output layer and Ψ_{a,b}(.) denotes the wavelet transform acting on the hidden-layer input. The transfer function f(.) is set as:

f(x) = x+αsin(πx)  (14)

(15) |

The function f(.) is smooth, gradual and monotonic, which is beneficial for distinguishing the input sequence.
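The forward pass of Eqs. 10-13 can be sketched as follows. The Morlet mother wavelet and the x+αsin(πx) transfer form are the common choices assumed here, consistent with Eq. 41 and Eq. 14:

```python
import numpy as np

def morlet(t):
    """Morlet mother wavelet, cos(1.75t)exp(-t^2/2) (Eq. 41 form)."""
    return np.cos(1.75 * t) * np.exp(-t ** 2 / 2)

def wnn_forward(y_vec, w_in, a, b, w_out, alpha=0.5):
    """One forward pass of the three-layer WNN (Eqs. 10-13).

    y_vec : equalizer input regressor
    w_in  : (hidden, input) weight matrix w_ij
    a, b  : per-neuron wavelet dilation / translation parameters
    w_out : output-layer weights w_j
    """
    u = w_in @ y_vec                      # Eq. 10: hidden-layer input u_j(n)
    I = morlet((u - b) / a)               # Eq. 11: wavelet activation I_j(n)
    v = w_out @ I                         # Eq. 12: output-layer input v(n)
    return v + alpha * np.sin(np.pi * v)  # Eq. 13: x_hat(n) = f(v(n))

# one example pass with random weights (25 inputs, 11 hidden neurons)
rng = np.random.default_rng(1)
x_hat = wnn_forward(rng.standard_normal(25),
                    rng.standard_normal((11, 25)) * 0.1,
                    np.ones(11), np.zeros(11),
                    rng.standard_normal(11) * 0.1)
```

The 25x11x1 shape mirrors the network used later in the simulations.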

Fig. 3: Model of complex neurons

Parameter α determines the nonlinearity of the network output: if α = 0 the output is linear, and the nonlinear approximation ability of the network grows as α becomes larger. However, if α is too large the blind equalization algorithm becomes unstable; in practical applications α∈(0,1) is usually selected.

For PSK, QAM and other phase-related modulation systems, the algorithm must handle complex signals, so a suitable complex transfer function must be designed as the network transfer function. In a complex system the signal is split into two branches before entering the transfer function, one carrying the real part and the other the imaginary part; the model structure is shown in Fig. 3.

The corresponding output layer transfer function can be written as:

f(v(n)) = f_{R}[v_{R}(n)]+jf_{I}[v_{I}(n)]  (16)

where f_{R}(.) and f_{I}(.) have the same form as the transfer function f(.). The network weights are modified accordingly in separated (real/imaginary) form:

W(n) = W_{R}(n)+jW_{I}(n)  (17)
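The split real/imaginary processing of Eq. 16 is straightforward to sketch, again assuming the odd x+αsin(πx) transfer form of Eq. 14 for both branches:

```python
import numpy as np

def f_real(v, alpha=0.5):
    # real-valued transfer f(.), assumed x + alpha*sin(pi*x) form (Eq. 14)
    return v + alpha * np.sin(np.pi * v)

def f_complex(v, alpha=0.5):
    # Eq. 16: real and imaginary parts pass through identical branches
    return f_real(v.real, alpha) + 1j * f_real(v.imag, alpha)

z = f_complex(0.3 + 0.4j)
```

Because f_{R} and f_{I} share the same odd function, the mapping commutes with complex conjugation, which keeps the constellation symmetric for QPSK/QAM signals.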

**Blind equalization by WNN with cascade filter:** The cost function of CMA blind equalization does not take the second-norm form, so the RLS algorithm cannot be used directly in blind equalization by WNN. Here define u(n) as:

u(n) = x̂^{*}(n)y(n)  (18)

Then the cost function can be written as:

J(n) = E[(W^{T}(n)u(n)-R_{CM})^{2}]  (19)

The cost function in Eq. 19 takes the second-norm form if u(n) is a variable independent of W(n). But from the definition in Eq. 18, the expression of u(n) includes W(n), so u(n) cannot be regarded as independent of W(n).

Fig. 4: Blind equalization by WNN with cascade filter

Here we add a cascade filter to the blind equalization system and let the filter output approximate W^{T}(n)y(n); the cascade filter output z(n) is written as:

z(n) = W_{F}^{T}(n)y(n)  (20)

where W_{F}(n) are the weights of the cascade filter. According to the definition of z(n), the cascade filter weights can be updated by minimizing the cost function:

J_{F}(n) = E[|x̂(n)-z(n)|^{2}]  (21)

The input of the WNN blind equalizer can then be replaced by the product of the observation signal y(n) and the cascade filter output, so u(n) can be written as:

u(n) = z^{*}(n)y(n)  (22)

Equation 22 shows that the expression of u(n) no longer includes W(n), so the cost function of Eq. 19 takes the second-norm form and, as a result, the RLS algorithm can be used. The blind equalization system with the cascade filter is shown in Fig. 4. The output-layer error is:

(23) |

The corresponding Kalman gain (Azimi-Sadjadi and Liou, 1992; Scalero and Tepedelenlioglu, 1992) is:

k_{o}(n) = P_{o}(n-1)I(n)/[λ+I^{H}(n)P_{o}(n-1)I(n)]  (24)

where 0<λ<1 is the forgetting factor and P_{o}(n) is the correlation matrix of the input signal of the output layer of the WNN. I(n) = [I_{1}(n), I_{2}(n),..., I_{N}(n)] is the input of the output layer of the WNN, i.e., the output of the hidden layer. P_{o}(n) can be estimated with an exponential-decay sliding window:

P_{o}(n) = λ^{-1}[P_{o}(n-1)-k_{o}(n)I^{H}(n)P_{o}(n-1)], P_{o}(0) = Λ  (25)

where Λ is the unit matrix.

(26) |

The error of the hidden layer is:

(27) |

(28) |

where P_{hj}(n) is the correlation matrix of the input signal of the hidden layer, estimated in the same way as P_{o}(n) with an exponential-decay sliding window, written as:

(29) |

and then the hidden-layer weights of the WNN can be updated by:

(30) |

In the WNN blind equalizer, a large number of trials show that a and b should change only slightly at each iteration; severe changes of a and b often cause divergence. Therefore, a and b are updated by the gradient descent algorithm during the iterative process, while the weights of the cascade filter are updated by the RLS algorithm:

k_{F}(n) = P_{F}(n-1)y(n)/[λ+y^{H}(n)P_{F}(n-1)y(n)]  (31)

W_{F}(n+1) = W_{F}(n)+k_{F}(n)[x̂(n)-z(n)]^{*}  (32)

where P_{F}(n) is the correlation matrix of the input signal of the cascade filter, estimated with an exponential-decay sliding window as follows:

P_{F}(n) = λ^{-1}[P_{F}(n-1)-k_{F}(n)y^{H}(n)P_{F}(n-1)]  (33)

**Adaptive forgetting factor:** In the traditional RLS algorithm the forgetting factor λ keeps a fixed value during the iteration. By the principle of the RLS algorithm, λ reflects the degree to which prior information is remembered: a larger λ keeps more prior information and gives higher convergence precision, but the convergence rate tends to be slow. Conversely, a smaller λ gives a faster convergence rate but a larger convergence error, which can even make the RLS algorithm unstable (Via and Santamaria, 2005). Here we design an adaptive forgetting factor to improve the performance of the RLS algorithm. The basic principle is: (a) at the beginning of blind equalization the equalizer error is large, so a smaller λ is set to improve the convergence rate; (b) as the iteration proceeds, λ gradually increases to obtain higher convergence precision. To implement this idea, a control factor θ(n) is first defined as follows:

(34) |

(35) |

(36) |

(37) |

where 0<α<1 is a constant controlling the update of the parameters in Eq. 34. From Eq. 23 and the triangle inequality theorem, we obtain the following inequality:

(38) |

R_{ee}(n), R_{xx}(n) and R_{RR} estimate |e_{o}(n)|^{2}, |x̂(n)|^{2} and R^{2}_{CM}, respectively, during the iteration and are all initialized to 1. |x̂(n)|^{2} gradually increases toward R_{CM} during the iteration and the denominator reaches its largest value when equality holds; meanwhile |e_{o}(n)| gradually decreases toward 0. Thus θ(n)∈(0,1) decreases monotonically with the iteration count. For the forgetting factor λ, however, we expect a smaller value at first that gradually increases, so we apply a nonlinear transform to θ(n) as follows:

(39) |

where c>0 is a constant. Obviously, γ(n) increases monotonically with the iteration count, its value is limited to (0,1) and it meets the need of the adaptive forgetting factor in the RLS algorithm. To guarantee the robustness of the algorithm and retain the ability to track channel variation, we limit λ to [λ_{min}, λ_{max}]; the adaptive forgetting factor is then adjusted as follows:

(40) |
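The adaptive forgetting factor can be sketched as follows. Since the exact transform of Eq. 39 is not reproduced here, an exponential map γ(n) = exp(-c·θ(n)) is assumed; it has the required properties (monotonically increasing in the iteration count as θ(n) shrinks, bounded in (0,1)), and λ is clipped to [λ_{min}, λ_{max}] per Eq. 40:

```python
import numpy as np

def adaptive_lambda(theta, c=2.0, lam_min=0.7, lam_max=0.9999):
    """Map the control factor theta(n) in (0,1) to a forgetting factor.

    theta decreases as the equalizer error shrinks, so exp(-c*theta)
    grows toward 1: lambda rises from a small value (fast initial
    convergence) toward lam_max (high steady-state precision).
    The exponential form is an assumed realisation of Eq. 39.
    """
    gamma = np.exp(-c * theta)               # assumed nonlinear transform
    return np.clip(gamma, lam_min, lam_max)  # Eq. 40: limit to [lam_min, lam_max]

# early iterations: large error -> theta near 1 -> small lambda
# late iterations:  small error -> theta near 0 -> lambda near lam_max
lam_early = adaptive_lambda(0.95)
lam_late = adaptive_lambda(0.05)
```

This reproduces the intended behaviour: a small λ while the equalizer error is large, growing toward λ_{max} as the error decays.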

**SIMULATIONS AND POOL EXPERIMENT**

**Computer simulations:** In the simulations an equiprobable binary sequence is used as the transmitted signal with QPSK modulation. The added noise is band-limited Gaussian white noise with zero mean. The channel impulse response is h = [0.3132, -0.1040, 0.8908, 0.3134], a typical deep-sea underwater acoustic channel (Dai *et al*., 2010). The channel h is mixed-phase, so CMA blind equalization with a transversal filter cannot give a good result. In the WNN, the Morlet wavelet (Kania and Farley, 1993) is chosen:

Ψ(x) = cos(1.75x)exp(-x^{2}/2)  (41)

The parameters for blind equalization by WNN with a cascade filter based on the RLS algorithm are initialized as follows: the weights W_{F}(n) are set to zero, λ(0) = 0.85, λ_{min} = 0.7, λ_{max} = 0.9999, the structure of the WNN is 25x11x1 and the WNN weights are initialized randomly within (-0.5, 0.5). After 500 Monte Carlo runs without and with noise interference (SNR = 20 dB), respectively, we obtain the results shown in Fig. 5 and 6. The comparison is in terms of the Mean Square Error (MSE), defined as:

(42) |

In Fig. 5 and 6 we compare blind equalization by WNN and by BP neural network. LMS-BP denotes a BP neural network equalizer trained by the gradient descent algorithm; LMS-WNN likewise uses a WNN as the blind equalizer; RLS-WNN is the method of this paper but with a fixed forgetting factor (λ = 0.99 in the simulation); and MRLS-WNN is the full method of blind equalization by WNN with cascade filter based on the RLS algorithm with the adaptive forgetting factor. In LMS-BP, the **neural network** structure is set the same as the WNN for comparison under the same conditions and the step size of the gradient descent algorithm is μ = 0.001. Figure 5 shows that RLS-WNN and MRLS-WNN have a faster convergence rate and higher convergence precision than LMS-BP and LMS-WNN, and MRLS-WNN converges faster than RLS-WNN because a smaller forgetting factor is set in the initial stage.

Fig. 5: MSE curve (noiseless channel)

Fig. 6: MSE curve (SNR = 20 dB)

Because there is no noise, the convergence precision of MRLS-WNN and RLS-WNN is almost identical. Under the SNR = 20 dB condition shown in Fig. 6, blind equalization by WNN based on the RLS algorithm again outperforms the gradient descent algorithm, and MRLS-WNN has the best convergence rate and convergence precision. Under noise interference, RLS-WNN has a larger residual error than MRLS-WNN. The simulation results show the effectiveness of the method proposed in this study.

**Pool experiment:** To validate the practicability of the MRLS-WNN blind equalization algorithm, an experiment was conducted in the channel pool at Harbin Engineering University. The pool is 25 m long and 2.5 m wide, with a water depth of 1.8 m. The bottom of the pool is covered with 30 cm of silver sand; ceramic tiles on both sides and armor plate at both ends reflect and refract the sound, making the pool environment a multi-path channel.

Fig. 7(a-d): Comparison results of blind equalization in pool experiment; (a) LMS-BP, (b) LMS-WNN, (c) RLS-WNN and (d) MRLS-WNN

Communication in the channel pool is therefore affected by severe inter-symbol interference.

In the pool experiment the structure of the WNN and the BP **neural network** is 30x15x1 and the other parameters are initialized as in the computer simulation. The distance between transmitter and receiver is 10.5 m. Image data modulated by QPSK were used as the transmitted data with SNR = 17.6 dB. The Bit Error Rate (BER) was 0.4728 before equalization and 0.0465, 0.0223, 0.0034 and 10^{-6} after blind equalization with LMS-BP, LMS-WNN, RLS-WNN and MRLS-WNN, respectively. Figure 7a-d shows the recovered images for the four methods.

**CONCLUSION**

A cascade filter is added to the blind equalization system by WNN so that the CMA cost function is simplified into second-norm form; the RLS algorithm can then be used directly to update the WNN parameters. Meanwhile, an adaptive forgetting factor designed from the output error further improves the algorithm's performance. Although the algorithm increases the computational complexity, the cost is worthwhile to ensure high-quality communication in the complex underwater acoustic environment. Simulations and a pool experiment show the good performance of the proposed method, and this work contributes to the development of underwater acoustic communication.

**REFERENCES**

- Santamaria, I., C. Pantaleon, L. Vielva and J. Ibanez, 2004. Blind equalization of constant modulus signals using support vector machines. IEEE Trans. Signal Process., 52: 1773-1782.

- Godard, D., 1980. Self-recovering equalization and carrier tracking in two-dimensional data communication systems. IEEE Trans. Commun., 28: 1867-1875.

- Yin-Bing, Z., Z. Jun-Wei, G. Ye-Cai and L. Jin-Ming, 2010. A constant modulus algorithm for blind equalization in a-stable noise. Applied Acoust., 71: 653-660.

- Guimaraes, A., B. Ait-El-Fquih and F. Desbouvries, 2010. A fixed-lag particle smoother for blind SISO equalization of time-varying channels. IEEE Trans. Wirel. Commun., 9: 512-516.

- Yan, G. and H. Fan, 2000. A newton-like algorithm for complex variables with applications in blind equalization. IEEE Trans. Signal Process., 48: 553-556.

- Azimi-Sadjadi, M.R. and R.J. Liou, 1992. Fast learning process of multilayer neural network using recursive least squares. IEEE Trans. Signal Process., 40: 446-450.

- Scalero, R.S. and N. Tepedelenlioglu, 1992. A fast new algorithm for training feedforward neural network. IEEE Trans. Signal Process., 40: 202-210.

- Dai, W.J., G.D. Yang, K.L. Zhuang and Y. Guo, 2010. Research on a steady blind equalization algorithm suitable for non-constant modulus signals used in underwater acoustic channels. Ship Sci. Technol., 32: 54-57.