
Information Technology Journal

Year: 2014 | Volume: 13 | Issue: 5 | Page No.: 839-845
DOI: 10.3923/itj.2014.839.845
An Error Compensation Integrated Approach for Signal Conditioning Circuit Based on Wavelet Transform and BP Neural Network
Hui Lu and Ruibo Liu

Abstract: To improve the precision of the signal conditioning circuit in an automatic test system, an error compensation approach was proposed based on a Loose-type Wavelet Neural Network (L-WNN), which combines the wavelet transform with a BP neural network. It was applied to obtain the error curve that describes the relationship between the input voltage and the error of the conditioning circuit. To evaluate the performance of the L-WNN model, the error curve was also compared with those obtained by a BP neural network and by regression analysis using the Least Squares Estimate (LSE). The test results show that significant improvements can be made and that the proposed approach is an efficient method for compensating the error of the signal conditioning circuit.


How to cite this article
Hui Lu and Ruibo Liu, 2014. An Error Compensation Integrated Approach for Signal Conditioning Circuit Based on Wavelet Transform and BP Neural Network. Information Technology Journal, 13: 839-845.

Keywords: wavelet transform, compensating error, signal conditioning circuit, BP neural network, LSE

INTRODUCTION

In the field of automated testing, it is vital to correct errors and increase the precision of measurement. Moreover, the signal from a sensor is rarely input to the test system directly; it usually passes through a signal conditioning circuit first so that the signal stays within the range of the data acquisition board. In addition, isolation is often applied to improve system security. Generally speaking, the function of the signal conditioning circuit is to perform operations such as amplification, attenuation, filtering and linearization. However, the signal passing through the conditioning circuit acquires an error, which affects the measurement results. Thus, it is necessary to compensate for the error of the signal conditioning circuit.

Because of signal attenuation and various kinds of interference in the process of signal transmission, the conditioning circuit is the principal source of measurement error. To solve this problem, an error compensation curve, i.e., the approximate functional relationship between the input signal and the measurement error, generally needs to be obtained. The error curve is then added to the input of the signal conditioning circuit to improve measurement precision. The schematic drawing is shown in Fig. 1.

According to the measured data, a regression analysis algorithm can set up a linear or nonlinear regression model (Motao et al., 2013). Zu et al. (2012) combined a data fitting method with data fusion technology to obtain the error curve. Owing to the nonlinear effect of the conditioning circuit on the signal, a nonlinear regression analysis is needed to fit the error compensation curve. First of all, a curve type should be selected according to a simple scatter plot of the experimental data before the nonlinear regression is carried out. Here, a 10th-order polynomial was used as the fitting curve in the following experiment.

When regression analysis is used to carry out the error compensation, the relationship between the input signal and the output signal has to be analyzed in advance so as to choose the right curve type for fitting.
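
For illustration, a minimal sketch of this LSE baseline in Python is given below, assuming NumPy; the arrays s and e are hypothetical placeholders standing in for the measured input voltages and errors, not data from this study.

```python
import numpy as np

# Hypothetical sample data: s = input voltages, e = measured conditioning errors.
s = np.linspace(-10.0, 10.0, 2000)
e = 0.002 * s + 0.001 * np.sin(3 * s)        # placeholder error values for illustration

# Least-squares fit of a 10th-order polynomial error curve, E ~ p(S).
coeffs = np.polyfit(s, e, deg=10)
error_curve = np.polyval(coeffs, s)

rmse = np.sqrt(np.mean((error_curve - e) ** 2))
print(f"LSE 10th-order polynomial RMSE: {rmse:.6f}")
```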

Fig. 1: Schematic drawing of the error compensation of signal conditioning circuit

However, the selection of the fitting curve type is not objective. Beyond that, using a particular curve to fit an irregular curve introduces error into the model. Therefore, Kang and Jin (2010) adopted a neural network and Zheng et al. (2013) adopted model identification technology to build the error compensation model. Furthermore, Deihimi et al. (2013) employed the wavelet transform to improve the prediction effect and Ying and Zhenxing (2011) used a compact-type wavelet neural network as a blind equalizer to improve the capability of underwater communication. We synthesized the advantages of the above studies and propose a loose-type WNN to improve the accuracy of the signal conditioning module.

ERROR ANALYSIS OF SIGNAL CONDITIONING CIRCUIT

The signal conditioning circuit is mainly composed of amplification and attenuation circuits; therefore, its kernel is the amplifier. The specifications of a real operational amplifier differ from those of an ideal one, so the resulting errors must be considered.

Signal errors can be sorted into three classes according to their features and properties.

Systematic error: When a value is sampled many times under invariable conditions, the constant or regularly varying error is the systematic error. Generally, the systematic error is uniform and can be reduced or eliminated.

Random error: When a value is sampled many times under invariable conditions, the error that varies irregularly is the random error. The random error is brought on by many factors, such as temperature and noise interference. It can be dealt with by applying probability and random process theory.

Gross error: Under a certain measurement condition, an error that clearly deviates from the expected value is a gross error. It can be eliminated by judging against certain rules.

In this study, we mainly discuss the method of error compensation for the systematic error and reduce the effect of the random error.

MATERIALS AND METHODS

Data set: In order to carry out error compensation for the signal conditioning circuit, samples of the input and output signals of the signal conditioning circuit must be obtained. In this study, the data acquisition experimental platform shown in Fig. 2 was set up, in which two PXI-4070 digital multimeters measured the input voltage and the output voltage of the conditioning circuit. The output voltage of the PXI-6723 was set from -10 to 10 V, increasing in steps of 0.01 V. Therefore, there are 2000 samples of input voltage and output voltage, denoted S = {s1, s2,…, s2000} and F = {f1, f2,…, f2000}, respectively. The error samples of the conditioning circuit are then E = {e1, e2,…, e2000}, with ei = fi-si (i = 1, 2,…, 2000). The error compensation curve is a nonlinear function of the input value and the error value, written E = f(S). The data were collected 6 times; therefore, 6 groups of experimental data and hence 6 error curves, denoted E1 to E6, were obtained.
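
As a small illustration of how these samples could be organized (a sketch assuming NumPy; the placeholder values below are not the measured data, and the 60% training split anticipates the division described in the results section):

```python
import numpy as np

# S: input voltages from the PXI-6723, F: conditioned outputs read by the PXI-4070s.
# Placeholders only; in the experiment each series has 2000 measured samples.
S = np.linspace(-10.0, 10.0, 2000)
F = 1.001 * S + 0.0005                        # placeholder conditioning-circuit output

E = F - S                                     # error samples, e_i = f_i - s_i

# First 60% of the series for training, the rest for testing.
split = int(0.6 * len(E))
S_train, S_test = S[:split], S[split:]
E_train, E_test = E[:split], E[split:]
```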

Fig. 2: Data acquisition experimental platform

Fig. 3: Signal decomposing

Methodology

Wavelet transform: Wavelet decomposition is a method for analyzing a time series that contains numerous components of diverse frequencies and the wavelet transform offers versatile basis functions that can be selected according to the problem at hand. Every basis function ψa,b(x) can be derived from the mother wavelet ψ(x) by dilation and translation, as in Eq. 1:

\psi_{a,b}(x) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{x-b}{a}\right) \qquad (1)

where the dilation coefficient a and the translation coefficient b belong to the set of real numbers R and a>0. The mother wavelet ψ(x) is a Daubechies function in this study.

The Daubechies wavelets are a family of orthogonal wavelets, which were applied here to decompose the signal into an approximation branch A and a detail branch D. This process is repeated; the A branches reflect the trend information and suffer less influence from instantaneous noise, whereas the D branches contain more detail characteristics. The process is shown in Fig. 3.

This decomposition is beneficial to the prediction of the error curve: because the signal obtained through the wavelet transform is more uniform and smooth in the frequency domain, some non-stationary data series can be predicted more accurately by a traditional neural network method (Soltani, 2002).

The Daubechies wavelet (db9) was used because it is orthogonal and produced the least root-mean-square error in the experiment compared with the others (Pajares and de la Cruz, 2004; Ma et al., 2005).
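
A minimal sketch of such a level-3 decomposition is given below, assuming the PyWavelets library (an assumption, as the paper does not name its software); each branch is reconstructed back to the length of the original series so that it can later be fed to the neural network.

```python
import numpy as np
import pywt

def decompose(errors, wavelet="db9", level=3):
    """Split an error series into one approximation branch and `level`
    detail branches, each reconstructed back to the original length."""
    coeffs = pywt.wavedec(errors, wavelet, level=level)   # [cA3, cD3, cD2, cD1]
    branches = []
    for i in range(len(coeffs)):
        # Keep only the i-th coefficient set, zero the rest, then reconstruct.
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        branches.append(pywt.waverec(kept, wavelet)[:len(errors)])
    return branches                                        # [A3, D3, D2, D1]
```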

BP neural network: A neural network has an excellent ability to extract patterns and detect trends from a group of imprecise data. Among the variety of artificial neural network architectures, the single-hidden-layer feed-forward network trained with back-propagation (BP neural network) is the most popular and commonly used (Bhowmik et al., 2008). Because of its powerful capability for solving problems in complex nonlinear systems, it is also adopted in this study.

The BP neural network consists of an input layer, a hidden layer and an output layer. The training process includes three steps: feed-forward of the input, calculation and back-propagation of the error and the adjustment of the weights and biases. For a three-layer BP neural network, assume there are n inputs, h hidden nodes and m outputs, expressed as xi, yj and zk. The weight between an input and a hidden node is defined as ω, the weight between a hidden and an output node is defined as ψ, the biases of the hidden and output nodes are θ and δ and the Sigmoid function f(x) is applied as the activation function. The outputs of the hidden layer and the output layer can then be represented as:

y_j = f\left(\sum_{i=1}^{n} \omega_{ij} x_i - \theta_j\right), \quad j = 1, 2, \ldots, h \qquad (2)

z_k = f\left(\sum_{j=1}^{h} \psi_{jk} y_j - \delta_k\right), \quad k = 1, 2, \ldots, m \qquad (3)

The sigmoid function is:

f(x) = \frac{1}{1 + e^{-x}} \qquad (4)

The training target is to minimize the mean square error.
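
The forward pass of Eq. 2-4 can be written compactly as the following sketch (assuming NumPy; the weight and bias shapes are illustrative choices matching the notation above, and the back-propagation training step is omitted):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))                 # Eq. 4

def forward(x, W_in, theta, W_out, delta):
    """One forward pass of a single-hidden-layer BP network.
    x: (n,) inputs; W_in: (h, n) weights omega; theta: (h,) hidden biases;
    W_out: (m, h) weights psi; delta: (m,) output biases."""
    y = sigmoid(W_in @ x - theta)                   # hidden layer, Eq. 2
    z = sigmoid(W_out @ y - delta)                  # output layer, Eq. 3
    return y, z
```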

L-WNN: The Wavelet Neural Network (WNN) is an estimator of the learning function. It combines wavelet and neural network theory and has two commonly used structures, the compact type and the loose type. The compact type utilizes wavelets as the basis functions to construct a network (Tabaraki et al., 2006), whereas the loose type is a process of decomposition, reconstruction, training and prediction. The loose type has the advantage of being intuitive and has shown outstanding performance in streamflow simulation (Ju et al., 2007). Hence, the loose type was adopted. The steps of L-WNN are illustrated in Fig. 4.


Fig. 4: Flowchart of L-WNN

The procedure of the method is as follows.

First, the error samples of the conditioning circuit (E = {e1, e2,…, e2000}) are used as a time series input to the discrete wavelet transform. Here, a wavelet of order 5 and a decomposition level of 3 were utilized; this wavelet presents an appropriate tradeoff between wavelength and smoothness. The final outputs of the wavelet transform are 4 time series: the approximation branch A3 and the detail branches D1, D2 and D3.

Second, divide each branch into a training set and a testing set. According to the length of the training set, the k data points at the front of the window are taken as a training sample and the next data point is taken as the training target of the BP neural network. The time window then moves forward and the k samples are updated. This process repeats until the training set is exhausted. The testing set is then used for prediction.
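
A sketch of this sliding-window construction is given below (assuming NumPy; series is one decomposed branch and k is the window length described above):

```python
import numpy as np

def make_windows(series, k):
    """Each training sample is k consecutive values of a branch;
    the training target is the value that follows the window."""
    X, y = [], []
    for t in range(len(series) - k):
        X.append(series[t:t + k])     # the k data points at the front of the window
        y.append(series[t + k])       # the next data point (target)
    return np.array(X), np.array(y)
```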

Finally, the predicted data of each branch are summed to obtain the desired prediction.

The error curve is obtained after the entire procedure is finished.
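
Putting the steps together, the following sketch reuses the decompose and make_windows helpers sketched above and stands in for the BP network with scikit-learn's MLPRegressor; the window length k, the hidden-layer size and the use of scikit-learn are assumptions for illustration, not the authors' exact settings.

```python
from sklearn.neural_network import MLPRegressor

def lwnn_predict(errors, k=10, train_frac=0.6, wavelet="db9", level=3):
    """Loose-type WNN sketch: decompose the error series, train one network
    per branch on sliding windows, predict the test segment of each branch
    and sum the branch predictions."""
    split = int(len(errors) * train_frac)
    total = None
    for branch in decompose(errors, wavelet, level):      # helper sketched above
        X, y = make_windows(branch, k)                    # helper sketched above
        X_tr, y_tr = X[:split - k], y[:split - k]         # windows targeting the training part
        X_te = X[split - k:]                              # windows targeting the test part
        net = MLPRegressor(hidden_layer_sizes=(10,), activation="logistic",
                           max_iter=2000, random_state=0)
        net.fit(X_tr, y_tr)
        pred = net.predict(X_te)
        total = pred if total is None else total + pred
    return total                                          # predicted error curve (test segment)
```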

RESULTS AND DISCUSSION

To validate the performance of the L-WNN algorithm, both a BP neural network and a 10th-order polynomial regression model fitted by LSE were applied to obtain the error curve. The criteria were the Root-Mean-Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE).
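
For reference, the two criteria can be computed as in the following sketch (assuming NumPy; note that MAPE is undefined at points where the true value is exactly zero, so such points are excluded here):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mape(y_true, y_pred):
    """Mean absolute percentage error (skips points where the true value is zero)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    mask = y_true != 0
    return np.mean(np.abs((y_true[mask] - y_pred[mask]) / y_true[mask])) * 100.0
```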

In order to analyze the effect of error compensation, the first 60% of S = {s1, s2,…, s2000} and E = {e1, e2,…, e2000} were taken to train the nonlinear relationship and the remainder was used as a testing set to verify the effect of error compensation. The 6 groups of acquired error curves were used to compare L-WNN, the BP neural network and the existing regression analysis algorithm (employing LSE), as shown in Table 1. In addition, in order to observe the characteristics more clearly, the first 50 points of the error curves E1 to E3 and the corresponding final measurement errors are shown in Fig. 5-7.

Fig. 5(a-b): Error curve E1 and compensation results

Fig. 6(a-b): Error curve E2 and compensation results

Fig. 7(a-b): Error curve E3 and compensation results

Table 1: Predicted results of different algorithms

Table 2: Compensation results of different algorithms

After the error curve was obtained, it was found that the L-WNN method can track more detailed and instantaneous movements. The prediction precision, measured by RMSE, was improved by up to 78% compared with LSE. The LSE method can also reflect the overall tendency but is inferior in the detail characteristics.

The measurement results after compensation were obtained by adding the error curve to the input of the signal conditioning module. The maximal error and the average error are compared in Table 2. It can be seen that a significant improvement can be made through L-WNN: the measurement precision was improved by 76% on average relative to LSE and the maximal error was also limited.

CONCLUSION

The compensation results indicate that L-WNN is more effective than the traditional regression analysis method. With its strong prediction ability, the L-WNN method avoids the uncertainty of choosing the polynomial type and its order.

Given the benefit of L-WNN over the traditional LSE method, the effect could be further enhanced if other optimization algorithms were adopted to refine L-WNN.

Nevertheless, the L-WNN algorithm takes more time to obtain the compensation curve. Therefore, one of our future works is to optimize L-WNN so that the compensation value can be obtained faster. Until then, the traditional method can still be used in applications that require more speed and less precision.

ACKNOWLEDGMENTS

We would like to thank the anonymous reviewers for their helpful comments in improving our manuscript. This study was supported by the Defence Industrial Foundation Technology Development Program of China under Grant No. J1320130001.

REFERENCES

  • Bhowmik, M.K., D. Bhattacharjee, M. Nasipuri, D.K. Basu and K.M. Kundu, 2008. Classification of polar-thermal eigenfaces using multilayer perceptron for human face recognition. Proceedings of the 3rd Conference on Industrial and Information Systems, December 8-10, 2008, Kharagpur, pp: 1-6.


  • Deihimi, A., O. Orang and H. Showkati, 2013. Short-term electric load and temperature forecasting using wavelet echo state networks with neural reconstruction. Energy, 57: 381-401.
    CrossRef    Direct Link    


  • Pajares, G. and J.M. de la Cruz, 2004. A wavelet-based image fusion tutorial. Pattern Recognit., 37: 1855-1872.
    CrossRef    Direct Link    


  • Motao, H., G. Zheng, Z. Guojun and O. Yongzhong, 2013. On the compensation of systematic errors in marine gravity measurements. Marine Geodesy, 22: 183-194.
    CrossRef    Direct Link    


  • Ju, Q., Z. Yu, Z. Hao, C. Zhu and D. Liu, 2007. Hydrologic simulations with artificial neural networks. Proceedings of the 3rd International Conference on Natural Computation, Volume 2, August 24-27, 2007, Haikou, pp: 22-27.


  • Ma, H., C. Jia and S. Liu, 2005. Multisource image fusion based on wavelet transform. Int. J. Inform. Technol., 11: 81-91.
    Direct Link    


  • Kang, P. and Z. Jin, 2010. Neural network sliding mode based current decoupled control for induction motor drive. Inform. Technol. J., 9: 1440-1448.
    CrossRef    Direct Link    


  • Soltani, S., 2002. On the use of the wavelet decomposition for time series prediction. Neurocomputing, 48: 267-277.
    CrossRef    Direct Link    


  • Tabaraki, R., T. Khayamian and A.A. Ensafi, 2006. Wavelet neural network modeling in QSPR for prediction of solubility of 25 anthraquinone dyes at different temperatures and pressures in supercritical carbon dioxide. J. Mol. Graph. Mod., 25: 46-54.
    CrossRef    Direct Link    


  • Ying, X. and L. Zhenxing, 2011. Wavelet neural network blind equalization with cascade filter base on RLS in underwater acoustic communication. Inform. Technol. J., 10: 2440-2445.
    CrossRef    Direct Link    


  • Zheng, X., J. Fu, Y. Wu and J. Chen, 2013. Error compensation of geomagnetic azimuth sensor based on model identification. Inform. Technol. J., (In Press).


  • Zu, L.Y., H.J. Meng and Z.H. Xie, 2012. A compensation model of continuous temperature measurement for molten steel in Tundish. J. Iron Steel Res., 19: 6-11.
    CrossRef    Direct Link    
