**ABSTRACT**

In view of the low precision of current reconstructed images in Electrical Capacitance Tomography (ECT), a new image reconstruction algorithm based on Chebyshev neural networks is proposed. This neural network not only extends the identification ability and learning adaptability of conventional neural networks but also offers a simple algorithm, fast convergence of the learning process and excellent accuracy in both linear and nonlinear approximation. ECT image reconstruction experiments show that the method improves reconstructed image quality and confirm the effectiveness of the proposed method.


**Received:** January 06, 2011;

**Accepted:** April 19, 2011;

**Published:** June 18, 2011

**How to cite this article**

*Information Technology Journal, 10: 1614-1619.*

**DOI:** 10.3923/itj.2011.1614.1619

**URL:** https://scialert.net/abstract/?doi=itj.2011.1614.1619

**INTRODUCTION**

Electrical Capacitance Tomography (ECT) emerged and developed in the late 1980s. It has many advantages: it is non-invasive, simple in structure, low in cost, fast in response, safe and widely applicable. The image reconstruction algorithm is a key technology of ECT. Solving this kind of problem is difficult because of the limited number of independent capacitance measurements, the soft-field characteristics of the sensor, the non-linearity of the problem to be solved and so on. Current algorithms are still far from meeting the requirements of industrial application, so research into good image reconstruction algorithms is important and urgent (Masoudzadeh and Khalilian, 2007).

This study presents an image reconstruction algorithm based on Chebyshev **neural networks** for electrical capacitance tomography. Experimental results show that the algorithm is effective. For the reconstruction objects investigated in this study, the quality of the reconstructed image is better than that of the LBP, Landweber and conjugate gradient algorithms, so the method provides a new and effective approach to ECT image reconstruction.

**THE BASIC PRINCIPLES OF THE ECT SYSTEM**

The schematic of a typical 12-electrode ECT system is shown in Fig. 1. An ECT system has three basic components: the capacitive sensor, the capacitance data acquisition system and the image reconstruction computer (Haiqing and Zhirao, 2000).

Fig. 1: | System structure of ECT |

The basic principle is that each phase in a multi-phase medium has a different dielectric constant, and the capacitive sensor installed on the outer wall of the insulated pipeline measures the inter-electrode capacitances (Larbi and Abdelkader, 2005). Because the capacitances reflect the distribution of the dielectric constant inside the pipe, a computer can reconstruct the image from these data according to a certain algorithm, giving an intuitive picture of the phase distribution and more information about the distribution of the multi-phase flow.

To reduce the error between the theoretical and the measured data, the capacitance field sensitivity is analyzed in normalized form. The normalized capacitance field sensitivity distribution between plates i and j is defined as:

S_{i,j}(t) = [C_{i,j}(t) − C_{i,j}(ε_{1})] / [C_{i,j}(ε_{2}) − C_{i,j}(ε_{1})]·μ(t)

In the formula, ε_{2} and ε_{1} are the dielectric constants of the two phases inside the pipe (ε_{2} > ε_{1}); C_{i,j}(t) is the capacitance between plates i and j when unit t has dielectric constant ε_{2} and all other units have ε_{1}; C_{i,j}(ε_{2}) and C_{i,j}(ε_{1}) are the capacitances between plates i and j when the pipe is filled entirely with medium ε_{2} and ε_{1}, respectively; μ(t) is the reciprocal of the area of unit t and Nein is the number of units in the pipeline (Lihui *et al*., 2004).

The capacitance field sensitivity distribution represents the change of the electrode capacitance when the dielectric constant of a unit changes from ε_{1} to ε_{2}; it is the change in density of the normalized capacitance. In this study, 66 independent capacitance components are obtained by the **finite element** method with a capacitance-sensitive 12-electrode array (Hayati, 2007). The sensitive field is subdivided by triangulation, as shown in Fig. 2; there are 192 subdivision units.
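As an illustration only (the array names, shapes and calibration values below are assumptions, not the authors' code), the capacitance normalization described above can be sketched in Python:

```python
import numpy as np

def normalize_capacitance(c_meas, c_low, c_high):
    """Map measured capacitances onto [0, 1] between the two calibration
    states: pipe full of the low-permittivity phase (c_low) and pipe full
    of the high-permittivity phase (c_high)."""
    return (c_meas - c_low) / (c_high - c_low)

# A 12-electrode sensor yields 12 * 11 / 2 = 66 independent electrode pairs.
n_pairs = 12 * 11 // 2

# Hypothetical calibration and measurement vectors of length 66:
c_low = np.full(n_pairs, 1.0)
c_high = np.full(n_pairs, 3.0)
c_meas = np.full(n_pairs, 2.0)
lam = normalize_capacitance(c_meas, c_low, c_high)  # all elements 0.5 here
```

The same two-point normalization is what removes the dependence on the absolute capacitance scale before the data are passed to the reconstruction algorithm.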

**THE CHEBYSHEV ALGORITHM OF THE ECT SYSTEM**

**Chebyshev neural network model:** The MIMO Chebyshev **neural network** model is shown in Fig. 3. The dimension of the input vector is n = 66, corresponding to the 66 sensor capacitance values. The hidden layer has L units, one per support vector, so the inner product is computed L times. The output vector corresponds to the 192 gray-level subdivision units in the sensitive field, that is, m = 192.

The Chebyshev **neural network** not only has a simple algorithm and fast convergence of the learning process but also has excellent accuracy in both linear and nonlinear approximation. Therefore, this model has played a large role in promoting the application of neural networks. In this study, a modified Chebyshev **neural network** model is advanced which is not only simple and fast to converge but also accepts arbitrary input values (Taiwo and Abubakar, 2011). It is thus a MIMO multi-layer feed-forward **neural network** model with excellent linear and nonlinear approximation characteristics.

The connection weights from the input layer to the hidden layer are fixed at 1. The weights from the hidden layer to the output layer form the matrix:

W = (W_{i,j}), i = 1, 2, ..., g, j = 1, 2, ..., m

In the formula, m is the number of output neurons, g is the number of hidden neurons and W_{i,j} is the connection weight from hidden neuron i to output neuron j. The input samples are u_{k} = [x_{1,k}, x_{2,k}, ..., x_{n,k}]^{T} (k = 1, 2, ..., s),

In the formula, n is the number of input neurons and s is the number of input samples. The actual output is y_{k} = [y_{1,k}, y_{2,k}, ..., y_{m,k}]^{T}.

Fig. 2: | 12-electrode capacitance sensor subdivision map |

Fig. 3: | MIMO Chebyshev neural network model |

The desired output is d_{k} = [d_{1,k}, d_{2,k}, ..., d_{m,k}]^{T}.

The neural network output is:

y_{j,k} = Σ_{i=1}^{g} W_{i,j}·p_{i}(x) (1)

In Eq. 1, p_{i} is the i-th hidden-layer neuron (or node); its activation function is the Chebyshev orthogonal polynomial (Navin *et al*., 2009), that is:

p_{1}(x) = 1, p_{2}(x) = x, p_{i+1}(x) = 2x·p_{i}(x) − p_{i−1}(x)

x is obtained from the input through the unipolar S function:

x = 1/(1 + e^{−u/σ})

The unipolar S function transforms the input range (−∞, +∞) into the output range [0, 1] and the gradient of the S function can be changed by modifying σ. The neuron output error E_{j,k} is:

E_{j,k} = d_{j,k} − y_{j,k} (2)

Network performance objective:

J = (1/2) Σ_{k=1}^{s} Σ_{j=1}^{m} E_{j,k}^{2} (3)

The weight adjustment for the j-th output column is:

ΔW_{i,j} = η·E_{j,k}·p_{i} (4)

In Eq. 4, η is the learning rate (0 < η < 1).
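The forward pass and gradient update of Eq. 1-4 can be sketched as follows. This is a minimal illustration only: the text does not fully specify how the 66-dimensional input is fed to the polynomials, so the reduction of the squashed input vector to a scalar by its mean is an assumption of this sketch, not the authors' method.

```python
import numpy as np

def sigmoid(x, sigma=1.0):
    """Unipolar S function mapping (-inf, inf) to (0, 1); sigma sets the slope."""
    return 1.0 / (1.0 + np.exp(-x / sigma))

def chebyshev_basis(x, g):
    """First g Chebyshev polynomials via the recurrence
    p_{i+1}(x) = 2x * p_i(x) - p_{i-1}(x), with p_1 = 1, p_2 = x."""
    p = [np.ones_like(x), x]
    for _ in range(2, g):
        p.append(2 * x * p[-1] - p[-2])
    return np.array(p[:g])

def forward(u, W, sigma=1.0):
    """Network output y_j = sum_i W[i, j] * p_i(x) (Eq. 1), where x is the
    mean of the sigmoid-squashed input vector (an assumed simplification)."""
    x = sigmoid(u, sigma).mean()        # scalar in (0, 1)
    p = chebyshev_basis(x, W.shape[0])  # hidden-layer outputs, shape (g,)
    return p @ W, p

def update(W, p, y, d, eta=0.1):
    """Gradient-descent update Delta W[i, j] = eta * E_j * p_i (Eq. 2, 4)."""
    e = d - y                           # output error E (Eq. 2)
    return W + eta * np.outer(p, e)
```

One update with a small η strictly shrinks the per-sample error, since the output is linear in W for a fixed hidden-layer vector p.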

**Features sample extraction:** The basic idea is to iterate over and adjust the N samples of the sample set U through an improved k-means clustering algorithm, clustering them into M class sample sets A_{i} (i = 1, 2, ..., M). Sample clustering is shown in Fig. 4.

The distance between sample points is the Euclidean distance:

d(X, X') = √(Σ_{i=1}^{m} (x_{i} − x'_{i})^{2})

where d(X, X') is the distance between the samples X = [x_{1}, x_{2}, ..., x_{m}] and X' = [x'_{1}, x'_{2}, ..., x'_{m}].

Assuming the sample set U has N samples to be gathered into M classes, the improved algorithm is described below.

• | Calculate the Euclidean distance between every pair of samples X and X'; form the first class set from the two nearest points and remove them from U |

• | Search for the point in U nearest to A_{i} and move it into A_{i} |

• | Repeat step 2 until the number of sample points in A_{i} reaches the preset maximum per class. If this maximum is too large, the initial clustering centers will deviate from the dense regions; the value can be set by experiment |

• | If fewer than M classes have been formed, move to the next class, search for the two points nearest to each other in U, move them from U into the new class and return to step 2 |

• | Calculate the arithmetic average of each sample set to obtain the M initial clustering centers c_{1}(0), c_{2}(0), ..., c_{M}(0) and set the iteration counter k = 0 |

• | The nearest center is found using Eq. 5 |

d(x_{j}, c_{r}(k)) = min_{i} d(x_{j}, c_{i}(k)) (5)

In Eq. 5, j is the sample number, x_{j} is the input data and r is the index of the center nearest to x_{j}.

• | Adjust the center according to Eq. 6 |

c_{r}(k+1) = c_{r}(k) + β·(x_{j} − c_{r}(k)) (6)

In Eq. 6, β is the learning rate and int denotes the integer (floor) operator used in its decay schedule. After training the m samples once, the learning rate is turned down.

• | After all samples are trained, check Eq. 7: if every center change is less than a small enough value ε_{0}, the center vectors of the basis functions have been found; go to step 9. Otherwise, set k = k+1 and go to step 6 |

max_{i} ||c_{i}(k+1) − c_{i}(k)|| < ε_{0} (7)

Fig. 4: | Sample clustering |

Fig. 5: | Training line graph |

• | Within each of the M characteristic sample classes, arrange the internal sample serial numbers in ascending order of Euclidean distance |
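The clustering steps above resemble a seeded, competitively refined k-means. A minimal sketch under stated assumptions (the seeding is simplified to the first M samples rather than the mutually-nearest-pair procedure, and the function name, decay factor and defaults are inventions of this sketch):

```python
import numpy as np

def extract_feature_classes(U, M, beta=0.5, eps0=1e-4, max_iter=100):
    """Cluster the N samples in U (an N x m array) into M classes.

    For each sample x, the nearest center r is found (Eq. 5) and moved
    toward x by c_r <- c_r + beta * (x - c_r) (Eq. 6); beta is turned
    down after each pass, and iteration stops when no center moves by
    more than eps0 (Eq. 7)."""
    centers = U[:M].astype(float).copy()   # simplified seeding (assumption)
    for _ in range(max_iter):
        old = centers.copy()
        for x in U:
            r = np.argmin(np.linalg.norm(centers - x, axis=1))  # Eq. 5
            centers[r] += beta * (x - centers[r])               # Eq. 6
        beta *= 0.95                                            # decay rate
        if np.max(np.linalg.norm(centers - old, axis=1)) < eps0:  # Eq. 7
            break
    labels = np.array([np.argmin(np.linalg.norm(centers - x, axis=1))
                       for x in U])
    return centers, labels
```

On well-separated data, the centers settle near the dense regions and the labels partition the sample set into the M characteristic classes used to build the training set.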

**Learning algorithm described:** In conclusion, the overall design of the image reconstruction algorithm is as follows.

• | Step 1: The number of hidden-layer nodes g is set greater than 3; the weights are initialized with small random numbers; the learning rate and an arbitrarily small positive error bound are set; the training set is given |

• | Step 2: Choose the M samples nearest to the corresponding center vectors as the initial training set B, then remove these M samples from A_{1}, ..., A_{M}, making N = N+M |

• | Step 3: Input a sample and calculate the neurons' actual outputs using the current weights |

• | Step 4: Calculate the output value and output error using Eq. 1-3 |

• | Step 5: Adjust the weights column by column according to Eq. 4 |

• | Step 6: Set k = k+1; if k < N, go to step 3. Otherwise, go to step 7 |

• | Step 7: If J is not higher than the allowable error, end this training. Otherwise, enter the other samples and return to step 2 |

All training samples are input periodically until the network converges and the total output error is less than the allowable value.

**Chebyshev neural network simulation example:** To prove the validity of the Chebyshev neural network, we give an application example with a nonlinear function as the training target of the Chebyshev **neural network** model. We choose 300 training samples, a learning rate of 0.1 and 20 hidden-layer nodes. The training line graph, showing how the training effect approaches the target 0 as the number of training samples grows, is shown in Fig. 5.

**IMAGE RECONSTRUCTION SIMULATION RESULTS**

In the simulation experiment, the Chebyshev **neural network** has 66 capacitance inputs and 792 output pixels in the imaging area of the pipeline. To obtain more accurate results, the output is divided according to the different distributions in the pipeline into six parts of 132 pixels each, and the outputs of the six sub-networks are finally combined. In the simulation, 501 samples are selected as the sample set, including stratified flow, core flow, annular flow, eccentric flow and samples with two objects. The samples in the training set are obtained through simulation.

The simulation object is a two-phase medium distribution; the hidden-layer weight adjustment function is:

The threshold value for each output unit of this **neural network** is 3.5: outputs above 3.5 are regarded as high-dielectric and outputs below 3.5 as low-dielectric. The quality of image reconstruction is evaluated by accuracy, the percentage of correctly classified pixels among all pixels inside the pipe.
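The thresholding and accuracy measure just described can be sketched as follows (function and array names are assumptions of this sketch):

```python
import numpy as np

def binarize(gray, threshold=3.5):
    """Label each pixel high-dielectric (1) if its gray value exceeds the
    threshold, otherwise low-dielectric (0)."""
    return (np.asarray(gray) > threshold).astype(int)

def pixel_accuracy(pred, truth):
    """Percentage of correctly classified pixels inside the pipe."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    return 100.0 * np.mean(pred == truth)
```

For example, a reconstructed gray image is binarized at 3.5 and compared pixel by pixel with the simulated true phase distribution to yield the accuracy figures reported below.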

Following the simulation principles above, the samples in the training sample set are reconstructed using the Chebyshev **neural network** and compared with the simulation results of the FLBP **neural network** and RBF reconstruction algorithms, as shown in Fig. 6. In Fig. 6, the black region stands for the high dielectric (dielectric constant 6) and the white region for the low dielectric (dielectric constant 1).

The phase separation concentration comparison of the different algorithms is shown in Table 1 and the relative spatial image error of the different algorithms in Table 2; the average accuracy of the selected reconstruction algorithm can reach 99%. It can be seen that the accuracy of image reconstruction using the Chebyshev algorithm is higher than that of the FLBP and RBF image reconstruction algorithms.

Fig. 6: | Image reconstruction simulation results |

Table 1: | Phase separation concentration comparison of different algorithm |

Table 2: | Space images relative error of different algorithm |

**DISCUSSION**

The above analysis and the corresponding experiments show that this image reconstruction algorithm is simple, converges quickly in the learning process and has excellent accuracy in both linear and nonlinear approximation. Therefore, the model plays a large role in ECT imaging algorithm research. If the number of training samples is increased, the quality of imaging based on the Chebyshev **neural network** reconstruction method will be further improved.

To address problems of the ECT system such as its soft-field characteristics and poor imaging stability, this study first uses a k-means clustering algorithm to collect feature samples, reducing the training sample size. The results show that the algorithm is simple, fast and highly stable. After this project concludes, further research will use computer simulation to gain characteristic information about different flow types and different media and to fuse the experimental data, further improving the accuracy of the image reconstruction algorithm.

**ACKNOWLEDGMENT**

This study was supported by the National Natural Science Foundation of China under Grant (No.60572135) and the Natural Science Foundation of Heilongjiang Province under Grant (No.F200505).

**REFERENCES**

- Masoudzadeh, A. and A.R. Khalilian, 2007. Comparative study of clozapine, electroshock and the combination of ECT with clozapine in treatment-resistant schizophrenic patients. Pak. J. Biol. Sci., 10: 4287-4290.

- Larbi, M. and B. Abdelkader, 2005. A new look to adaptive temporal radial basis function applied in speech recognition. J. Comput. Sci., 1: 1-6.

- Hayati, M., 2007. Short term load forecasting using artificial neural network for the West of Iran. J. Applied Sci., 7: 1582-1588.

- Taiwo, O.A. and A. Abubakar, 2011. An application method for the solution of second order non linear ordinary differential equations by Chebyshev polynomials. Asian J. Applied Sci., 4: 255-262.

- Navin, A.H., S.H. Es-Hagi, M.N. Fesharaki, M. Mirnia and M. Teshnelab, 2009. Data-oriented model of sine based on Chebyshev zeroes. J. Applied Sci., 9: 993-996.