INTRODUCTION
Nowadays, many oil fields use water flooding to recover oil. Identifying the water-out behavior of a reservoir is an urgent problem in the middle and later periods of oilfield development (Yan et al., 2014). Identification of the water-flooded zone is mainly based on the log curves that reflect the formation's physical and chemical properties. How to classify zones automatically from this information is an open problem in oilfield geological analysis (Xu and Hu, 2010). Manual interpretation requires a great deal of work and is too slow to meet practical needs. Because water-flooded layer recognition is influenced by many downhole conditions, the highest recognition accuracy reported at present is between 70 and 80% (Wang et al., 2012).
To learn the condition of the water-flooded zone during oilfield geological analysis, a cored well is often drilled, and the geological interpretation is then guided by the properties of the recovered core, matched against the corresponding log curves (Gao and Wang, 2010). Because coring wells are very expensive, their number is highly limited (Grbic et al., 2010). Our topic is how to make the limited log-curve data from the cored wells reflect the actual geometric properties of the curves in space, derive the correct rules from them, and apply these rules in later geological analysis (Vapnik, 1995). There are many identification methods for log curves, and many transformations for extracting the characteristics of the curves. In this study, we preprocess the original data with the wavelet transform, which greatly reduces the computational complexity while preserving the geometric characteristics of the original data, such as wave crests and inflection points, very well (Cui et al., 2014).
The literature has put forward an identification model based on process neurons (Wang et al., 2010). The process neuron has a structure similar to the traditional MP model, which is constituted by weighting, aggregation and excitation operations (Peng et al., 2013). The difference between the traditional neuron and the process neuron is that the latter accepts time-varying inputs and weights, and its aggregation operation covers multi-input aggregation as well as accumulation over the time process (Xu et al., 2011). For training the process neural network, the literature gives a general learning algorithm based on gradient descent. In this study, we improve on the traditional algorithm and put forward a process SVM model for log-curve identification (Shang et al., 2006). The experimental results indicate that this algorithm has good identification ability and strong generalization ability when the number of training samples is limited.
MATERIALS AND METHODS
Process support vector machine model: The process support vector machine is made up of a wavelet transform, a kernel function transform and maximum-margin classification; its output (decision rule) is the classification result. Its basic idea is:

f(y) = sgn(Σ_{i=1}^{k} W_i K(y_i, y) + b)

In this equation, (x1(t), x2(t), x3(t),…, xn(t)) are the input vectors and (y1(t), y2(t), y3(t),…, ym(t)) are the vectors obtained after the wavelet transform. K(y_i, y) is the kernel function, y_i (i = 1, 2,..., k) are the support vectors obtained after training and y = (y1(t), y2(t), y3(t),…, ym(t)) is the input vector. The weight W_i = a_i z_i and b is a constant. The structure is shown in Fig. 1.
In oilfield geological analysis, a group of reservoir records from different depths is obtained after interpretation. Each reservoir record is a group of log-curve segments, i.e., a group of time functions of different lengths. To standardize the information, it must be interpolated onto a common length.
The fitted time function must reflect the character of the original data, and this character should preserve the curve's spatial geometric features; so it is not simply a question of numerical approximation.
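A minimal sketch of this standardizing interpolation, resampling curves of different lengths onto a common number of points: the function name and the choice of linear interpolation are our illustration, since the paper does not specify the interpolation scheme.

```python
def resample(curve, n_points):
    """Linearly interpolate a sampled curve onto n_points equally spaced
    positions, so curves of different lengths become comparable vectors."""
    assert n_points >= 2, "at least two output points assumed"
    if len(curve) == 1:
        return [curve[0]] * n_points
    out = []
    step = (len(curve) - 1) / (n_points - 1)
    for k in range(n_points):
        t = k * step                       # fractional position in the input
        i = min(int(t), len(curve) - 2)    # left neighbor index
        frac = t - i
        out.append(curve[i] * (1.0 - frac) + curve[i + 1] * frac)
    return out
```

Linear interpolation preserves the positions of wave crests and inflection points reasonably well, which is the geometric character the text asks the fitted function to keep.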
Basic wavelet theory: A wavelet is a function or signal ψ(x) in the function space L^2(R) that satisfies the following condition:

C_ψ = ∫_{R*} |ψ̂(ω)|² / |ω| dω < +∞

Fig. 1:  Process support vector machine model 
where R* = R\{0} stands for all the nonvanishing real numbers and ψ̂ denotes the Fourier transform of ψ. The function ψ(x) is also known as the wavelet generating function, and the condition above is called the admissibility condition. For any real pair (a, b), where the parameter a must be a nonvanishing real number, we call functions of the form:

ψ_{a,b}(x) = |a|^{-1/2} ψ((x − b)/a)

the continuous wavelet functions, depending on the parameters (a, b), generated by the wavelet generating function ψ(x).
Wavelet transform: The wavelet transform is a relatively new analysis method originating from multi-resolution analysis. The idea is to represent a function f(t) in L^2(R) as a series of successive approximations, each of which is a smoothed form of f(t), corresponding to a different resolution (Pontil and Verri, 1997).
The sampling step of the wavelet transform is adjustable in the time and frequency domains for different frequency components: the longer the time window, the lower the frequency, and vice versa. It is in this sense that the wavelet transform is called a mathematical microscope. It can decompose signals or images into multiple scales, taking a correspondingly coarse or fine step in the time domain for different frequency components, and can thereby focus consistently on arbitrarily small details. This is where the wavelet transform surpasses the classical Fourier transform and the windowed FFT (Cristianini et al., 2002).
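As a minimal illustration of this multi-scale decomposition, the sketch below implements a single level of the orthonormal Haar wavelet transform in plain Python; the function name and the even-length assumption are ours, and the paper does not state which wavelet family it uses.

```python
import math

def haar_dwt(signal):
    """One level of the orthonormal Haar wavelet transform.

    Returns (approximation, detail): the approximation is a smoothed,
    half-length version of the signal; the detail holds the local
    differences, i.e., the fine-scale information."""
    assert len(signal) % 2 == 0, "even-length input assumed"
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# A slowly varying curve yields small detail coefficients, so most of its
# shape is captured by the half-length approximation.
a, d = haar_dwt([4.0, 6.0, 10.0, 12.0])
```

Applying the transform recursively to the approximation gives the step-by-step smoothed representations described above, each at half the previous resolution.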
Support vector machine: The SVM (Support Vector Machine) is a relatively new machine learning method developed on the basis of statistical learning theory, which focuses on the study of statistical laws and learning methods. The basic idea of the SVM is the following: map the input vector x to a high-dimensional feature space Z by a previously chosen nonlinear mapping and construct an optimal separating hyperplane there. The main goal of SVM is to get the optimal solution for the information actually available, rather than the solution that would be optimal only as the sample size approaches infinity. The most frequently used algorithms are (Shang et al., 2003):
• C-support vector classification (C-SVC)
• ν-support vector classification (ν-SVC)
• Distribution estimation (one-class SVM)
• ε-support vector regression (ε-SVR)
• ν-support vector regression (ν-SVR)
Among these, C-SVC and ν-SVC are classification algorithms, ε-SVR and ν-SVR are regression algorithms, and one-class SVM is used for distribution estimation (Yan et al., 2013).
In the mapping from the low-dimensional input space to the high-dimensional feature space, the dimensionality grows rapidly. For example, under a polynomial map of degree m, the original n-dimensional input space is mapped to an O(n^m)-dimensional space, which in most cases makes it impractical to compute the optimal separating plane directly in the feature space. SVM transfers this computation back to the input space by defining kernel functions. At present, the main kernel functions include the RBF, linear, polynomial and sigmoid kernels.
After the nonlinear transformation, we consider the following two-class, linearly separable classification problem: given the training set (x_1, y_1), (x_2, y_2),…, (x_l, y_l) with y_i ∈ {−1, +1}, there exists a pair (w, b) such that:

y_i[(w·x_i) + b] ≥ 1, i = 1, 2,…, l
In Eq. 4, the samples x_i are independent and identically distributed.
The property of linear separability means that such a classification problem bears no empirical risk; according to the theory of structural risk minimization, all we have to do is minimize the confidence interval. As the confidence interval is an increasing function of the VC dimension h, structural risk minimization reduces to minimizing the VC dimension h. To remove the redundancy among equivalent classification planes, we constrain (w, b) to the canonical form with ‖w‖ ≤ A; when the data points x_1, x_2,…, x_l lie in a ball of radius r, the VC dimension satisfies h ≤ min{r²A², N}. In the linearly separable case, the problem of finding the (w, b) with the minimal expected risk can thus be stated as:

min (1/2)‖w‖², subject to y_i[(w·x_i) + b] ≥ 1, i = 1, 2,…, l
From the preceding analysis, we know that this optimization problem minimizes the bound on the VC dimension while keeping the empirical risk at zero, thereby minimizing the VC dimension itself. That is why SVM is said to be an approximate realization of the structural risk minimization principle.
For the above optimization problem, we can use the Lagrange multiplier method, which leads to the equivalent dual problem:

max Σ_{i=1}^{l} λ_i − (1/2) Σ_{i=1}^{l} Σ_{j=1}^{l} λ_i λ_j y_i y_j (x_i·x_j), subject to λ_i ≥ 0 and Σ_{i=1}^{l} λ_i y_i = 0
Obviously, this optimization problem is convex, so any local solution must be a global optimum. Transforming the classification problem into a convex optimization problem is something the feed-forward neural network, despite much effort in many directions, has never achieved. The other important property of this optimization problem is that it depends only on inner products, which lays the foundation for applying the kernel trick. Solving the optimization problem yields the λ_i, from which:

w = Σ_{i=1}^{l} λ_i y_i x_i
In Eq. 7, λ_i is the solution of the dual programming problem given by the former optimization problem. It is one of the most important features of SVM that the normal vector of the classification hyperplane is a linear combination of the sample points. A data point x_i whose corresponding λ_i is nonzero is called a support vector.
The final decision function is:

f(x) = sgn(Σ_{i=1}^{l} λ_i y_i K(x_i, x) + b)
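A minimal numeric sketch of this decision function, using a hand-constructed 1-D example: the two support vectors, the multipliers λ_i = 0.5 and b = 0 are our own choices, picked so the margin constraints hold exactly (with a linear kernel this gives w = 1).

```python
def linear_kernel(x, y):
    return x * y  # 1-D linear kernel (x_i . x)

def decision(x, support, lambdas, labels, b, kernel):
    """SVM decision function: f(x) = sgn(sum_i lambda_i * y_i * K(x_i, x) + b)."""
    s = sum(l * y * kernel(sv, x)
            for l, y, sv in zip(lambdas, labels, support)) + b
    return 1 if s >= 0 else -1

# Two support vectors at -1 (class -1) and +1 (class +1);
# lambda = [0.5, 0.5], b = 0 satisfy the margin constraints for this toy set.
support, lambdas, labels, b = [-1.0, 1.0], [0.5, 0.5], [-1, 1], 0.0
result = decision(3.0, support, lambdas, labels, b, linear_kernel)
```

Note that only the support vectors enter the sum: the other training points have λ_i = 0 and drop out, which is what keeps the classifier compact.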
Description of the algorithm: The support vector machine we choose is C-SVC. In the following algorithm we separately use three kinds of kernel functions, the RBF, polynomial and sigmoid kernels, and compare the experimental results.
The algorithm is as follows:

Step 1: Map the log curve to the vector space by the wavelet transform
Step 2: Input the resulting vectors to the SVM training model
Step 3: Output the support vectors and the related parameters
Step 4: Build the wavelet-based SVM curve identification model
Step 5: Input the log curve to be analyzed
Step 6: Output the recognition result
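The six steps above can be sketched end to end as follows. Everything here is our illustrative scaffolding, not the authors' implementation: the tiny synthetic curves, the hard-coded "trained" support vectors standing in for Steps 2-4, and the function names are all assumptions, whereas the real model trains a C-SVC on 450 samples.

```python
import math

def haar_features(curve):
    """Step 1: map a log curve to a feature vector via one Haar wavelet level
    (keeping only the approximation coefficients halves the dimensionality)."""
    s = math.sqrt(2.0)
    return [(curve[i] + curve[i + 1]) / s for i in range(0, len(curve) - 1, 2)]

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def classify(features, support, lambdas, labels, b):
    """Steps 5-6: evaluate f(y) = sgn(sum_i lambda_i * y_i * K(y_i, y) + b)."""
    s = sum(l * y * rbf(sv, features)
            for l, y, sv in zip(lambdas, labels, support)) + b
    return 1 if s >= 0 else -1

# Steps 2-4 (training) stood in for by placeholder support vectors:
# one prototype high-reading curve (+1) and one low-reading curve (-1).
support = [haar_features([8.0, 8.0, 9.0, 9.0]),   # label +1
           haar_features([1.0, 1.0, 2.0, 2.0])]   # label -1
lambdas, labels, b = [1.0, 1.0], [1, -1], 0.0

result = classify(haar_features([7.0, 8.0, 9.0, 8.0]), support, lambdas, labels, b)
```

The binary labels here are only a placeholder; the paper's four water-flooded grades would be handled by the usual one-vs-one or one-vs-rest extension of C-SVC.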
RESULTS AND DISCUSSION
The recognition of the water-flooded zone is largely based on the log curves that reflect the formation's physical and chemical properties. After correlation analysis and statistics, and according to the experience of field experts, the authors chose spontaneous potential (SP), high-resolution acoustic transit time (AC), high-resolution deep lateral resistivity (Rlld) and the difference between micro potential and micro gradient (Rmn−Rmg) as the logging feature parameters for recognizing the water-flooded zone, with the water-flooded grade as the output.
From the limited reservoir data of the cored wells, we chose 450 representative water-out reservoir samples to form a training set and 225 reservoir samples to form a test set. According to the method for determining the number of pattern classes, the water-flooded grade of a reservoir can be divided into four levels: strong water flooding, secondary water flooding, weak water flooding and not flooded.
We processed the 450 training samples with the wavelet transform and then input the results to the SVM for training. After training, we obtained the corresponding support vectors and weight parameters, yielding the model shown in Fig. 1. In the experiment, the RBF kernel gave the most support vectors and the highest classification accuracy at a fast running speed, so we chose the RBF function as the kernel. The experimental results are shown in Tables 1 and 2.
When the trained SVM classifier is applied back to the training samples, the correct recognition rate is 90.5%; when judging the 225 samples in the test set, the correct recognition rate is 78.1%. This is a fairly good result for automatic recognition of flooded layers. When the same data are used with the neural network, the training-sample accuracy reaches 96.4%, but the test-set accuracy is only 73.4%. The experimental results are shown in Table 3.
Table 1:  Conditions of supporting vectors obtained by several kernel functions 

Table 2:  Conditions of training speed and accuracy obtained by several kernel functions 

Table 3:  Comparison between BSVM algorithm and process neural network 

CONCLUSION
The experimental results show that although the accuracy on the training samples is only 90.5%, the model has strong generalization ability. The process support vector machine presented in this study thus overcomes the problems of long neural network training times and weak generalization. Furthermore, it has good reference value for pattern identification of time-varying systems, system identification and simulation modeling.
ACKNOWLEDGMENT
This study was supported by the Youth Science Fund Project (2013QN204) and the Education Science "Twelfth Five-Year" Plan Project for 2013 of Heilongjiang Province (GBD1213032).