
Journal of Applied Sciences

Year: 2007 | Volume: 7 | Issue: 23 | Page No.: 3659-3668
DOI: 10.3923/jas.2007.3659.3668
A Theoretical Approach to Applicability of Artificial Neural Networks for Seismic Velocity Analysis
Christine Baronian, Mohammad Ali Riahi, Caro Lucas and Mohammad Mokhtari

Abstract: In this study, different synthetic earth models were developed to provide as accurate a mapping as possible between inputs and outputs. The input parameters, fed to the input layer of the ANN, were seismic travel times; the output parameters, fed to the output layer, were interval velocity and structural dip. After training, the generalization ability of the ANN allows it to produce the desired outputs for new input patterns. Because dipping layered structures occur frequently in the subsurface (for example, the anticlinal shapes of hydrocarbon traps), obtaining an accurate initial velocity model for dipping structures, together with the dip value of each layer, is an important input to later seismic imaging procedures, especially in areas with little or no initial geological information available.


How to cite this article
Christine Baronian, Mohammad Ali Riahi, Caro Lucas and Mohammad Mokhtari, 2007. A Theoretical Approach to Applicability of Artificial Neural Networks for Seismic Velocity Analysis. Journal of Applied Sciences, 7: 3659-3668.

Keywords: earth models, seismic velocity, input and output parameters, artificial neural networks

INTRODUCTION

The need for more reliable estimation of the characteristics of geological structures from seismic interpretation, especially in oil and gas fields, is of growing concern, and the accuracy of such characterization can play an economically significant role in many countries. The accuracy of seismic interpretation depends on factors including, but not limited to, the quality of the data and the complexity of the geological structure. Velocity analysis, as the basis of seismic processing, can be considerably affected by these factors. In recent years geophysicists have applied many velocity analysis techniques. All of these techniques appear accurate in many cases, yet each is associated with some difficulties.

Some of the errors which affect the accuracy of velocity models are recognized as originating from the presence of noise, the direct influence of the interpreter and the complexity of geological structures (Roeth and Tarantola, 1994). In some methods, such as iterative migration velocity analysis, direct intervention of the interpreter is required at every stage of the process (Al-Yahya, 1989). Furthermore, experience-based interpretation requires initial geological information (Bradley, 2003).

The common method for seismic velocity analysis comprises computing the arrival times predicted by a given model and transferring them onto the seismic sections. If the predicted hyperbolas match the observed ones, the velocity model is considered correct. Otherwise, the model must be changed until a proper fit between the observed and the computed hyperbolas is achieved (Yilmaz, 1987). This analysis does not always result in an accurate estimate of the velocity model.

Inversion methods obtain the velocity model (parameters) from the observations (reflection travel times), whereas forward methods produce seismograms from a given velocity model. Forward methods are numerically simple, while inversion methods involve much higher complexity with respect to the unknown parameters and require time-consuming numerical methods. To minimize or eliminate errors associated with velocity modeling, applying approaches other than the conventional techniques is therefore worthwhile. Among these are Artificial Neural Networks (ANNs), which have shown relatively broad applicability. During the last twenty years ANNs have matured into practical tools in many areas of computing, such as pattern recognition, function approximation, system identification and control and data analysis (Lampinen, 1997). In recent years ANNs have been applied to many geophysical problems (Murat and Rudman, 1992; Dai and Mac Beth, 1997; Calderon-Macias et al., 1997; Wang and Mendel, 1992; Poulten et al., 1992; Langer et al., 1996; Calderon-Macias et al., 1998; Calderon-Macias et al., 2000; Wu et al., 2000; Van der Baan and Jutten, 2000).

Some of the relevant applications of ANN in velocity modeling are found in the literature as follows:

Trace editing (Van der Baan and Jutten, 2000)
First break picking (Murat and Rudman, 1992)
Parameter estimation (Calderon-Macias et al., 2000)
Location of subsurface targets (Poulten et al., 1992)
Inversion (Roeth and Tarantola, 1994)

Soft computing techniques such as expert systems, Fuzzy Logic, Neural Networks and other techniques differ from conventional hard computing in that they are tractable, robust, efficient and inexpensive (Nikravesh and Aminzadeh, 2001).

The fact that an ANN can produce a functional relation between input and output means that it can be used as a reasonable mapping system. Applying the mapping function to a data set is mathematically simple and fast. This can be a considerable advantage over systematic search techniques, such as simulated annealing or genetic algorithms, since the stability of the results found with the neural network can easily be tested with examples not used to estimate the mapping function (Langer et al., 1996).

There is always a certain level of residual (noise remaining after initial noise filtering) environmental/ambient noise present on the recorded seismic data, hence, neural network training with noise-free synthetic seismic is less than optimal (Fitzgerald and Bean, 2001).

Trained ANNs have been shown to perform well in cases where the data are noisy or deficient. These networks can suppress disturbing noise and select the important parts of the input (Roeth, 1993; Roeth and Tarantola, 1994).

The objective of this study is to construct an ANN which takes the reflection arrival times for different dipping layers as input and computes the earth model as output, using synthetic data. Since geological features such as oil and gas reservoirs are mainly found in dipping structures such as anticlines and salt diapir flanks rather than flat horizontal ones, the study focuses on the applicability of ANNs to such models.

Based on the literature review, the ANN method, which has shown relatively high applicability to many geophysical problems, was selected as a stand-alone method in comparison with other techniques. As a first step, this study deals mainly with synthetic seismograms, which can be generated for many geological models. This is presumed to provide a good basis for assessing the applicability of ANNs and to leave room for further application to real data sets from different geological settings.

MATERIALS AND METHODS

ANNs have the ability to map one space, called the input space, to another, called the output space. They can be trained to compute desired output patterns from input patterns. The characteristic that makes this technique outstanding for solving inversion problems is that ANNs can compute accurate output patterns not only for known input patterns but also for unknown ones. This property is called generalization. Neural networks are dynamic systems containing many simple processing units, each of which is called a neuron.

The most popular neural network architecture is the multilayer feed-forward network of sigmoidal computing units (Lampinen, 1997). This study applies multilayered ANNs containing an input layer, a hidden layer and an output layer. Information flows from the input layer through the hidden layer to the output layer; a multilayered ANN in which information flows forward in this way is called a multilayered feed-forward ANN. Connections exist only between adjacent layers and there are no connections between neurons in the same layer (Fundamentals of Back-propagation Algorithm, Help topic, MATLAB 7.0).

A neuron is a simple processing node which calculates an output O from an input I. The value of the input I for a neuron i outside the input layer is the weighted sum of all outputs of the neurons in the previous layer (Eq. 1):

$I_i = \sum_j W_{ij} O_j$    (1)

The value of output is calculated according to a sigmoidal threshold function as follows (Eq. 2):

$O_i = \frac{1}{1 + e^{-I_i}}$    (2)
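As a minimal sketch (assuming the logistic sigmoid as the threshold function, consistent with the sigmoidal units cited above), Eq. 1 and 2 can be written as:

```python
import numpy as np

def neuron_output(weights, prev_outputs):
    """Compute one neuron's output: weighted sum (Eq. 1), then sigmoid (Eq. 2)."""
    I = np.dot(weights, prev_outputs)    # Eq. 1: weighted sum of previous-layer outputs
    return 1.0 / (1.0 + np.exp(-I))      # Eq. 2: sigmoidal threshold function

# illustrative neuron with three inputs
out = neuron_output(np.array([0.5, -0.2, 0.1]), np.array([1.0, 0.5, 2.0]))
```

The weights and inputs here are arbitrary illustrative numbers, not values from the study.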

Once the structure and the rules of feed-forward flow are defined, the network must be trained. The structure and the output function do not change during training. Training therefore comprises adjusting the weights W (the only free parameters in the network, forming the connections between neurons) so that the error between the computed output and the desired output over all samples is minimized. Minimizing the error between the calculated output $O_{\alpha i}^{cal}$ and the desired output $O_{\alpha i}^{des}$ over all samples $\alpha$ is a standard optimization problem, solved here by back-propagation of the errors into the network. For error minimization, the least-squares error was selected as the basic measure (Eq. 3):

$E = \sum_{\alpha} \sum_{i} \left( O_{\alpha i}^{cal} - O_{\alpha i}^{des} \right)^2$    (3)

Where:
α = runs over the number of samples
i = runs over the number of outputs
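A one-line sketch of the error measure of Eq. 3, summing squared differences over all samples and all outputs (the arrays below are illustrative):

```python
import numpy as np

def training_error(O_cal, O_des):
    """Eq. 3: sum of squared output errors over all samples (rows) and outputs (columns)."""
    return np.sum((np.asarray(O_cal) - np.asarray(O_des)) ** 2)

# one sample with two outputs, each off by 0.1
E = training_error([[0.5, 0.2]], [[0.4, 0.1]])
```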

The least-squares optimization problem can be solved by different methods; among them, the quasi-Newton algorithm was found appropriate. In optimization, this is a well-known family of algorithms for finding local maxima and minima of functions, similar to Newton's method, but it approximates the inverse Hessian matrix in order to accelerate the iteration. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is one of the most popular quasi-Newton algorithms for solving unconstrained nonlinear optimization problems.

The principal idea of the method is to construct an approximate Hessian matrix of second derivatives of the function to be minimized by analyzing successive gradient vectors. This approximation of the function's derivatives allows a quasi-Newton fitting method to move towards the minimum in parameter space. The advantage of the quasi-Newton algorithm is its rapid convergence and consequently reduced computation time and cost (Faster Training, Help topic, MATLAB 7.0). The algorithm is also suitable for function approximation (modeling).
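As a sketch of quasi-Newton training, the following minimizes the least-squares error of Eq. 3 for a toy feed-forward network with BFGS. The network size (2 inputs, 3 hidden, 1 output), the synthetic data and the target function are all hypothetical stand-ins, not the networks used in this study, which were built with MATLAB 7.0:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical toy data: 20 samples of 2 "travel-time-like" inputs,
# with a made-up target in [0, 1] standing in for a normalized velocity.
rng = np.random.default_rng(0)
X = rng.uniform(size=(20, 2))
y = X.sum(axis=1, keepdims=True) / 2.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unpack(w):
    # 2 -> 3 -> 1 network: 6 input-to-hidden weights, 3 hidden-to-output weights
    return w[:6].reshape(3, 2), w[6:].reshape(1, 3)

def error(w):
    # least-squares error of Eq. 3, summed over samples and outputs
    W1, W2 = unpack(w)
    out = sigmoid(sigmoid(X @ W1.T) @ W2.T)
    return np.sum((out - y) ** 2)

w0 = rng.normal(size=9)                        # random initial weights
res = minimize(error, w0, method="BFGS")       # quasi-Newton (BFGS) training
```

Each BFGS iteration uses successive gradients to refine an inverse-Hessian approximation, which is what gives the rapid convergence noted above.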

When the weights corresponding to the minimum error have been calculated and the desired outputs for all samples have been obtained, the network is considered trained and the training stage stops. This network can then map the input space into the output space, which in this study are the picked travel times (R inputs) and the corresponding velocity and dip outputs, respectively (Fig. 1). Figure 1 shows an elementary neuron with R inputs. Each input is weighted with an appropriate weight w; the sum of the weighted inputs and the bias forms the input to the transfer function f. Neurons may use any differentiable transfer function f to generate their output.

After training (i.e., minimization of errors) the network should be tested against known inputs. The ANN response to these known inputs is evaluated in terms of accuracy of the simulation. Afterwards the ANN is ready for generalization which basically deals with introduction of unknown input values.
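The division of data into a training set and a held-out set for testing and generalization, as described above, can be sketched as follows (the split sizes echo the 790-sample, 77-held-out case reported later; the data array is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical data set: 790 samples, each 16 travel times + velocity + dip
samples = rng.uniform(size=(790, 18))

idx = rng.permutation(len(samples))
train = samples[idx[:713]]      # used to fit the weights
unseen = samples[idx[713:]]     # withheld, used to test generalization
```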

Fig. 1: Simplified algorithm employed in ANN process

Fig. 2: The overall ANN procedure flowchart

However, this set of input data can already have a corresponding set of known outputs which can be compared to ANN response. The overall ANN procedure is described in Fig. 2.

Synthetic data generation: A number of dipping-layer earth models were considered in this study. These include the following models, whose presumed characteristics are summarized in Table 1:

One dipping layer
Three dipping layers with the same dip and same direction
Three dipping layers with different dips and same direction
Two dipping layers with the same dip and different directions

Considering the assumed data as described in Table 1, corresponding travel times were calculated by a forward modeling scheme using RAYINVR software (Zelt and Smith, 1992; Zelt, 1993). These corresponding data sets (i.e., two way travel times as input and velocity and dip as output) were fed to ANN in order to train the networks.
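RAYINVR performs general 2-D ray tracing and is not reproduced here, but for a single plane dipping reflector the CMP reflection travel times have the classic closed form t(x)^2 = t0^2 + x^2 cos^2(θ)/v^2, i.e., a hyperbola whose moveout velocity is v/cos(θ). The sketch below uses that analytic case as an illustration of the kind of travel-time data fed to the ANN; the velocity, dip and zero-offset time are arbitrary example values:

```python
import numpy as np

def dipping_reflector_tt(offsets_km, v_kms, dip_deg, t0_s):
    """CMP travel times over a single plane dipping reflector:
    t(x)^2 = t0^2 + x^2 * cos^2(dip) / v^2  (NMO velocity = v / cos(dip))."""
    c = np.cos(np.radians(dip_deg))
    return np.sqrt(t0_s**2 + (offsets_km * c / v_kms) ** 2)

# 16 geophones at 0.1 km spacing, as in the one-dipping-layer model below
offsets = 0.1 * np.arange(1, 17)
t = dipping_reflector_tt(offsets, v_kms=2.0, dip_deg=10.0, t0_s=1.0)
```

A vector of such travel times, one per trace, is what each input neuron of the network receives.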

Table 1: Characteristics of the dipping layered models employed for the ANN application

The task of the feed-forward ANN is to map Common Midpoint (CMP) gathers at control locations along a 2-D seismic line into seismic velocities within predefined velocity search limits (Calderon-Macias et al., 1998). The ANN was then tested using part of these data sets. As the final step, inverse modeling was performed using the trained and tested ANN. This procedure (i.e., generalization) calculates the model characteristics (velocity and dip) corresponding to unknown input travel times. The overall procedure of the study is broken down into two main stages, as described in Fig. 3 and 4.

The performance goal for all ANN applications was set to 1e-5. In other words, the generalization performance is considered accurate for the different models when this performance goal is achieved.

The networks used for the four earth models consisted of three layers (i.e., input, hidden and output layers) with different numbers of neurons in each layer. The number of hidden layers and associated neurons in an ANN structure is usually determined from experience according to the nature of the given problem; in this study, one hidden layer was assumed sufficient for the function-approximation application. The numbers of neurons in the three layers of each ANN, from which the number of samples required for training was determined, are summarized in Table 2. Different numbers of geophones were considered for the samples, while the geophone spacing was set to 0.1 km. The maximum horizontal offset was set to 3.6 km for a sample CMP gather with 16 traces.

Fig. 3: Procedure flowchart employed for forward modeling

Fig. 4: Procedure flowchart employed for inverse modeling

Table 2: Characteristics of the training structure of the ANN

RESULTS AND DISCUSSION

To evaluate the performance of the ANN for the four dipping layered earth models, the critical parameters were examined. These include the number of iterations and the performance function values in the training stage, as well as the errors in the velocity and dip calculations in the generalization stage. The number of iterations can be important in terms of the time required for training. For real and complicated earth models, the required computation time may influence the applicability of the analysis method.

More importantly, the errors between the calculated and desired values of velocity and dip in the generalization stage of ANN application can be used for evaluation of the accuracy of the determinations. Accordingly, these error values were calculated for the four earth models, the results of which are presented in Table 3.

One dipping layer: The application of ANN to dipping layers is considered a first attempt at inversion of seismic data, since the previous studies reviewed were based on flat reflectors.

Table 3: The results of error values obtained for four earth models

For this model we calculated 790 data sets with different velocities and dips. CMP gathers were generated for these 790 samples with 0.1 km receiver separation for 16 geophone positions relative to the source location. The maximum offset for the last geophone is 3.6 km in each CMP gather. The maximum x (i.e., horizontal) coordinate of the velocity model was considered to be 15 km. The travel times were computed for each receiver position at different depths and velocities according to the dip of the layer. The convolved seismic trace corresponds to a Ricker wavelet. Since travel time was considered the only parameter of importance, each reflection was assigned unit amplitude. After computing travel times for the 790 examples (using CMP gathers) the ANN was structured.

In this example the total number of weights (connections between neurons) is (16x10)+(10x2) = 180 between the layers of the network. For this network at least 360 data sets were needed (i.e., 270 for training and 90 for testing and generalization).
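The weight-count arithmetic above generalizes to the other networks in this study; a small sketch (the 2x-weights sample rule and the 75/25 split are inferred from the numbers quoted, not stated as a formal rule in the text):

```python
def connection_count(layer_sizes):
    """Weights in a fully connected feed-forward net: sum of products of adjacent layer sizes."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

w = connection_count([16, 10, 2])   # (16*10) + (10*2) = 180
needed = 2 * w                      # rule of thumb implied above: at least 2x weights
train_n = (3 * needed) // 4         # 270 for training
test_n = needed - train_n           # 90 for testing and generalization
```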

After training the ANN with the quasi-Newton algorithm on 715 examples, the results in Fig. 5-7 were obtained. The velocities were normalized to 3.7 km sec–1 and the dips to 20 degrees. After 4000 iterations the performance function stabilized at 0.00179084 (Fig. 5). Although this is about two orders of magnitude greater than the preset performance goal (1e-5), the results appeared satisfactory and accurate. The calculated velocities for 77 data sets not included in the training procedure are plotted against the desired velocities in Fig. 6. The generalization capability of the ANN shows that the calculated velocities for these unknown data sets compare well with the desired velocities.

The maximum positive error between the desired and calculated velocity is 0.063185 km sec–1 and the maximum negative error is -0.04478 km sec–1.

Fig. 5: The performance function of ANN after 4000 iterations for one dipping layer

Fig. 6: Desired velocity vs. calculated velocity for one dipping layer

The average error between the desired and calculated velocity is -0.0017 km sec–1. Accordingly, it can be concluded that the ANN has been adequately trained. The dip calculation by the ANN also shows acceptable errors for these 77 data samples, as described in Fig. 7. The maximum positive and negative errors between the desired and calculated dips were 3.4728° and -2.8186°, respectively. The average error shows that the dip is also calculated accurately for this one-dipping-layer earth model.

Fig. 7: Desired dip vs. calculated dip for one dipping layer

Three dipping layers with the same dip and same direction: The second model in this study consists of three dipping reflectors with the same dip. The first layer's velocity was held constant (1.5 km sec–1), while the velocities of the second and third layers varied between 1.6 and 2.52 km sec–1. All layers have the same dip, ranging from 2° to 20°. For this geological model we calculated 770 data sets with different velocities, depths and dips. CMP gathers were generated for these 770 samples with 0.1 km receiver separation according to the source location. The maximum x-coordinate is 15 km. The synthetic seismograms for these earth models simulate CMP gathers with 8 traces. The convolved seismic trace corresponds to a Ricker wavelet with a central frequency of 22 Hz. Again, since travel time is the only parameter of importance, each reflection was assigned unit amplitude.

After computing travel times for the 770 examples (using CMP gathers), the neural network structure was built.

The feed-forward neural network consists of three layers (input, hidden and output). The input layer has 24 neurons (8 traces for each layer), the output layer has 4 neurons (a velocity for each layer and a single dip common to the three layers) and the hidden layer has 12 neurons. In this example the number of weights (connections between neurons) is (24x12)+(12x4) = 336 between the layers of the network. This network requires approximately 720 data sets (540 for training and 180 for testing and generalization). We first trained the network with the quasi-Newton algorithm on 688 examples. The velocities are normalized to 3 km sec–1 and the dips to 20°. After 4000 iterations the performance function decreased to 0.000146351, which is not very close to the goal (1e-5), as can be seen in Fig. 8. As mentioned earlier, using more data for training yields better generalization.

Fig. 8: The performance function of ANN after 4000 iterations for three dipping layers with the same dip and same direction

Fig. 9: Desired velocity vs. calculated velocity for three dipping layers with the same dip and same direction

Therefore we used the maximum amount of data for training. The results of simulating the outputs for 77 new input patterns for velocity estimation are presented in Fig. 9. The velocity calculation shows relatively high accuracy. The average velocity error increases (to 0.053295 km sec–1) with the number of layers, but the ANN is still well generalized and can calculate velocity values for new input patterns accurately. The desired dip versus the calculated dip is presented in Fig. 10. The average dip error between the desired and calculated dip does not change appreciably from the first to the third layer.

Three dipping layers with different dips and same direction: The third model is three dipping layers with different dips in the same direction. The velocity of the first layer is constant (1.5 km sec–1), the velocities of the second and third layers vary between 1.60 and 2.52 km sec–1 and the dips range from 2° to 25°, as described in Table 1.

Fig. 10: Desired dip vs. calculated dip for three dipping layers with the same dip and same direction

For this geological section we calculated 770 data sets with different velocities, depths and dips. CMP gathers were generated for these 770 samples with 0.1 km receiver separation for 8 geophone positions according to the source location. The maximum x-coordinate is 15 km. The travel times were computed for each receiver position for different depths and velocities according to the dips of the layers. After computing travel times for the 770 examples (using CMP gathers), the neural network structure was built.

The feed-forward neural network consists of three layers (input, hidden and output). The input layer has 24 neurons (8 traces for each layer), the output layer has 6 neurons (velocity and dip values for the three layers) and the hidden layer has 12 neurons. In this example the number of weights (connections between neurons) is (24x12)+(12x6) = 360 between the layers of the network. This network requires approximately 720 data sets (540 for training and 180 for testing and generalization). After constructing the ANN structure, we trained the network with the quasi-Newton algorithm on 693 examples, the maximum amount of data available for training. The velocities are normalized to 2.6 km sec–1 and the dips to 25°.

After 4000 iterations the performance function decreased to 5.25489e-5, as presented in Fig. 11. To simulate the network on unknown data, we generalized it over 77 examples that had been excluded from the training procedure. The plot of desired velocity vs. calculated velocity is presented in Fig. 12. The average velocity error increases with the number of layers.

The calculated dips for the 77 examples excluded from the training data are presented in Fig. 13. The dip values are calculated well for inputs that were excluded from the training procedure.

Fig. 11: The performance function of ANN after 4000 iterations for three dipping layers with different dip and same direction

Fig. 12: Desired velocity vs. calculated velocity for three dipping layers with different dip and same direction

Fig. 13: Desired dip vs. calculated dip for three dipping layers with different dip and same direction

The average error in estimating dips for unknown input patterns is -0.11309 degrees, which again does not depend on the number of layers.

Two dipping layers with the same dip and different directions: The last model considers two dipping layers with the same dip but in different directions. The velocity of the first layer is constant (1.5 km sec–1) and the velocity of the second layer varies between 1.60 and 1.96 km sec–1. The depths are between 0.30 and 5.30 km and the dips vary between 2° and 8°. For this geological section we calculated 572 data sets with different velocities, depths and dips. CMP gathers were generated for these 572 samples with 0.1 km receiver separation for 12 geophone positions according to the source location. The maximum x-coordinate is 15 km. The travel times were computed for each receiver position for the different models. After computing travel times for the 572 examples (using CMP gathers), the neural network structure was built. The feed-forward neural network consists of three layers (input, hidden and output). The input layer has 24 neurons (12 traces for each layer), the output layer has 4 neurons (velocity and dip values for the two layers) and the hidden layer has 12 neurons. In this example the number of weights (connections between neurons) is (24x12)+(12x4) = 336 between the layers of the network. This network requires approximately 672 data sets (504 for training and 168 for testing and generalization).

After constructing the ANN structure, we trained the network with the quasi-Newton algorithm on 506 examples. The velocities are normalized to 2 km sec–1 and the dips to 8 degrees. After 4000 iterations the performance function decreased to 1.00663e-6, which achieves the preset goal. The performance function is presented in Fig. 14. The plot of desired velocity vs. calculated velocity is presented in Fig. 15. The results of the generalization of the ANN for the same data for different dip values are presented in Fig. 16. The dip values are calculated well for inputs that were excluded from the training procedure. The average dip error is -0.00977° and 0.007096° for the first and second layers, respectively. As the maximum offset with 0.1 km geophone separation is 1.2 km for 12 traces, the error in depth calculation for this offset is 0.0001486 km (approximately 0.15 m).

Comparison to inversion algorithm: In order to assess the capability of the ANN methodology against conventional inversion analysis, the case of three dipping layers with different dips in the same direction, described in the previous section, was compared to the results of the RAYINVR program (Zelt, 1993). The comparison (Fig. 17, 18) indicates that both the velocity and dip errors associated with the RAYINVR program were significantly higher than those of the ANN when the number of rays is insufficient.

Fig. 14: The performance function of ANN after 4000 iterations for two dipping layers

Fig. 15: Desired velocity vs. calculated velocity for two dipping layers

Fig. 16: Desired dip vs. calculated dip for 77 unknown examples for two dipping layers

Furthermore, the comparison showed that the errors in both velocity and dip increase with the depth of the earth model; the highest errors were associated with the deepest layers of the sample earth model. However, the increase of errors with layer depth was far greater for the RAYINVR results than for the ANN results.

Fig. 17: Comparison between RAYINVR and ANN results for velocity errors

Fig. 18: Comparison between RAYINVR and ANN results for dip errors

Accordingly, it can be concluded that the large number of data included in the ANN training process provides a good correlation between input and output data even for a small number of rays (for example, 8 traces per layer), whereas the RAYINVR program requires a more accurate initial model, obtained by ray tracing, to yield better results.

CONCLUSION

These different approaches make it clear that well-trained neural networks can not only compute the correct outputs for the input patterns used in training but also predict the correct output patterns for inputs not included in the training procedure. In the more complicated cases, such as dipping layers and especially layers with different dips, the neural network successfully predicted reliable outputs for new as well as old inputs. All of these synthetic models show that inversion of seismic data for dipping reflectors can be performed with an ANN approach. Of course, these results were obtained from synthetic data and must be verified using real seismic data covering the characteristics of these different synthetic models. A more reliable judgment on the applicability of neural networks to seismic data inversion is expected once this part of the study, employing real field data, is carried out.

In future work, it would be better to use different velocity gradients instead of constant interval velocity values for each layer.

REFERENCES

  • Al-Yahya, K., 1989. Velocity analysis by iterative profile migration. Geophysics, 54: 718-729.


  • Bradley, M.E., 2003. Practical Seismic Interpretation. International Human Resources Development Cooperation, Boston, pp: 266


  • Calderon-Macias, C., M.K. Sen and P.L. Stoffa, 1997. Hopfield neural networks and mean field annealing for seismic deconvolution and multiple attenuation. Geophy. Prospect., 62: 992-1002.


  • Calderon-Macias, C., M.K. Sen and P.L. Stoffa, 1998. Automatic NMO correction and velocity estimation by a feedforward neural network. Geophysics, 63: 1696-1707.


  • Calderon-Macias, C., M.K. Sen and P.L. Stoffa, 2000. Artificial neural networks for parameter estimation in geophysics. Geophys. Prospect., 48: 21-47.


  • Dai, H. and C. Mac Beth, 1997. The application of back propagation neural network to automatic picking of seismic arrivals from single component recordings. J. Geophys. Res., 102: 15105-15113.


  • Fitzgerald, E.M. and C.J. Bean, 2001. Sub-basalt imaging problems and the application of artificial neural networks. J. Applied Geophys., 48: 183-197.


  • Lampinen, J., 1997. Advances in neural network modeling. Proceeding TOOLMET, Tool Environments and Development methods for Intelligent Systems, pp: 28-36.


  • Langer, H., G. Nunnari and L. Occhipinti, 1996. Estimation of seismic waveform governing parameters with neural networks. J. Geophys. Res., 101: 20109-20118.


  • Murat, M.E. and A.J. Rudman, 1992. Automated first arrival picking: A neural network approach. Geophys. Prospect., 40: 587-604.


  • Nikravesh, M. and F. Aminzadeh, 2001. Past, present and future intelligent reservoir characterization trends. J. Pet. Sci. Eng., 31: 67-79.


  • Poulten, M.M., B. Sternberg and C.E. Glass, 1992. Location of subsurface targets in geophysical data using neural networks. Geophysics, 57: 1534-1544.


  • Roeth, G., 1993. Application of neural networks to seismic inverse problems. Ph.D. Thesis, University of Paris 7.


  • Roeth, G. and A. Tarantola, 1994. Neural networks and inversion of seismic data. J. Geophy. Res., 99: 6753-6768.


  • Van der Baan, M. and C. Jutten, 2000. Neural networks in geophysical application. Geophysics, 65: 1032-1047.


  • Wang, L. and M. Mendel, 1992. Adaptive minimum prediction error deconvolution and source wavelet estimation using Hopfield neural networks. Geophysics, 57: 670-679.


  • Wu, C.H., R.D. Soto, P.P. Valko and A.M. Bubela, 2000. Non-parametric regression and neural-network infill drilling recovery models for carbonate reservoirs. Comput. Geosci., 26: 975-987.


  • Yilmaz, O., 1987. Seismic Data Processing. Society of Exploration Geophysicists.


  • Zelt, C.A. and R.B. Smith, 1992. Seismic traveltime inversion for 2-D crustal velocity structure. Geophys. J. Intl., 108: 16-34.
