
Information Technology Journal

Year: 2006 | Volume: 5 | Issue: 2 | Page No.: 332-335
DOI: 10.3923/itj.2006.332.335
Neuro Computing for Micro Structural Analysis
Fahd Bilal ur Rehman, Shakeel Ahmad and Malik Sikandar Hayat Khiyal

Abstract: This study aimed to provide a cost-effective solution for predicting the changes that occur during a thermo-mechanical process such as rolling, and to provide accurate, optimized information about product properties such as grain size. The primary goal of neuro computing for micro structural analysis was to assist the metallurgist: proper optimization of this process normally depends upon a combination of experience and expensive trials. The final microstructure depends on many parameters, such as alloy composition, working temperature and local compression rate. A multilayer neural network was used, with error back-propagation as the learning algorithm. This technique gave us optimized results.


How to cite this article
Fahd Bilal ur Rehman, Shakeel Ahmad and Malik Sikandar Hayat Khiyal, 2006. Neuro Computing for Micro Structural Analysis. Information Technology Journal, 5: 332-335.

Keywords: grain size, feed forward neural network, metallurgy, back propagation and rolling

INTRODUCTION

The problem in the modeling of neuro computing for micro structural analysis can be broadly stated as follows: given a certain material which undergoes a specified rolling process, what are the final properties of this material? Typical final properties in which we are interested are the micro structural properties, such as the mean grain size. A trial-and-error approach to this problem has often been taken in the materials industry, with different conditions attempted to achieve a given final product. The obvious drawback of this approach is its high financial and time cost. Another method is to develop a parameterized, physically motivated model and to solve for the parameters using empirical data[1]. However, the limitation of this approach is that in the physical theory the micro structural evolution depends upon several intermediate microscopic variables, which have to be measured in order to apply the model. Some of these variables, such as dislocation density, are difficult and time-consuming to measure, making it impracticable to apply such an approach to large-scale industrial processes. Our approach to the prediction of rolled microstructure is therefore to develop an empirical model in which we define a parameterized, non-linear relationship between the micro structural variables of interest. For a given problem we use an error back-propagation neural network with a momentum term. This model is capable of producing good results.

SAMPLE PREPARATION FOR TRAINING DATA SET

The thermo-mechanical[2] process in which we are interested is rolling, and to carry out the rolling process we need a specimen. A metallographic specimen often has to be cut out of a large piece into a manageable size and shape. It is important that no deformation of the material occurs during this process, so that the specimen can be ground directly. Normally an abrasive cutting technique is used for sectioning the material because it causes less deformation. Metals require good cooling to prevent alteration of the microstructure due to heat. To obtain smoothness, the surface is usually subjected to a grinding operation, which brings the roughness down to approximately 0.1 μm. Like grinding, polishing[3] is essential for smoothing the specimen surface; the process is carried out until a mirror-like surface is produced. In all metallographic preparatory work, cleanliness is an important requirement. Before grinding and polishing, impurities originally sticking to the specimen, such as oil, grease and dust, must be removed. Cleanliness is required after each step of preparation, but it applies especially to the polishing stage; without it, the examination will not give a true interpretation. Almost all polished surfaces reflect light equally, so they are very difficult to distinguish visually and it is necessary to develop a contrast. To provide this contrast the sample is etched. After etching, the sample is ready for microstructure examination under a microscope.

Fig. 1: Rolled material, 30% reduction at 450°C

Fig. 2: Rolled material, 60% reduction at 450°C

Fig. 3: Rolled material, 70% reduction at 450°C

The grain size of the sample is determined by the intercept method[4]. In this method the grain size is estimated by counting, on a ground-glass screen, the number of grains intersected by one or more straight lines; grains touched by the end of a line are counted as half grains. Counts are made on at least three fields to ensure a reasonable average. The length of the line in millimeters, divided by the average number of grains intersected, gives the grain diameter or grain size. For the training of the network we have collected some rolled AA-2024 micrographs, which are presented in Fig. 1-3.
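A minimal sketch of this calculation in Python is given below; the counting itself is done manually on the ground-glass screen, and the function name and the field counts used here are hypothetical, not taken from the experimental work.

    def intercept_grain_size(line_length_mm, grain_counts):
        # Mean lineal-intercept grain size: test-line length divided by
        # the average number of grains intersected. Grains touched by the
        # end of the line should already be counted as 0.5 each.
        if len(grain_counts) < 3:
            raise ValueError("use at least three fields for a reasonable average")
        average = sum(grain_counts) / len(grain_counts)
        return line_length_mm / average

    # Three hypothetical fields counted along a 100 mm test line:
    intercept_grain_size(100.0, [21.5, 23.0, 22.5])   # ~4.48 mm per grain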

NEURO COMPUTING MODEL

We have used a three-layer feed-forward neural network[5] and the training of the designed model is carried out using the error back-propagation algorithm[6]. The goal of the proposed model is to iteratively minimize the network error E:

E = \frac{1}{2} \sum_k (t_k - o_k)^2        (1)

where t_k is the desired output and o_k the actual output of output node k.

So we adjust the weights in the direction of the negative gradient of E with respect to the network weights, such that the change in the network weights at any point in the training can be obtained by:

\Delta W = -\eta \frac{\partial E}{\partial W}        (2)

The network is trained iteratively and the weights are perturbed towards their optimal values after each pass of the data or a subset of it. The cycle between weight updates is a single iteration. Thus the general equation for weight updates between two layers, indexed by i and j, is

\Delta W_{ij}^{(n)} = -\eta \frac{\partial E}{\partial W_{ij}^{(n)}}        (3)

where W_{ij} is the weight connecting node i in one layer to node j in the next, the superscript denotes the layer number and η is a constant (the learning rate). First consider the weight update for the output layer of the multilayer neural network. Applying the chain rule to Eq. 3:

\frac{\partial E}{\partial W_{ij}} = \frac{\partial E}{\partial net_j} \cdot \frac{\partial net_j}{\partial W_{ij}}        (4)

where net_j = \sum_i W_{ij} o_i is the weighted input to node j, so that \partial net_j / \partial W_{ij} = o_i.

Suppose,

\delta_j = -\frac{\partial E}{\partial net_j}        (5)

Applying the chain rule again to Eq. 5 we get

\delta_j = -\frac{\partial E}{\partial o_j} \cdot \frac{\partial o_j}{\partial net_j}        (6)

Suppose,

o_j = f(net_j) = \frac{1}{1 + e^{-net_j}}        (7)

\frac{\partial o_j}{\partial net_j} = o_j (1 - o_j)        (8)

Substituting Eq. 8 into Eq. 4 (and noting from Eq. 1 that -\partial E / \partial o_j = t_j - o_j) we get

\Delta W_{ij} = \eta\, (t_j - o_j)\, o_j (1 - o_j)\, o_i = \eta\, \delta_j\, o_i        (9)

Equation 9 is used to change the weights for the output layer.

The rate of convergence towards the minimum of E can be improved by the addition of a momentum term. This term is simply the previous weight update; it forces the present weight update to continue in the general direction of the negative gradient, rather than responding only to individual updates of W, which may leave the search trapped in a local minimum. In this sense the term adds 'inertia' to the gradient descent. Although the use of a momentum term is not the most efficient means of converging on the global minimum in the shortest possible training time, it nonetheless improves training times over pure gradient descent, is quite robust and helps to avoid local minima in the error surface. A faster approach is to search for minima along conjugate gradients[7], but this method is even more prone to becoming stuck in local minima[8]. So the weight update equation for the output layer is

\Delta W_{ij}(n) = \eta\, \delta_j\, o_i + \alpha\, \Delta W_{ij}(n - 1)        (10)

where n indexes the training iteration.

The constants η and α determine the relative contributions of the gradient and momentum terms as well as the overall magnitude of the weight updates.
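For example, with hypothetical values η = 0.5 and α = 0.9, a gradient term δ_j o_i = 0.02 and a previous update ΔW_ij(n-1) = 0.01, Eq. 10 gives ΔW_ij(n) = 0.5 × 0.02 + 0.9 × 0.01 = 0.019, so the previous step contributes almost half of the new update.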

Now consider the weight update for the hidden layer of the artificial neural network. In this case the error calculation is not as simple as in the output layer, because the hidden-layer error depends upon the output-layer errors, whereas the output error is calculated directly from the expected and computed outputs of the neurons. The weighted input to a hidden node j is

net_j = \sum_i W_{ij} O_i        (11)

where O_i is the output of the previous layer and the input to the current layer. As before, the weight change is

\Delta W_{ij} = -\eta \frac{\partial E}{\partial W_{ij}}        (12)

Applying the chain rule again to Eq. 12:

\frac{\partial E}{\partial W_{ij}} = \frac{\partial E}{\partial net_j} \cdot \frac{\partial net_j}{\partial W_{ij}}        (13)

where, from Eq. 11, \partial net_j / \partial W_{ij} = O_i.

So,

\Delta W_{ij} = -\eta\, \frac{\partial E}{\partial net_j}\, O_i        (14)

First consider the term \partial E / \partial net_j from Eq. 14 and apply the chain rule over the nodes k of the following layer:

\frac{\partial E}{\partial net_j} = \sum_k \frac{\partial E}{\partial net_k} \cdot \frac{\partial net_k}{\partial net_j}        (15)

Equation 15 can also be written as:

\frac{\partial E}{\partial net_j} = \sum_k \frac{\partial E}{\partial net_k} \cdot \frac{\partial net_k}{\partial o_j} \cdot \frac{\partial o_j}{\partial net_j}        (16)

By Eq. 5

\frac{\partial E}{\partial net_k} = -\delta_k        (17)

Substituting the value of Eq. 17 into Eq. 16, with \partial net_k / \partial o_j = W_{jk} and \partial o_j / \partial net_j = o_j (1 - o_j):

\frac{\partial E}{\partial net_j} = -\,o_j (1 - o_j) \sum_k \delta_k W_{jk}        (18)

So the error for the hidden layer, obtained by substituting Eq. 18 into Eq. 14 (with \delta_j = -\partial E / \partial net_j), is

\delta_j = o_j (1 - o_j) \sum_k \delta_k W_{jk}        (19)

and, adding the momentum term as in Eq. 10,

\Delta W_{ij}(n) = \eta\, \delta_j\, O_i + \alpha\, \Delta W_{ij}(n - 1)        (20)

we get the weight update formula for the hidden layer. It is unlikely that the network weights will converge exactly on their optimal values in a finite number of iterations, although with sufficient training they may become arbitrarily close[9]. For the software implementation of the neuro computing model we use Eq. 10 and 20[10]. The software implementation takes three steps:

Step 1: Initialize the network weights
Step 2: Calculate the output of each neuron
Step 3: Compare the actual output with the desired output, calculate the δ for all layers and update the weights using Eq. 10 and 20 (a minimal sketch of these steps follows)
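As an illustration only, a minimal Python sketch of these three steps is given below. The sigmoid activation follows Eq. 7, but the layer sizes, random initialization and the values η = 0.5 and α = 0.9 are assumptions for the sketch, not the authors' original implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        # Eq. 7: o = 1 / (1 + e^{-net})
        return 1.0 / (1.0 + np.exp(-x))

    class ThreeLayerNet:
        """Three-layer feed-forward network trained by error
        back-propagation with a momentum term (Eq. 10 and 20)."""

        def __init__(self, n_in, n_hid, n_out, eta=0.5, alpha=0.9):
            self.eta, self.alpha = eta, alpha
            # Step 1: initialize the network weights with small random values.
            self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
            self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
            self.dW1 = np.zeros_like(self.W1)   # previous updates, for momentum
            self.dW2 = np.zeros_like(self.W2)

        def forward(self, x):
            # Step 2: calculate the output of each neuron, layer by layer.
            self.o_hid = sigmoid(x @ self.W1)
            self.o_out = sigmoid(self.o_hid @ self.W2)
            return self.o_out

        def train_pattern(self, x, t):
            o = self.forward(x)
            # Step 3: compare actual and desired outputs and compute the deltas.
            delta_out = (t - o) * o * (1.0 - o)                                # Eq. 9
            delta_hid = self.o_hid * (1.0 - self.o_hid) * (delta_out @ self.W2.T)  # Eq. 19
            # Weight updates with momentum (Eq. 10 and 20).
            self.dW2 = self.eta * np.outer(self.o_hid, delta_out) + self.alpha * self.dW2
            self.dW1 = self.eta * np.outer(x, delta_hid) + self.alpha * self.dW1
            self.W2 += self.dW2
            self.W1 += self.dW1
            return 0.5 * float(np.sum((t - o) ** 2))                           # Eq. 1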

RESULTS

The model was trained using a set of 8 data pairs with reduction ratios from 10 to 80%, as shown in Table 1.

Once the model is trained, it is used to predict grain sizes for unknown parameters as well. A plot is drawn comparing the experimental results with the predicted results.
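As a usage illustration of the sketch above (the numbers below are hypothetical placeholders, not the experimental values of Table 1), training on normalized reduction-ratio/grain-size pairs could look like this:

    # Hypothetical normalized training pairs: reduction ratio -> grain size.
    pairs = [(np.array([0.1]), np.array([0.9])),
             (np.array([0.4]), np.array([0.6])),
             (np.array([0.8]), np.array([0.2]))]

    net = ThreeLayerNet(n_in=1, n_hid=4, n_out=1)
    for epoch in range(5000):                 # iterate until the error is small
        err = sum(net.train_pattern(x, t) for x, t in pairs)

    net.forward(np.array([0.6]))              # predicted (normalized) grain size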

Table 1: Training vector collected during experimental work

Fig. 4: Result of the prediction model

As Fig. 4 shows, the prediction of the model is very close to the experimental values. The average error between the predicted and experimental values is 6-7%, which is much less than the noise in the experimental measurement of grain size. This makes the model well suited to industry, where frequent microstructure analysis is carried out.

We have tested our model on the prediction of known as well as unknown vectors and it gives optimized results (Fig. 4).

REFERENCES

  • Furu, T., H.R. Shercliff, C.M. Sellars and M.F. Ashby, 1996. Physically-based modelling of strength, microstructure and recrystallisation during thermomechanical processing of Al-Mg alloys. Mater. Sci. Forum, 217: 453-458.


  • Metals Handbook, 1995. Metallography and Microstructures. 9th Edn., American Society for Metals, USA


  • Manchanda, V.K., 1995. Material and Metallography. Khanna Publishers, New Delhi, India


  • Bennett, E.G. and B. Roebuck, 1999. WC grain size measurement. NPL Good Practice Guide No. 22.


  • Gupta, R., H.P. Gupta and C.A.L. Bailer-Jones, 2001. Automated Data Analysis in Astronomy. Narosa Publishing House, New Delhi, India, pp: 51-68


  • MacKay, D.J.C., 1995. Probable networks and plausible predictions: A review of practical Bayesian methods for supervised neural networks. Network Comput. Neural Syst., 6: 469-505.


  • Gavard, L., H.K.D.H. Bhadeshia, D.J.C. MacKay and S. Suzuki, 1996. Bayesian neural network model for austenite formation in steels. Mater. Sci. Technol., 12: 453-463.


  • Gibbs, M.N., 1997. Bayesian Gaussian processes for regression and classification. Ph.D. Thesis, University of Cambridge.


  • Rumelhart, D.E., G.E. Hinton and R.J. Williams, 1986. Learning representations by back-propagating errors. Nature, 323: 533-536.


  • Rao, V.B. and H.B. Rao, 1996. Neural Networks and Fuzzy Logic. MIS Press, A Subsidiary of Henry Holt and Co., New York, USA
