
Journal of Applied Sciences

Year: 2008 | Volume: 8 | Issue: 17 | Page No.: 3038-3043
DOI: 10.3923/jas.2008.3038.3043
Artificial Neural Network Analysis of Springback in V Bending
M. Bozdemir and M. Golcu

Abstract: The aim of this study is to define the springback angle with minimum error using the most reliable ANN training algorithm. Training and test data were obtained from experimental studies. Material, bending angle and r/t have been used as the input layer; the springback angle has been used as the output layer. For the testing data, the Root Mean Squared Error (RMSE), the fraction of variance (R2) and the Mean Absolute Percentage Error (MAPE) were found to be 0.003, 0.9999 and 0.0831%, respectively. With these results, we believe that the ANN can be used as an appropriate method for the prediction and analysis of springback in V bending.


How to cite this article
M. Bozdemir and M. Golcu, 2008. Artificial Neural Network Analysis of Springback in V Bending. Journal of Applied Sciences, 8: 3038-3043.

Keywords: Artificial neural networks, springback and V bending

INTRODUCTION

Sheet-metal forming processes, such as bending, stretching and drawing, are widely applied industrially, but the design of tools and the selection of sheet material remain almost invariably dependent on trial and error. The main reason is that the shape of the tools, the characteristics of the material, the process variables and the geometrical configuration of the workpiece all influence the manufacturing process; these characteristics are difficult to formulate into a precise mathematical model. Sheet-metal bending is one of the most widely applied sheet forming operations. Although the process is simple, the bending operation presents several technical problems in production, such as the prediction of spring-back or spring-forward after forming for die design (Huang and Leu, 1998).

Springback, the elastically-driven change in shape of a part upon unloading after forming, is a growing concern as manufacturers increasingly rely on materials with higher strength-to-modulus ratios than traditional low-strength steel (Carden et al., 2002). Springback can be minimized by using suitable die designs, but it cannot be eliminated. One of the most important problems in die design is to minimize springback (Tekiner, 2004). For a given set of tooling, springback is influenced by sheet thickness and material properties, inconsistency of which leads to inconsistent springback. Several methods are successfully employed to restrict springback, such as adaptive control in the machine tool or suitable tool design (bottoming, overbending, etc.) (Inamdar et al., 2000).

A number of researchers have studied analytical models based on the material properties, the effect of tool geometry on springback and finite element analyses used to predict springback (Stelson and Gossard, 1982; Wang et al., 1993; Perduijn and Hoogenboom, 1995; Leu, 1997; Huang and Leu, 1998). Palaniswamy et al. (2004) proposed a conventional optimization method combined with the finite element method to obtain optimum blank dimensions that can reduce springback. Karafillis and Boyce (1996) developed a deformation transfer function for changing the shape of the tool to compensate for springback in sheet metal forming using the finite element method.

ANNs have been studied for many years in the hope of achieving human-like performance in solving problems that are generally ill defined and that require a great amount of processing. The human brain carries this out using millions of neurons working together. Similarly, an ANN consists of many computational elements, operating in parallel, connected by links with variable weights that are typically adapted during the learning process. Development of detailed mathematical models began in the 1960s, but only in recent years have improvements in the science of ANNs allowed the development of manufacturing applications (Forcellese et al., 1998).

In this study, an ANN is used to determine the effects of material, bending angle and r/t on the springback angle. The training data are taken from an experimental study, from which 224 patterns were obtained. Material, bending angle and r/t have been used as the input layer; the springback angle has been used as the output layer.

The results of the system indicate a relatively good agreement between the predicted and experimental values. Experimental determination of the springback angle in V bending is complex, time consuming and costly.

Fig. 1: (a) Bending die and (b) bending terminology

It also requires specific tools. To overcome these difficulties, an ANN can be used for prediction of springback angle in V bending.

Bending theory and springback: Bending is a flexible manufacturing process by which many different shapes can be produced; the metal is plastically deformed, changing its shape. Standard die sets are used to produce a wide variety of shapes. The material is stressed beyond its yield strength but below its ultimate tensile strength. The material is placed on the die and positioned in place with stops and/or gages. It is held in place with hold-downs. The upper part of the press, the ram with the appropriately shaped punch, descends and forms the V-shaped bend. The terminology used in bending is shown in Fig. 1 (Tekiner, 2004).

Air bending is done with the punch touching the workpiece but the workpiece not bottoming in the lower cavity. As the punch is released, the workpiece ends up with less bend than that on the punch (a greater included angle). This is called springback. The amount of springback depends on the material, thickness, grain and temper. The springback usually ranges from 5-10°. Usually, the same angle is used in both the punch and the die to minimize setup time. The inner radius of the bend is the same as the radius on the punch. Bottoming or coining is the bending process where the punch and the workpiece bottom on the die. This makes for a controlled angle with very little springback. The tonnage required on this type of press is greater than in air bending.

Fig. 2: Elastic springback

When a component is formed, the stamping tool bends the metal into a certain angle with a given bend radius. Once the tool is removed, the metal will spring back, widening the angle and increasing the radius. The springback ratio is defined as the final angle after springback divided by the initial stamping angle (Fig. 2).
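As a small illustration of this definition (not from the paper; the angle values below are hypothetical), the following Python snippet computes the springback ratio and the change in angle from an initial stamping angle and the angle measured after unloading.

def springback_ratio(final_angle_deg: float, stamping_angle_deg: float) -> float:
    # Springback ratio K = final angle after springback / initial stamping angle.
    return final_angle_deg / stamping_angle_deg

# Example: a part stamped at 90 deg that opens to 93 deg after unloading
# (illustrative numbers only).
stamping_angle, final_angle = 90.0, 93.0
k = springback_ratio(final_angle, stamping_angle)
change_in_angle = final_angle - stamping_angle
print(f"springback ratio K = {k:.3f}, change in angle = {change_in_angle:.1f} deg")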

In order to understand springback, it is necessary to look at a material`s stress-strain curve. When a bend is being formed, the material is deliberately over-stressed beyond the yield strength in order to induce a permanent deformation.

With some exceptions, the permanent deformation will usually be less than the deformation of the strip intended by the designer. The springback will be equal to the amount of elastic strain recovered when the die is removed. It is also important to note that the stress is highest at the top and bottom surfaces of the strip and falls to zero at the neutral axis of the bend, roughly in the middle of the strip. Therefore, most of the stress in the interior of the strip is elastic and only the outer surfaces undergo yielding. The interior of the strip would like to straighten out the bend when the load is removed, while the outer edges tend to resist straightening. The bend will not return to a zero stress state, but instead will spring back to an equilibrium point where all internal stresses balance. This is why forming operations induce residual stress in the material.

Several variables influence the amount of springback that is seen in a bend. A material with higher yield strength will have a greater ratio of elastic to plastic strain and will exhibit more springback than a material with lower yield strength. On the other hand, a material with a higher elastic modulus will show less springback than a material with a lower elastic modulus. In addition, the r/t ratio of the bend comes into play. A sharp bend will concentrate the stress more than a gradual bend, resulting in more plastic strain. Therefore, smaller r/t ratios will result in less springback. In this study, springback angles at intermediate values are predicted using springback data obtained at four bending angles for steel sheets with different r/t ratios. After the ANN structure was constructed, three different training algorithms were applied to the training and test data. The training algorithm providing the minimum error is identified by comparing the experimental and predicted results.

Using neural networks in springback prediction: Artificial intelligence consists of two major branches: the study of ANNs and expert systems. During the last ten years there has been a substantial increase in interest in ANNs. The neuron is the fundamental processing element of a neural network. An artificial neuron is a model whose components have direct analogs to components of an actual neuron. ANNs have been used successfully in solving complex problems in various fields of engineering, economics, neurology, mathematics, medicine, meteorology and many others. Some of the most important applications are in pattern, sound and speech recognition, in the identification of explosives in passenger suitcases and in the identification of military targets (Kalogirou et al., 1999; Kalogirou et al., 1998; Chouai et al., 2002).

Neural networks operate like a 'black box' model and do not require detailed information about the system. Instead, they learn the relationship between the input parameters and the controlled and uncontrolled variables by studying previously recorded data, similar to the way a non-linear regression might perform. Another advantage of using ANNs is their ability to handle large and complex systems with many interrelated parameters. They seem to simply ignore data that are of minimal significance and concentrate instead on the more important inputs (Kalogirou, 2001).

The output of a specific neuron is a function of the weighted input, the bias of the neuron and the transfer function. Figure 3 shows the basic artificial neuron of the hidden layer.

Fig. 3: Presentation of a basic artificial neuron

A neural network consists of a number of neurons and in a typical network there are an input layer, one or more hidden layers and an output layer. In its simple form, each single neuron is connected to other neurons of a previous layer through adaptable synaptic weights. Knowledge is usually stored as a set of connection weights. The output of any neuron is given by:

y = f(\mathrm{net})    (1)

Where:

\mathrm{net} = \sum_{i=1}^{n} w_i x_i + b    (2)

where x_i are the inputs to the neuron, w_i the corresponding synaptic weights and b the bias.

The transfer function f can be selected from a set of readily available functions.
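As an illustrative sketch of Eq. 1 and 2 (not part of the original paper), the following Python code computes a single neuron's output from its weighted inputs, bias and a log-sigmoid transfer function; the input and weight values are hypothetical.

import numpy as np

def log_sigmoid(x):
    # Log-sigmoid transfer function, one common choice for f.
    return 1.0 / (1.0 + np.exp(-x))

def neuron_output(inputs, weights, bias):
    # Eq. (2): net = weighted sum of the inputs plus the bias.
    net = np.dot(weights, inputs) + bias
    # Eq. (1): the neuron output is the transfer function applied to net.
    return log_sigmoid(net)

# Hypothetical, already-scaled values for the three inputs
# (material code, bending angle and r/t).
x = np.array([0.2, 0.5, 0.8])
w = np.array([0.4, -0.3, 0.7])
print(neuron_output(x, w, bias=0.1))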

Training of the network was performed using the Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and Polak-Ribiere Conjugate Gradient (CGP) backpropagation algorithms. These algorithms iteratively adjust the weights to reduce the error between the measured and the expected outputs of the network. A training set is a group of matched input and output patterns used for training the network, usually by suitable adaptation of the synaptic weights. The outputs are the dependent variables that the network produces for the corresponding input. It is important that all the information the network needs to learn is supplied to the network as a data set. When each pattern is read, the network uses the input data to produce an output, which is then compared to the training pattern (the correct or expected output). If there is a difference, the connection weights are altered in such a direction that the error is decreased. After the network has run through all the input patterns, if the error is still greater than the maximum desired tolerance, the ANN runs through all the input patterns again, repeatedly, until all the errors are within the required tolerance. Once the training reaches a satisfactory level, the network holds the weights constant and the trained network is used to make predictions of the output parameters on new input data sets not used to train it (Bechtler et al., 2001; Gölcü, 2006).
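The following Python sketch (an illustration only, not the authors' implementation) shows this pattern-by-pattern training cycle for a single linear neuron trained by gradient descent: the output is compared with the expected value, the weights are adjusted to reduce the error and the cycle repeats until the error falls within a tolerance.

import numpy as np

# Hypothetical training set: 10 input patterns with 3 inputs each and
# matching target outputs generated from an arbitrary linear rule.
rng = np.random.default_rng(0)
X = rng.uniform(size=(10, 3))
y = X @ np.array([0.5, -0.2, 0.8]) + 0.1

w = np.zeros(3)                          # adaptable synaptic weights
b = 0.0                                  # bias
lr, tolerance, max_epochs = 0.1, 1e-4, 10_000

for epoch in range(max_epochs):
    for xi, ti in zip(X, y):
        out = xi @ w + b                 # output produced for this pattern
        err = out - ti                   # difference from the expected output
        w -= lr * err * xi               # alter weights so the error decreases
        b -= lr * err
    rmse = np.sqrt(np.mean((X @ w + b - y) ** 2))
    if rmse < tolerance:                 # stop once the error is within tolerance
        break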

Gradient descent and gradient descent with momentum are generally slower for practical problems than the other algorithms because they require small learning rates for stable learning. Moreover, the success of these algorithms depends on the user-set learning rate and momentum constant. Algorithms such as Scaled Conjugate Gradient (SCG), BFGS quasi-Newton and Levenberg-Marquardt (LM) are faster and use standard numerical optimization techniques. An ANN with a backpropagation algorithm learns by changing the weights and these changes are stored as perception information, or knowledge. The error is described by the Root-Mean-Squared Error (RMSE), defined as follows:

\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{j=1}^{n}\left(p_j - m_j\right)^2}    (3)

In addition, the absolute fraction of variance (R2) and mean absolute percentage error (MAPE) are defined, respectively, as follows:

R^2 = 1 - \frac{\sum_{j=1}^{n}\left(p_j - m_j\right)^2}{\sum_{j=1}^{n}\left(m_j\right)^2}    (4)

and

\mathrm{MAPE} = \frac{1}{n}\sum_{j=1}^{n}\left|\frac{p_j - m_j}{m_j}\right| \times 100    (5)

where, p is the predicted value, m is the measured value and n is the number of patterns (Bechtler et al., 2001). The multi-layer ANN structure used is shown in Fig. 4. It consists of an input layer with three inputs, one hidden layer and one output layer. The examples in this study are numerical values obtained from the experimental results; 224 patterns were obtained from the experiments. Here, an ANN model was used to predict the springback angle in V bending. The inputs for the network are material, bending angle and r/t; the output is the springback angle.
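A minimal Python sketch of these three error measures (Eq. 3-5) is given below; it assumes the predicted and measured values are held in NumPy arrays and is an illustration, not the authors' code.

import numpy as np

def rmse(p, m):
    # Eq. (3): root-mean-squared error between predicted p and measured m.
    return float(np.sqrt(np.mean((p - m) ** 2)))

def fraction_of_variance(p, m):
    # Eq. (4): absolute fraction of variance, R2.
    return float(1.0 - np.sum((p - m) ** 2) / np.sum(m ** 2))

def mape(p, m):
    # Eq. (5): mean absolute percentage error, in percent.
    return float(np.mean(np.abs((p - m) / m)) * 100.0)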

The experimental results were used for training and testing. Of the total of 224 experimental results, 188 were used as data sets to train the network, while 36 results were used as test data. The architecture of the ANN is thus 3-13-1: 3 corresponding to the input values, 13 to the number of hidden layer neurons and 1 to the output. The back-propagation learning algorithm has been used in a feed-forward network with a single hidden layer. The variants of the algorithm used in the study are the Levenberg-Marquardt (LM), Scaled Conjugate Gradient (SCG) and Polak-Ribiere Conjugate Gradient (CGP) algorithms (Gölcü, 2006).

Fig. 4: ANN architecture used, with 13 neurons in a single hidden layer

Table 1: Sample data sets used for training and testing

The selected neural network architecture consists of one hidden layer of log-sigmoid neurons followed by an output layer of one linear neuron. Linear neurons are those which have a linear (purelin) transfer function.

A computer program was developed under MATLAB 6.5. In the training, the number of neurons in the single hidden layer was varied from 12 to 16. When the network training was successfully finished, the network was tested with the test data. Statistical measures, namely R2, RMSE and MAPE, were used for comparison. Selected sample data sets used for training and testing the network are shown in Table 1.
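Because the original MATLAB program is not reproduced here, the following Python sketch using scikit-learn's MLPRegressor only illustrates a 3-13-1 network with log-sigmoid hidden neurons and a linear output; scikit-learn does not provide Levenberg-Marquardt training, so the 'lbfgs' solver is used purely as a stand-in, and the data arrays are random placeholders rather than the experimental patterns.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X_train = rng.uniform(size=(188, 3))   # placeholder for material, bending angle, r/t
y_train = rng.uniform(size=188)        # placeholder for the measured springback angle

net = MLPRegressor(hidden_layer_sizes=(13,),   # 13 neurons in the single hidden layer
                   activation='logistic',      # log-sigmoid hidden neurons
                   solver='lbfgs',             # stand-in for Levenberg-Marquardt
                   max_iter=30000)
net.fit(X_train, y_train)                      # MLPRegressor's output neuron is linear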

RESULTS AND DISCUSSION

Numerical results obtained from the experiments and the related parameters have been used to train the network. The material, bending angle, r/t and springback angle have been used to train the network. Initially, twelve hidden neurons in a single hidden layer were used for all the algorithms. Then, the number of neurons was increased.

Table 2: Error values of the ANN approach for springback used in training and testing

Fig. 5: Measured and ANN-predicted springback angle results for the test data

The results revealed that the optimum number of hidden neurons is different for different algorithms. In this study, the fastest learning is obtained with the LM algorithm. SCG is also fast, but it produces larger errors than LM. Moreover, the highest MAPE is obtained with CGP for all hidden neuron numbers among the studied algorithms. The error values of the SCG algorithm usually lie between those of LM and CGP.

Statistical values such as RMSE, R2 and MAPE of the springback angle are given in Table 2 for the different training algorithms and numbers of hidden neurons. The LM algorithm with 13 neurons has produced the best results.

It is observed that MAPE is 0.0831% in testing and 0.3326% in training; R2 is 0.9999 in both testing and training; the RMSE value is 0.003 in testing and 0.0021 in training. The worst results are observed with the CGP algorithm compared to the other two algorithms, as shown in Table 2. Its highest MAPE is found to be 8.0965% for the network with 16 hidden neurons.

A comparison of the measured and predicted springback values is shown in Fig. 5 for the test data; the values predicted by the ANN are very close to the measured values.

Fig. 6: Effect of the number of neurons in the hidden layer on the MAPE

Fig. 7: Comparison of measured and predicted springback angle results for testing data

Figure 6 shows the effect of the number of neurons in the hidden layer on the MAPE. The number of training epochs for each neural network is 30,000.

The training error is minimized when 13, 14 and 16 neurons are used for the LM, CGP and SCG algorithms, respectively. Thus, these ANN models with minimum errors are adopted for further studies. As shown in Fig. 7, the developed ANN gives a very accurate representation of the R2 values over the whole range of working conditions.
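A hedged Python sketch of such a hidden-neuron sweep (12 to 16 neurons, keeping the size with the lowest test MAPE) is shown below; the arrays are random placeholders standing in for the 188 training and 36 test patterns, and the solver is again only a stand-in for the algorithms used in the paper.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X_train, y_train = rng.uniform(size=(188, 3)), rng.uniform(size=188)
X_test, y_test = rng.uniform(size=(36, 3)), rng.uniform(size=36)

def mape(p, m):
    # Mean absolute percentage error, in percent (Eq. 5).
    return float(np.mean(np.abs((p - m) / m)) * 100.0)

scores = {}
for n_hidden in range(12, 17):           # sweep 12 to 16 hidden neurons
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), activation='logistic',
                       solver='lbfgs', max_iter=30000)
    net.fit(X_train, y_train)
    scores[n_hidden] = mape(net.predict(X_test), y_test)

best = min(scores, key=scores.get)
print(f"lowest test MAPE with {best} hidden neurons: {scores[best]:.4f}%")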

CONCLUSION

The aim of this study was to investigate the effect of different materials, bending angles and r/t on the springback angle in V bending using neural networks. ANN analyses for different materials (C20, C25, C35) with different bending angles (30, 60, 90 and 120°) and r/t ratios were performed in V bending.

This study introduced an ANN technique for modeling the springback angle in V bending. Of the total of 224 experimental results, 188 were used as data sets to train the network, while 36 results were used as test data. The LM, SCG and CGP algorithms have been studied and the best results were obtained with the LM algorithm with 13 neurons. For the testing data, the root mean squared error (RMSE), the fraction of variance (R2) and the mean absolute percentage error (MAPE) were found to be 0.003, 0.9999 and 0.0831%, respectively. Thus, the ANN predictions can be considered within acceptable limits. The results show good agreement between predicted and measured values.

NOMENCLATURE

ANN : Artificial neural-network
af : Angle after springback
as : Bending angle
CGP : Polak-Ribiere conjugate gradient
LM : Levenberg-Marquardt
MAPE : Mean absolute percentage error
m : Measured value
n : Pattern number
R2 : Absolute fraction of variance
RMSE : Root-mean-squared error
r : Bend radius
rf : Radius after springback
rs : Stamping radius
SCG : Scaled conjugate gradient
p : Predicted value
t : Sheet thickness

REFERENCES

  • Bechtler, H., M.W. Browne, P.K. Bansal and V. Kecman, 2001. New approach to dynamic modeling of vapour-compression liquid chillers: Artificial neural networks. Applied Thermal Engineering, 21: 941-953.

  • Carden, W.D., L.M. Geng, D.K. Matlock and R.H. Wagoner, 2002. Measurement of springback. Int. J. Mech. Sci., 44: 79-101.

  • Chouai, A., S. Laugier and D. Richon, 2002. Modeling of thermodynamic properties using neural networks: Application to refrigerants. Fluid Phase Equilibria, 199: 53-62.

  • Forcellese, A., F. Gabrielli and R. Ruffini, 1998. Effects of the training set size on springback control by neural network in an air bending process. J. Mater. Process. Technol., 80-81: 493-500.

  • Gölcü, M., 2006. Artificial neural network based modeling of performance characteristics of deep well pumps with splitter blade. Energy Convers. Manage., 47: 3333-3343.

  • Huang, Y.M. and D.K. Leu, 1998. Effects of process variables on V-die bending process of steel sheet. Int. J. Mech. Sci., 40: 631-650.

  • Inamdar, M.N., P.P. Date and U.B. Desai, 2000. Studies on the prediction of springback in air vee bending of metallic sheets using an artificial neural network. J. Mater. Process. Technol., 108: 45-54.

  • Kalogirou, S.A., C.S. Neocleous and C.N. Schizas, 1998. Artificial neural networks for modeling the starting-up of a solar steam-generator. Applied Energy, 60: 89-100.

  • Kalogirou, S.A., S. Panteliou and A. Dentsoras, 1999. Modeling of solar domestic water heating systems using artificial neural networks. Solar Energy, 65: 335-342.

  • Kalogirou, S.A., 2001. Artificial neural networks in renewable energy systems applications: A review. Renewable Sustainable Energy Rev., 5: 373-401.

  • Karafillis, A.P. and M.C. Boyce, 1996. Tooling and binder design for sheet metal forming process compensating springback error. Int. J. Mach. Tools Manuf., 36: 503-526.

  • Leu, D.K., 1997. A simplified approach for evaluating bendability and springback in plastic bending of anisotropic sheet metals. J. Mater. Process. Technol., 66: 9-17.

  • Palaniswamy, H., G. Ngaile and T. Altan, 2004. Optimization of blank dimensions to reduce springback in the flexforming process. J. Mater. Process. Technol., 146: 28-34.

  • Perduijn, A.B. and S.M. Hoogenboom, 1995. The pure bending of sheet. J. Mater. Process. Technol., 51: 274-295.

  • Stelson, A.K. and D.C. Gossard, 1982. An adaptive press brake control using an elastic-plastic material model. Trans. ASME J. Eng. Ind., 104: 389-393.

  • Tekiner, Z., 2004. An experimental study on the examination of springback of sheet metals with several thicknesses and properties in bending dies. J. Mater. Process. Technol., 145: 109-117.

  • Wang, C., G. Kinzel and T. Altan, 1993. Mathematical modeling of plane strain bending of sheet and plate. J. Mater. Process. Technol., 39: 279-304.
