
Journal of Applied Sciences

Year: 2014 | Volume: 14 | Issue: 17 | Page No.: 1990-1995
DOI: 10.3923/jas.2014.1990.1995
Research on Grain Yield Prediction Method Based on Improved PSO-BP
Liguo Zhang, Jiangtao Liu and Lifu Zhi

Abstract: To address the highly nonlinear and uncertain behavior of grain yield changes, a new grain yield prediction method based on an improved PSO-BP model is proposed. By introducing a mutation operation and adaptive adjustment of the inertia weight, the problems of premature convergence, entrapment in local optima, low precision and low efficiency in later iterations of PSO are alleviated. Using the improved PSO to optimize the parameters of the BP neural network effectively improves the learning rate and optimization capability of the conventional BP network. Simulation results for grain production prediction show that the prediction accuracy of the new method is significantly higher than that of the conventional BP neural network method and that the method is effective and feasible.


How to cite this article
Liguo Zhang, Jiangtao Liu and Lifu Zhi, 2014. Research on Grain Yield Prediction Method Based on Improved PSO-BP. Journal of Applied Sciences, 14: 1990-1995.

Keywords: back-propagation neural network, particle swarm optimization, grain yield prediction

INTRODUCTION

The grain problem is a major issue for any country. Although grain has been in oversupply in recent years, grain security cannot be ignored. For a large agricultural country, protecting and maintaining national grain security is particularly important, and rational analysis and forecasting of grain production capacity provides an important reference for setting and achieving grain security objectives. Many scholars, at home and abroad, have carried out related research and constructed a number of valuable theoretical hypotheses and prediction models (Ma and Feng, 2008; Nie, 2012; Li, 2009; Su et al., 2006; Wu and Li, 2002; Cheng and Liu, 2008). Su et al. (2006) compared the forecasting performance of stepwise regression, the Back-Propagation (BP) neural network and the GM(1,N) gray system. Wu and Li (2002) used a non-linear artificial neural network BP model for corn production prediction in China. Based on historical data, Cheng and Liu (2008) applied the gray system to grain yield prediction and put forward a gray relational analysis BP artificial neural network model for corn production prediction. It is generally agreed that many factors affect grain yield, the main ones being planted area, water, farming techniques, seeds, fertilizers, etc. However, the fluctuation of grain yield shows high nonlinearity and uncertainty, which makes accurate prediction difficult. Artificial neural network prediction methods can handle such nonlinear and uncertain problems well (Wei, 2013; Sadeghi, 2000) but also have shortcomings: model training is slow, time and space complexity is high and the model easily falls into local optima.

Particle Swarm Optimization (PSO) is a population-based stochastic optimization technique developed by Kennedy and Eberhart (1995). As a swarm intelligence search algorithm, it guides the group search through cooperation and competition among the particles of the population. It has many merits, such as parallel global search, a simple and convenient model, few parameters to adjust, fast convergence and easy implementation (Sadeghi, 2000). Thus, using the PSO algorithm for a pre-search of the BP neural network can overcome the deficiencies of the BP algorithm. However, when there are many local optima, the standard PSO algorithm also tends to fall into them. Many researchers have studied improvements to the PSO algorithm and achieved some success (Kennedy and Eberhart, 1995; He and Guo, 2013; Yan et al., 2013; Hao et al., 2011). This study proposes a grain yield prediction method based on an improved PSO-BP model; the prediction results show that the model effectively improves prediction accuracy.

BP NEURAL NETWORK

Artificial neural networks are a powerful tool for predicting nonlinear behavior. These mathematical models comprise individual processing units, called neurons, that resemble biological neural activity. Each processing unit sums its weighted inputs and then applies a linear or non-linear function to the resulting sum to determine the output. The neurons are arranged in layers and combined through dense connectivity. Among hierarchical feed-forward network architectures, the back-propagation network has received the most attention.

Typically, a three-layer BP neural network (input layer, hidden layer and output layer) can realize a mapping from n independent variables to m dependent variables. The main features of a BP neural network are the forward propagation of the input signal and the back-propagation of the error. The network's weights and threshold values are adjusted according to the prediction error.

The signal input from outside spreads to the output layer, giving the result after layer-by-layer processing by the neurons of the input and hidden layers. If the expected output cannot be obtained at the output layer, the process switches to backward propagation: the error between the true value and the network output is propagated back along the original connections and is reduced by modifying the connection weights of the neurons in every layer. The process then returns to forward propagation and iterates until the error is smaller than a given value (Su et al., 2006; Wu and Li, 2002). The topology of the BP neural network is shown in Fig. 1.

Here, X1, X2,…, Xn are the input values of the neural network, Y1, Y2,…, Ym are the predicted values and ωij and ωjk are the network weights. Before use, the network must first be trained.

Fig. 1: Topology of the BP neural network

The training process includes the following steps (a minimal code sketch follows the list):

Step 1: Initialize the network. According to the input and output of the actual system, determine the numbers of input layer, hidden layer and output layer nodes; initialize ωij, ωjk and the threshold values of both the hidden layer and the output layer; set the learning rate and the neuron activation function
Step 2: Calculate the hidden layer output based on Eq. 1:

H_j = f\left( \sum_{i=1}^{n} \omega_{ij} x_i - a_j \right), \quad j = 1, 2, \ldots, l    (1)

where l is the number of hidden layer nodes, aj is the threshold value of the j-th hidden node and f is the activation function of the hidden layer. In this study, Eq. 2 is selected as f:

f(x) = \frac{1}{1 + e^{-x}}    (2)

Step 3: Calculate the output value of the output layer based on Eq. 3:

O_k = \sum_{j=1}^{l} H_j \omega_{jk} - b_k, \quad k = 1, 2, \ldots, m    (3)

  where, bk is the threshold value of the k-th output layer node
Step 4: Calculate the prediction error according to the network predicted output and the desired output:

e_k = Y_k - O_k, \quad k = 1, 2, \ldots, m    (4)

Step 5: Update the connection weights by the prediction error ek:

\omega_{ij} = \omega_{ij} + \eta H_j (1 - H_j) x_i \sum_{k=1}^{m} \omega_{jk} e_k, \quad i = 1, \ldots, n;\ j = 1, \ldots, l    (5)

\omega_{jk} = \omega_{jk} + \eta H_j e_k, \quad j = 1, \ldots, l;\ k = 1, \ldots, m    (6)

  where, η is the learning rate
Step 6: Update the threshold value based on Eq. 7 and 8:

a_j = a_j + \eta H_j (1 - H_j) \sum_{k=1}^{m} \omega_{jk} e_k, \quad j = 1, 2, \ldots, l    (7)

b_k = b_k + e_k, \quad k = 1, 2, \ldots, m    (8)

Step 7: Determine whether the iteration has converged; if not, return to Step 2
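As an illustration of Steps 2-7, a minimal NumPy sketch of one forward pass and one weight/threshold update is given below; the network size, learning rate and sample values are placeholder assumptions, not the settings used in the study.

import numpy as np

# Placeholder dimensions: n inputs, l hidden nodes, m outputs (illustrative only)
n, l, m = 6, 5, 1
rng = np.random.default_rng(0)

# Step 1: initialize weights, thresholds and learning rate
w_ih = rng.uniform(-1, 1, (n, l))    # omega_ij: input -> hidden weights
w_ho = rng.uniform(-1, 1, (l, m))    # omega_jk: hidden -> output weights
a = rng.uniform(-1, 1, l)            # hidden-layer thresholds a_j
b = rng.uniform(-1, 1, m)            # output-layer thresholds b_k
eta = 0.1                            # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # activation function of Eq. 2

def train_step(x, y):
    """One forward pass (Steps 2-3) and one parameter update (Steps 4-6)."""
    global w_ih, w_ho, a, b
    H = sigmoid(x @ w_ih - a)               # hidden-layer output (Step 2)
    O = H @ w_ho - b                        # network output (Step 3)
    e = y - O                               # prediction error (Step 4)
    grad_h = H * (1 - H) * (w_ho @ e)       # error back-propagated to the hidden layer
    w_ih += eta * np.outer(x, grad_h)       # update omega_ij (Step 5)
    w_ho += eta * np.outer(H, e)            # update omega_jk (Step 5)
    a += eta * grad_h                       # update hidden thresholds (Step 6)
    b += e                                  # update output thresholds (Step 6)
    return float(np.sum(e ** 2))

# Step 7: repeat over the training samples until the error is small enough
err = train_step(rng.random(n), np.array([0.5]))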

PSO ALGORITHM AND ITS IMPROVEMENT

Standard PSO algorithm: In a Q-dimensional search space, consider a particle community composed of n particles. The relevant parameters of the i-th particle are denoted as follows: the position vector is xi = (xi1, xi2,…, xiQ), i = 1, 2,…, n; the flight velocity is vi = (vi1, vi2,…, viQ); the best position found so far by the i-th particle is pi = (pi1, pi2,…, piQ) (namely Pbest); and the best position found so far by the whole particle community is pg = (pg1, pg2,…, pgQ) (namely Gbest). Searching for the optimal solution in the Q-dimensional space amounts to searching for the particle in the best position. Following three principles, maintaining its inertia, tracking its own best position and tracking the community's best position, each particle updates its state at every step.

In every iteration, the particles update their velocity and position by Eq. 9:

v_{id}^{k+1} = \omega v_{id}^{k} + c_1 \xi (p_{id} - x_{id}^{k}) + c_2 \eta (p_{gd} - x_{id}^{k}), \qquad x_{id}^{k+1} = x_{id}^{k} + \lambda v_{id}^{k+1}, \quad d = 1, 2, \ldots, Q    (9)

where ω denotes the inertia weight used to retain part of the previous velocity, c1 and c2 denote the learning factors (acceleration coefficients), ξ and η are uniformly distributed random numbers in [0, 1], λ is the constraint factor and [-vmax, vmax] is the velocity range for each dimension of a particle. The standard PSO algorithm flow is shown in Fig. 2.
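A compact sketch of this standard update rule, with an illustrative objective function and assumed swarm size, bounds and coefficients, might look as follows:

import numpy as np

rng = np.random.default_rng(1)

def pso(objective, dim, n_particles=20, iters=50,
        w=0.8, c1=1.49445, c2=1.49445, lam=1.0, v_max=1.0):
    """Standard PSO: velocity and position update of Eq. 9."""
    x = rng.uniform(-1, 1, (n_particles, dim))           # positions
    v = rng.uniform(-v_max, v_max, (n_particles, dim))   # velocities
    pbest = x.copy()                                      # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    g = pbest[np.argmin(pbest_val)].copy()                # global best position
    for _ in range(iters):
        xi = rng.random((n_particles, dim))               # random factor xi
        et = rng.random((n_particles, dim))               # random factor eta
        v = w * v + c1 * xi * (pbest - x) + c2 * et * (g - x)
        v = np.clip(v, -v_max, v_max)                     # keep v in [-v_max, v_max]
        x = x + lam * v                                   # constraint factor lambda
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

# Example: minimize a simple sphere function in 5 dimensions
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=5)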

Improvement of PSO: Since the standard PSO algorithm easily falls into local optima, this study introduces a mutation operation into the PSO algorithm. The basic idea is to re-initialize a particle after each update with a certain probability. The adaptive mutation operation for the i-th particle is as follows:

x_{ij} = \begin{cases} x_{random}, & r < P \\ x_{ij}, & \text{otherwise} \end{cases}    (10)

where xij denotes the j-th component of particle xi, P denotes the mutation probability, r is a uniformly distributed random number in [0, 1] and xrandom is a random value drawn between the minimum and maximum positions of the particle.

Fig. 2: Standard PSO algorithm flow

Research shows that a linearly decreasing inertia weight can better balance the global and local search abilities. This study adopts the following method to obtain the inertia weight value:

\omega = \omega_{start} - \left( \omega_{start} - \omega_{end} \right) \frac{k}{T_{max}}    (11)

where ωstart denotes the initial inertia weight, ωend denotes the inertia weight at the maximum iteration number, k denotes the current iteration number and Tmax denotes the maximum iteration number.
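A sketch of these two modifications, the adaptive mutation of Eq. 10 and the linearly decreasing inertia weight of Eq. 11, is given below; the mutation probability, position range, ωstart and ωend are assumed values for illustration.

import numpy as np

rng = np.random.default_rng(2)

def mutate(x, p=0.1, x_min=-1.0, x_max=1.0):
    """Adaptive mutation (Eq. 10): with probability p, re-initialize
    each component of the particle inside its position range."""
    mask = rng.random(x.shape) < p
    x_new = x.copy()
    x_new[mask] = rng.uniform(x_min, x_max, size=int(mask.sum()))
    return x_new

def inertia_weight(k, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight (Eq. 11)."""
    return w_start - (w_start - w_end) * k / t_max

# Example: the weight drops from w_start toward w_end over the iterations
ws = [inertia_weight(k, 50) for k in range(51)]
x_mut = mutate(rng.uniform(-1, 1, 41))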

IMPROVED PSO-BP NETWORK

The BP neural network learning process is the process of updating the connection weights and thresholds of the network. The purpose of using the PSO algorithm to optimize the BP neural network is to obtain better initial network weights and thresholds. The basic idea is to let the position of each individual particle in PSO represent all of the initial network connection weights and threshold parameters, to take the prediction error of the BP neural network initialized with that individual as the individual's fitness value and to find the best initial weights and thresholds through the particle swarm search.
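One possible layout for mapping a flat particle position onto the BP network parameters is sketched below; the ordering of the segments (input-hidden weights, hidden-output weights, hidden thresholds, output thresholds) is an illustrative assumption.

import numpy as np

def decode_particle(position, n, l, m):
    """Split a flat particle position into BP weights and thresholds.
    Its length must equal n*l + l*m + l + m (all weights plus all thresholds)."""
    position = np.asarray(position, dtype=float)
    assert position.size == n * l + l * m + l + m
    idx = 0
    w_ih = position[idx:idx + n * l].reshape(n, l); idx += n * l   # omega_ij
    w_ho = position[idx:idx + l * m].reshape(l, m); idx += l * m   # omega_jk
    a = position[idx:idx + l]; idx += l                            # hidden thresholds
    b = position[idx:idx + m]                                      # output thresholds
    return w_ih, w_ho, a, b

# Example: with 6 inputs, 5 hidden nodes and 1 output the particle length is 41
w_ih, w_ho, a, b = decode_particle(np.zeros(41), n=6, l=5, m=1)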

The detailed algorithm can be summarized as follows (a condensed code sketch follows the list):

Design and initialize the network, normalize the samples
Initialize PSO, such as, population size, particle structure, location and speed
Calculate the fitness value of each particle. The following equation (Eq. 12), based on the prediction error, is taken as the particle fitness function:

F = \sum_{i=1}^{N} \left| y_i - d_i \right|    (12)

 where N denotes the number of training samples, di denotes the desired output of the i-th sample and yi denotes the network output for the i-th sample
According to the fitness value of each particle, update its personal best position Pbest and the global best position Gbest
According to Eq. 9, adjust the position and velocity of each particle
According to Eq. 10, perform the adaptive mutation operation
If the convergence criterion is met (the maximum number of iterations is reached or the error is acceptable), stop the iteration; Gbest then gives the initial parameter values of the BP network and further learning and training with the BP algorithm produces the prediction model. Otherwise, go to step 3 for the next iteration
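Putting the pieces together, a condensed, self-contained sketch of the improved PSO-BP search for the initial weights and thresholds is given below; the training data are random placeholders and the network size, mutation probability and inertia-weight bounds are assumed values.

import numpy as np

rng = np.random.default_rng(3)
n, l, m = 6, 5, 1                        # inputs, hidden nodes, outputs (placeholder sizes)
dim = n * l + l * m + l + m              # particle length (41 for these sizes)

# Placeholder training data standing in for the normalized yearly samples
X = rng.random((12, n))
D = rng.random((12, m))

def forward(pos, x):
    # Forward pass of the BP network encoded by a particle position
    w_ih = pos[:n * l].reshape(n, l)
    w_ho = pos[n * l:n * l + l * m].reshape(l, m)
    a = pos[n * l + l * m:n * l + l * m + l]
    b = pos[-m:]
    h = 1.0 / (1.0 + np.exp(-(x @ w_ih - a)))
    return h @ w_ho - b

def fitness(pos):
    # Eq. 12: sum of absolute prediction errors over the training samples
    return float(np.sum(np.abs(forward(pos, X) - D)))

n_p, t_max, c1, c2, p_mut = 20, 50, 1.49445, 1.49445, 0.05
x = rng.uniform(-1, 1, (n_p, dim))
v = rng.uniform(-1, 1, (n_p, dim))
pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()

for k in range(t_max):
    w = 0.9 - (0.9 - 0.4) * k / t_max                          # Eq. 11
    v = (w * v + c1 * rng.random((n_p, dim)) * (pbest - x)
               + c2 * rng.random((n_p, dim)) * (gbest - x))    # Eq. 9
    x = x + v
    mask = rng.random((n_p, dim)) < p_mut                      # Eq. 10: adaptive mutation
    x[mask] = rng.uniform(-1, 1, size=int(mask.sum()))
    vals = np.array([fitness(p) for p in x])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = x[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

# gbest now supplies the initial weights and thresholds for further BP training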

GRAIN YIELD PREDICTION BASED ON IMPROVED PSO-BP

According to previous studies and the China Statistical Yearbook, many factors affect grain yield, including the effective irrigated area, the total number of people engaged in agricultural production, the grain sown area, the disaster-affected area, rural hydropower generating capacity, total agricultural machinery power, agricultural infrastructure investment and the consumption of fertilizer. In the proposed prediction model, the effective irrigated area (10³ hm²), consumption of fertilizer (million tons), disaster-affected area (10³ hm²), grain sown area (10³ hm²), total agricultural machinery power (10⁴ kW) and agricultural infrastructure investment (billion yuan) are taken as inputs and grain yield (million tons) is taken as the output. The resulting BP neural network structure is shown in Fig. 3. The sample data collected from 1990 to 2001 are used as training data and the sample data from 2002 to 2007 as testing data. In the test, the relevant parameters of the PSO algorithm are as follows: the number of iterations is 50, the population size is 20, c1 = 1.49445, c2 = 1.49445 and the length of each particle is 41. The best individual fitness curve of each generation during the improved PSO optimization is shown in Fig. 4. The training error curves of the standard BP network and of the BP network optimized by the improved PSO are shown in Fig. 5 and 6, respectively. The obtained optimal initial weights and thresholds of the BP neural network are shown in Table 1. A comparison of the grain yields from 2002 to 2007 predicted by the proposed method and by other methods is shown in Table 2.
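The particle length of 41 is consistent with this 6-input, 1-output network having 5 hidden nodes (an assumption, since the hidden layer size is not stated explicitly here), because each particle must carry every connection weight and threshold of the network:

6 \times 5\ (\omega_{ij}) + 5 \times 1\ (\omega_{jk}) + 5\ (a_j) + 1\ (b_k) = 41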

Table 1: Optimal initial weights and thresholds

Fig. 3: BP network structure for grain yield prediction

Table 2: Prediction contrast

Fig. 4: Best individual fitness curve of improved PSO

Fig. 5: Training error curve of standard BP network

As can be seen from Fig. 4, the improved PSO-BP neural network method shows better optimization capability during evolution than the standard BP neural network. Under the same training accuracy, a comparison of Fig. 5 and 6 shows that the improved PSO method reaches the convergence criterion (0.00001) at the 12th generation, clearly superior to the conventional BP network (25th generation). The data comparison in Table 2 shows that, for the same statistical data, the prediction accuracy of the improved PSO-BP method is superior to that of the conventional BP network method.

Fig. 6: Training error curve of BP network optimized by improved PSO

The maximum relative errors of the BP network method and the improved PSO-BP method are 11.2 and 4.6%, respectively. In addition, for grain yield production, the maximum relative errors of the proposed method and of the method of Nie (2012) are 0.01 and 5.7%, respectively. From this comparison, it can be seen that the proposed grain yield prediction method is effective and feasible.

CONCLUSION

Grain yield prediction is a complex agricultural and statistical problem affected by many factors and its limited historical data make accurate prediction difficult. This study proposed an improved PSO-BP based grain yield prediction method that optimizes the BP neural network parameters through the improved PSO, effectively improving the overall learning ability and overcoming the tendency to fall into local optima. The test results for the 2002-2007 grain yield show that the proposed method is significantly better than the BP neural network method and the grey-relational support vector machine based method and has good application prospects.

REFERENCES

  • Ma, W. and Z. Feng, 2008. China grain production factors analysis-based on the empirical analysis on panel data. Shanxi J. Agric. Sci., 1: 163-166.


  • Nie, S., 2012. Grain production prediction based on grey-relational support vector machine. Comput. Simul., 29: 220-223.


  • Li, J.X., 2009. A projection of Henan grain production based on grey systems models. J. Henan Univ. Technol. (Soc. Sci. Edn.), 5: 1-3, 7.


  • Su, B., L. Liu and F. Yang, 2006. Comparison and research of grain production forecasting with methods of GM(1,N) gray system and BPNN. J. China Agric. Univ., 11: 99-104.


  • Wu, Y. and J. Li, 2002. Non-linear artificial neural network model and its application in corn production prediction. J. Henan Normal Univ. (Nat. Sci.), 30: 35-38.


  • Cheng, W. and G. Liu, 2008. Application of gray system to grain yield prediction. J. Hunan Inst. Eng. (Nat. Sci. Edn.), 16: 62-64.


  • Wei, X., 2013. Sensor temperature compensation technique simulation based on BP neural network. Telkomnika, 11: 3304-3313.


  • Sadeghi, B.H.M., 2000. A BP-neural network predictor model for plastic injection molding process. J. Mater. Process. Technol., 103: 411-416.


  • Kennedy, J. and R. Eberhart, 1995. Particle swarm optimization. Proceedings of the International Conference on Neural Networks, Volume 4, November 27-December 1, 1995, Perth, WA, Australia, pp: 1942-1948.


  • He, J. and H. Guo, 2013. A modified particle swarm optimization algorithm. Telkomnika, 11: 6209-6215.


  • Yan, X., Q. Wu and H. Liu, 2013. Orthogonal particle swarm optimization algorithm and its application in circuit design. Telkomnika, 11: 2926-2932.


  • Zhao, C.Y., Z.B. Yan and X.G. Liu, 2011. Improved adaptive parameter particle swarm optimization algorithm. J. Zhejiang Univ. (Eng. Sci.), 39: 1039-1042.
