Research Article
 

Computing of Compressible Flow Using Neural Network Based-on Dual-Level Clustering



Nameer N. EL-Emam and Nadia Y. Yousif
 
ABSTRACT

This study presents a new technique based on the back-propagation algorithm and Adaptive Neural Networks (ANN) to compute a compressible flow, represented by velocity profiles, through symmetrical double-step channels. The technique adopts a new clustering algorithm named Dual-level Clustering (DC), which is based on the speed of flow, the flow deflection and the average false diffusion error. A learning system of three stages is employed: the first two stages run simultaneously while the third works as an optimizer. The first stage constructs a finite element analysis employing adaptive incremental loading to select appropriate patterns effectively. The second stage concerns an ANN with Modified Adaptive Smoothing Error (MASE), whereas the third stage classifies a set of patterns into DC to reduce the effect of errors. The proposed training algorithm is fast and the simulation results of the learning system are in harmony with available previous works. The success of this algorithm can be attributed to three reasons. The first is the employment of an ANN, which is an excellent approximator, especially when training starts from laminar flow patterns. The second is the speed of training due to the use of an ANN with learning rate α = 0.7 on a small number of selected patterns (464). The third is that stability and accuracy are satisfied for a high range of Reynolds numbers (up to Re = 4500) due to the use of MASE and DC based on the false diffusion error.


 
  How to cite this article:

Nameer N. EL-Emam and Nadia Y. Yousif, 2009. Computing of Compressible Flow Using Neural Network Based-on Dual-Level Clustering. Journal of Applied Sciences, 9: 2501-2518.

DOI: 10.3923/jas.2009.2501.2518

URL: https://scialert.net/abstract/?doi=jas.2009.2501.2518
 

INTRODUCTION

A computer simulation of compressible fluid flow is of particular importance in designing pipe networks, channels, diffusers and aerodynamic bodies using the Navier-Stokes Equations (NSE), which are non-linear Partial Differential Equations (PDEs). These equations have the widest applications, as they govern the motion of every fluid, whether a gas, a liquid, or a plasticized solid material acted upon by forces causing it to change shape. The popular methods for the numerical solution of PDEs are Finite Difference Analysis (FDA), Finite Element Analysis (FEA), Boundary Element Analysis (BEA) and Finite Volume Analysis (FVA) (Glowinski and Neittaanmaki, 2008).

The earliest solution of the NSE used a non-simultaneous solver through FEA to implement the velocity-vorticity (u-v-ω) formulation (Singh and Li, 2007). In recent years, the FEA has been employed quite extensively in predicting laminar, transition and turbulent flow (Glowinski and Neittaanmaki, 2008). It is also costly and not easy to solve, especially for a high range of Reynolds numbers (Re), due to the long processing time required to attain convergence. Moreover, it is very expensive with respect to the storage for the refined mesh points (EL-Emam, 2006; EL-Emam and Shaheed, 2008).

Recently, many researchers have demonstrated that a neural network approach is fast and reliable for solving complex prediction problems (Vodinh et al., 2005). This approach is based on many types of architectures, such as an artificial neural network with single/multiple hidden layer(s) (Lefik and Wojciechowski, 2005; Meybod and Beigy, 2002). Developing neural models in fluid applications was implemented by Golcu (2006) to study the head-flow curves of deep well pump impellers with splitter blades.

Neural networks with a clustering approach were developed by many researchers. Some methods are classified as agglomerative hierarchical methods, such as single linkage, complete linkage, average linkage, median linkage, centroid, Ward and others (He et al., 2005). Frossyniotis et al. (2005) applied a multi-clustering method based on combining several runs of a clustering algorithm to obtain a distinct partition of the data which is not affected by the initialization.

In this study, a Dual-level Clustering (DC) technique based on the back-propagation algorithm and an Adaptive Neural Network (ANN) with Modified Adaptive Smoothing Error (MASE) is proposed to compute a compressible flow represented by velocity profiles through symmetrical double-step channels. The clustering is based on the speed of flow, the flow deflection and the average false diffusion error. The clusters and their sub-clusters are fixed to three according to flow regime (laminar, transition and turbulent), while the number of patterns for each sub-cluster changes according to DC. The proposed training algorithm overcomes a large amount of training time, instability and inaccuracy of results and a large number of patterns for each cluster.

PROBLEM DEFINITION

Fluid agitation and mixing can be generated by the fluid flowing from one step to another in symmetrical double-step channels when a sudden expansion of the flow diameter occurs (Fig. 1). This, however, increases the main velocity of the separating flow that is produced by the other step. Such a problem presents an even greater challenge to the stability and convergence capability of the solution procedure. Our motivation is to enhance the stability and convergence criteria through implementing the DC algorithm with the ANN approach.

PROBLEM FORMULATIONS

Equations 1-3 represent the governing equations (NSEs) for two-dimensional steady state compressible flow in terms of the primitive variables u-v-p (Glowinski and Neittaanmaki, 2008; Vodinh et al., 2005; Eker and Serhat, 2006), where u-v are velocity functions, p is the pressure function, μ is the fluid dynamic viscosity, ρ is the fluid density and Re is defined as ρVd/μ, with d being a characteristic length chosen to be the width of the channel and V the inlet velocity.

[Equation 1: image not reproduced] (1)

[Equation 2: image not reproduced] (2)

[Equation 3: image not reproduced] (3)

The second order velocity equations are obtained by manipulating the vorticity ω and continuity equation (Eq. 4, 5):

[Equation 4: image not reproduced] (4)

[Equation 5: image not reproduced] (5)

The vorticity transport equation (Eq. 6) is obtained by taking the curl of the momentum equations and eliminating the pressure term.

[Equation 6: image not reproduced] (6)

Where:

[Equation 7: image not reproduced] (7)

NUMERICAL SOLUTION OF FLUID FLOW USING FEA WITH ADAPTIVE INCREMENTAL LOADING

FEA requires that the domain under consideration is to be partitioned into a number of elements which are defined by a certain fixed topology in terms of nodes. Breaking the original domain into a set of elements is not an easy job, especially for problems with moving boundaries, free surface, or complex boundary (Glowinski and Neittaanmaki, 2008) as in Fig. 1a-c.

Obviously, numerical formulas of Eq. 4-6 are always subject to evaluation with unsatisfactory accuracy and possible errors. An approximate solution of the numerical formula using FEA gives high accuracy and stability when adaptive mesh points are considered, as in Fig. 1, to overcome the problem of discretization error. The amount of this error depends on four factors: the order of the shape function, the size of the elements, the shape of the elements and the arrangement of the elements in the domain (El-Emam, 2006; El-Emam and Abdul Shaheed, 2008).

The numerical solution agrees closely with the experimental results introduced by Glowinski and Neittaanmaki (2008). Unfortunately, this solution needs considerable time for the computation of Eq. 4-6 to find the fluid flow behavior (patterns) for all ranges of Re (0-4500). Consequently, adaptive incremental loading is introduced in this study to reduce the number of patterns effectively.

Fig. 1: (a-c) Three types of symmetrical double steps channels with mesh points

The numerical solution of u-v-ω functions on channels such as those shown in Fig. 1 requires the following boundary conditions:

Condition 1: ∀ (x, y) ∈ AB (inlet), Dirichlet (C0–BC) is used to find u (x, y) and v (x, y) as follows:

[Equation 8: image not reproduced] (8)

Condition 2: ∀ (x, y) ∈ BC|CD|DE|EF|FG (Lower solid boundaries) and by using Dirichlet (C0–BC), we have:

[Equation 9: image not reproduced] (9)

where ω(x, y) is not specified on this side of the boundary; FEM is used to find its approximate value.

Condition 3: ∀ (x, y) ∈ GH and by using C1 B.C and C0 B.C for u (6, y) and v (6, y), respectively, we obtain:

[Equation 10: image not reproduced] (10)

where, (n) refers to the normal boundary direction.

Condition 4: ∀ (x, y) ∈ AH and by using C1 B.C and C0 B.C for u (x, 1) and v (x, 1), respectively, we have:

[Equation 11: image not reproduced] (11)

Equations 12-14 are obtained from Eq. 4-6 with the use of Galerkin weighted residual method based on adaptive incremental loading (Glowinski and Neittaanmaki, 2008; Kaczmarczyk and Waszczyszyn, 2005), where, N1i, N2i and N3i are linear Lagrange polynomials (weighted functions) for the quadrilateral elements (Vodinh et al., 2005) in the channel’s domain Ω.

[Equation 12: image not reproduced] (12)

[Equation 13: image not reproduced] (13)

[Equation 14: image not reproduced] (14)

FEA with the modified Newton's method is used to find the variation vectors δu, δv and δω (Eq. 15-17). The δRe is the adaptive incremental loading and its value changes from one pattern to the next according to the value of a residual-based quantity (image not reproduced), where R1i, R2i and R3i are the residuals at node i.

[Equation 15: image not reproduced] (15)

[Equation 16: image not reproduced] (16)

[Equation 17: image not reproduced] (17)

Substituting the shape functions yields the following:

[Equation 18: image not reproduced] (18)

Where:

[Equation 19: image not reproduced] (19)

[Equation 20: image not reproduced] (20)

Fig. 2: Cooperation between FEA and ANN

[Equation 21: image not reproduced] (21)

and

[Equation 22: image not reproduced] (22)

Where:

[Equation 23: image not reproduced] (23)

[Equation 24: image not reproduced] (24)

[Equation 25: image not reproduced] (25)

and

[Equation 26: image not reproduced] (26)

Where:

[Equation 27: image not reproduced] (27)

[Equation 28: image not reproduced] (28)

[Equation 29: image not reproduced] (29)

In this study, simultaneous solvers on the numerical model represented by Eq. 18, 22 and 26 are implemented to find the fluid flow behavior as an input pattern to the proposed three stages (Fig. 2).
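The modified Newton step with adaptive incremental loading described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `assemble_jacobian`, `assemble_residual` and the δRe adaptation rule are hypothetical placeholders, since the assembled matrices of Eq. 18, 22 and 26 and the paper's exact δRe expression are not reproduced here.

```python
import numpy as np

def newton_increment(assemble_jacobian, assemble_residual, x,
                     tol=1e-6, max_iter=50):
    """One load step of a modified Newton solve: J(x) dx = -R(x).

    assemble_jacobian / assemble_residual are problem-specific callbacks
    (placeholders for the FEA matrices of Eq. 15-17).
    """
    J = assemble_jacobian(x)          # modified Newton: Jacobian frozen at step start
    for _ in range(max_iter):
        R = assemble_residual(x)
        if np.linalg.norm(R) < tol:
            break
        dx = np.linalg.solve(J, -R)   # variation vector (du, dv, dw)
        x = x + dx
    return x

def adapt_increment(d_re, residual_norm, target=1.0):
    """Hypothetical adaptation rule: shrink the Reynolds-number increment
    when the residual grows, enlarge it when convergence comes easily."""
    return d_re * min(2.0, max(0.5, target / max(residual_norm, 1e-12)))
```

Each converged load step yields one velocity pattern at the current Re; the next pattern is obtained at Re + δRe, with δRe adapted from the step's residual history.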

NEURAL NETWORKS MODELING

For nonlinear rheological phenomena, the neural network approach is promising as an alternative technique, where a neural network consists of a large number of highly interconnected neurons. A fully connected multi-layered neural network is presented in this study. Basically, four key parameters characterize a neural network architecture: the number of layers, the number of neurons in each layer, the kind of connectivity among layers and the kind of activation function used within each neuron (Golcu, 2006; Kaczmarczyk and Waszczyszyn, 2005; Payne, 2006). A neural network can in principle have any number of layers, each consisting of a varying number of neurons. The most common neural network architecture, comprising three layers (input, hidden and output) (Hovakimyan et al., 2002), is used in this study.

Figure 3 shows a simple three-layer (n-p-m) neural network architecture with n neurons in the input layer, p neurons in the hidden layer and m neurons in the output layer.

Usually, the number of neurons in the input layer is equal to the number of available features. The present training algorithm uses Rek+1 and the velocity functions u-v for Rek as input features, where k is the current iteration index.

Fig. 3: A 3-Layers neural network architecture with full connections

Fig. 4: Neural network layers with BPA

The number of neurons in the output layer equals the number of output features, represented by the velocity profile for Rek+1. On the other hand, the number of neurons in the hidden layer needs to be adjusted during training. Usually, there is a trade-off between the accuracy of the output and the number of neurons in the hidden layer: complex problems require a large number of neurons in the hidden layer, and the connectivity among layers can be full or partial (Frean et al., 2006). In the proposed training algorithm, a fully connected architecture is applied, where each neuron in a layer is connected to all neurons in the next layer, as shown in Fig. 4.

TRAINING OF COMPRESSIBLE FLUID FLOW

The training process presented in this study employs back-propagation with ANN through three steps: the feedforward of the input training pattern, the back-propagation of the associated error (BP) and the adjustment of the weights (ADJ). Figure 4 shows the (n-p-m) neural network, where a solid arrow refers to many-to-one or one-to-many transitions, a dotted arrow refers to a one-to-one transition and a dashed arrow refers to a send action for the adjustment process.

During the feedforward step, each input neuron Ii, i = 1…n, receives an input signal and broadcasts this signal to each of the hidden neurons Hj, j = 1…p, as in Eq. 30.

[Equation 30: image not reproduced] (30)

Each hidden neuron computes its activation and sends its signal to each of the output neurons Ok, for k = 1…m. Each output neuron Ok computes its activation as in Eq. 31 to form the output signal of the network.

[Equation 31: image not reproduced] (31)

The type of activation function implemented in this work is the bipolar sigmoid, working in the range [-1, 1]. This function is given in Eq. 32 and its first derivative is shown in Eq. 33.

f(x) = 2 / (1 + e^(-x)) - 1 (32)

f '(x) = ½ [1 + f(x)] [1 - f(x)] (33)
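The bipolar sigmoid and its derivative are straightforward to implement; a small sketch (the function names are ours, not the paper's):

```python
import math

def bipolar_sigmoid(x):
    # Eq. 32: maps any real input into the open interval (-1, 1)
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def bipolar_sigmoid_prime(x):
    # Eq. 33: the derivative is expressed through the function value itself,
    # which avoids recomputing the exponential during back-propagation
    f = bipolar_sigmoid(x)
    return 0.5 * (1.0 + f) * (1.0 - f)
```

Expressing the derivative through f(x) is what makes back-propagation cheap: the activations computed during the feedforward pass are reused directly in the error factors of Eq. 34 and 36.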

The first derivative of the error factor, represented by Δk, k = 1…m (Eq. 34), is computed to give the associated error for the specific pattern at the output layer (Meybod and Beigy, 2002; Hovakimyan et al., 2002). This error is used to adjust the weight Wjk between the hidden neuron Hj and the output neuron Ok, as illustrated in Eq. 35, where β ∈ [0, 1] is the damping parameter.

[Equation 34: image not reproduced] (34)

[Equation 35: image not reproduced] (35)

Similarly, the first derivative of the error factor Δj, j = 1…p (Eq. 36), is computed to give the associated error for the specific pattern at the hidden layer. The adjustment to the weight Vij between input neuron Ii and hidden neuron Hj is based on the factor Δj and the activation of the input neuron, as illustrated in Eq. 37.

[Equation 36: image not reproduced] (36)

[Equation 37: image not reproduced] (37)

The adjustments to the weight functions are defined in Eq. 38-39.

[Equation 38: image not reproduced] (38)

[Equation 39: image not reproduced] (39)

In this study, β = 0.1 is assumed and an Adaptive Learning rate (AL) is used to improve the speed of training by changing the learning rate ∏± during the training process, as shown in Eq. 40a (EL-Emam, 2006; Lefik and Wojciechowski, 2005; Meybod and Beigy, 2002; Golcu, 2006; Maira et al., 2002).

[Equation 40a: image not reproduced] (40a)

The training is repeated several times to update the old values of the two-dimensional arrays V and W until the convergence criterion given in Eq. 40b is satisfied.

[Equation 40b: image not reproduced] (40b)
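The feedforward, error back-propagation and adaptive-rate steps above can be sketched as one training loop. This is a minimal illustration under stated assumptions, not the paper's implementation: the adaptive rule used here (raise the rate by a constant after an improving epoch, scale it by γ after a worsening one, with K = 0.015 and γ = 0.8 as reported later in the results) merely stands in for Eq. 40a, whose exact form is not reproduced, and the convergence test is a plain sum-of-squares threshold standing in for Eq. 40b.

```python
import numpy as np

rng = np.random.default_rng(0)

def train(patterns, n, p, m, lr=0.7, beta=0.1, k_up=0.015, gamma=0.8,
          tol=1e-3, max_epochs=2000):
    """Minimal back-propagation sketch for the n-p-m network of Fig. 4.

    patterns: list of (input_vector, target_vector) pairs.
    beta is kept for symmetry with the paper's damping parameter but the
    MASE smoothing of Eq. 35/37 is not reproduced here.
    """
    V = rng.uniform(-0.5, 0.5, (n, p))   # input -> hidden weights
    W = rng.uniform(-0.5, 0.5, (p, m))   # hidden -> output weights
    act = lambda x: 2.0 / (1.0 + np.exp(-x)) - 1.0     # bipolar sigmoid (Eq. 32)
    dact = lambda f: 0.5 * (1.0 + f) * (1.0 - f)       # derivative via f (Eq. 33)
    prev_err = np.inf
    for _ in range(max_epochs):
        err = 0.0
        for x, t in patterns:
            h = act(x @ V)               # feedforward to hidden layer (Eq. 30)
            o = act(h @ W)               # feedforward to output layer (Eq. 31)
            dk = (t - o) * dact(o)       # output-layer error factor (Eq. 34)
            dj = (dk @ W.T) * dact(h)    # hidden-layer error factor (Eq. 36)
            W += lr * np.outer(h, dk)    # weight adjustments (Eq. 38-39)
            V += lr * np.outer(x, dj)
            err += float(np.sum((t - o) ** 2))
        if err < tol:                    # stand-in for Eq. 40b
            break
        lr = lr + k_up if err < prev_err else lr * gamma   # assumed AL rule
        prev_err = err
    return V, W, err
```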

In addition, the present study introduces a new technique to improve the training process. This technique is implemented to reduce the effect of errors and to speed up training by using DC. The proposed algorithm performs two clustering levels: the first level includes three clusters depending on the type of flow (laminar, transition and turbulent), while the second level (sub-clusters) depends on the following:

The average of all angles θ3 over all elements in the specific pattern (Eq. 46), where the angle θ3 represents the inclination of the velocity from the element's direction, as in Fig. 5
The patterns are ordered in ascending order within each sub-cluster according to the average false diffusion error (Eq. 42a-b). In the next section, this point is discussed to show the effect of sorted patterns on the performance of the training algorithm

Figure 5 shows the inclination of an element, represented by the ξ-η axes, with respect to the X-Y axes and the velocity direction V, where θ1 is the angle between the ξ-axis and the X-axis, θ2 is the angle between the velocity vector V and the X-axis and θ3 is the angle between the ξ-axis and the velocity vector V.
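Given these definitions, θ3 follows from plane geometry. A small sketch (the function name and vector representation are ours, not the paper's):

```python
import math

def flow_deflection(xi_axis, velocity):
    """Angle theta3 between an element's xi-axis and the velocity vector
    (Fig. 5), obtained from theta1 and theta2 measured against the X-axis."""
    theta1 = math.atan2(xi_axis[1], xi_axis[0])    # xi-axis vs X-axis
    theta2 = math.atan2(velocity[1], velocity[0])  # velocity vs X-axis
    theta3 = abs(theta2 - theta1)
    return min(theta3, 2 * math.pi - theta3)       # keep the angle in [0, pi]
```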

Figure 6a shows a finite state automaton transition graph that formally describes the first phase of the present training algorithm, where T is a transition, S is the smoothing error and N is moving to the next pattern.

In this study, three clusters denoted by Ci, i = 1, 2, 3, are proposed. Each cluster includes a number of selected appropriate patterns corresponding to diverse Re, equal to m’, m” and m”’ patterns for the laminar, transition and turbulent clusters, respectively.

Fig. 5: Velocity inclination for one element

Table 1: Clusters’ intervals
[Table 1 contents: image not reproduced]

These patterns are selected by using FEA with adaptive incremental loading on a specific type of double-step channel. In the next section, we show that a small number of patterns is enough to produce an effective learning system for compressible flow. The proposed intervals for each cluster used in this work are shown in Table 1.
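The cluster assignment of a pattern by its Reynolds number can be sketched as follows. Note that the exact interval bounds of Table 1 are not reproduced in this version of the document; the defaults below (laminar up to 2000, transition up to 3000) are illustrative assumptions consistent only with the overall range 0-4500.

```python
def cluster_of(re, laminar_max=2000, transition_max=3000):
    """Map a Reynolds number to one of the three flow-regime clusters.
    The interval bounds are assumed placeholders for Table 1."""
    if re <= laminar_max:
        return 1   # laminar cluster C1
    if re <= transition_max:
        return 2   # transition cluster C2
    return 3       # turbulent cluster C3
```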

Figure 6b shows the finite state automaton transition graph for the second phase of the training algorithm, where the dashed state refers to the final state of the transition graph. In addition, the transition label MCE refers to the maximum error of clusters (Eq. 44a, b), the label ME refers to the maximum error of neurons (Eq. 44c) and NF refers to the next value of the average false diffusion error (Eq. 42b). Additionally, the pattern symbol in Fig. 6b represents the jth pattern in the ith cluster and sth sub-cluster for all i = 1, 2, 3 and s = 1, 2, 3. The numbers of patterns in sub-cluster 1 for the laminar, transition and turbulent clusters are equal to N’, N” and N”’, respectively; M’, M” and M”’, respectively, for sub-cluster 2 and K’, K” and K”’, respectively, for sub-cluster 3, where:

[Equation 41: image not reproduced] (41)

Equation 42a is used to compute the error Γe for each element in the channel domain, where this error depends on Re, mesh size, channel shape and inclination of velocity direction to the grid.

Fig. 6: Transition automata graphs for the (a) 1st phase of training on three clusters and (b) 2nd phase of the training algorithm

The maximum value of Γe is reached when the angle between the velocity vector V and the X-coordinate axis is equal to π/4.

[Equation 42a: image not reproduced] (42a)

[Equation 42b: image not reproduced] (42b)

THE TRAINING ALGORITHM

The training algorithm is implemented in two phases. The first phase finds appropriate patterns along with their training process and the second phase reduces the effect of errors. The second phase requires the DC sub-algorithm to apply hierarchical clustering (He et al., 2005; Silvestre et al., 2008) for each cluster. This kind of clustering works iteratively to agglomerate patterns into three sub-clusters and is based on two steps: the first finds the amount of flow deflection for each pattern, while the second sorts the patterns in ascending order within each sub-cluster according to the average false diffusion error. The two phases of the training algorithm and the sub-algorithm are presented below:

First phase of the training algorithm: Find appropriate patterns with their training process as the following steps:

Step 1: Let q1 be the index of external iterations for training clusters
  Let q2 be the index of internal iterations for training patterns
  Let u be the size of the current cluster
  Let ρ be iteration’s index for training patterns
  Let σ be the maximum number of iterations for training patterns, which is large enough
  Let ∏ max1 be maximum number of iterations for the external loop (from cluster to next cluster)
  Let ∏ max2 be maximum number of iterations for the internal loop (from pattern to next pattern)
  Let m’ = m” = m”’ = 1// the initial setting of the
    number of patterns for each cluster
  Let δRe1 = 50; //the initial value of the
    incremental loading

Step 2: For (q1=0; q1 < ∏ max1; next q1)

Step 2-1: Let i be the index of the current cluster

Step 2-2: If (i = 1) then u = m’;
    elseif (i = 2) then u = m”;
    else u = m”’;

Step 2-3: For (q2=0; q2 < ∏ max2; next q2)

Step 2-3-1: j = (q2+1) mod u; // since there are u patterns in the cluster

Step 2-3-2: Implement FEA to find the velocity pattern for Rej+1 (Eq. 26-29), if it has not been calculated before

Step 2-3-3: For ( ρ =1; ρ <= σ; next ρ )

Step 2-3-3-1: Apply training on the current pattern;

Step 2-3-3-2: Update the value of weight functions (Eq. 38-39), with the smoothing (Eq. 35, 37) for the cluster Ci.

Step 2-3-3-3: Use Eq. 40b to check the convergence of the training process for the current pattern

Step 2-3-3-3-1: If convergence is satisfied then break training.

Step 2-3-4: Implement the proposed adaptive incremental loading (update formula image not reproduced)

Step 2-3-5: Increment by one the number of patterns for the current cluster (m’, m” or m”’)

Step 3: Set the final number of patterns to each cluster, m’, m”, m”’, (Eq. 41).

Second phase of the training algorithm: Use the DC sub-algorithm to reduce the effect of errors through the following steps:

Step 1: Call the DC sub-algorithm (presented below).

Step 2: From the clusters domain, select the ith cluster Ci that has the maximum sum of sub-cluster errors (MSE) (Eq. 43-44)

[Equation 43: image not reproduced] (43)

Step 2-1: From the sub-clusters SC of cluster i, select the sub-cluster indexed s that has the maximum sum of neuron errors (Eq. 44a-b).

[Equation 44a: image not reproduced] (44a)

Step 2-1-1: For each pattern in the sub-cluster SCs, find the maximum sum of neuron errors (Eq. 44b).

[Equation 44b: image not reproduced] (44b)

Where:

[Equation 44c: image not reproduced] (44c)

Step 2-1-2: Apply training on the jth pattern

Step 3: If the clusters' errors are not flat, go to Step 2.
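The second phase above amounts to a greedy worst-first loop: pick the worst cluster, then its worst sub-cluster, then its worst pattern, and retrain on it. A minimal sketch, assuming `neuron_error` and `train_pattern` as placeholders for the network's error measure (Eq. 43-44) and training step, and a simple error threshold as the "errors are flat" stopping test:

```python
def phase_two(clusters, neuron_error, train_pattern, max_rounds=100, tol=1e-4):
    """Worst-first retraining sketch of the second phase.

    clusters: dict mapping cluster id -> dict of sub-cluster id -> pattern list.
    neuron_error(p): current accumulated error of pattern p (placeholder).
    train_pattern(p): one retraining pass on pattern p (placeholder).
    """
    for _ in range(max_rounds):
        # Eq. 43: cluster with the maximum sum of sub-cluster errors
        ci = max(clusters, key=lambda i: sum(
            neuron_error(p) for sc in clusters[i].values() for p in sc))
        # Eq. 44a: worst sub-cluster within that cluster
        s = max(clusters[ci], key=lambda s_: sum(
            neuron_error(p) for p in clusters[ci][s_]))
        # Eq. 44b: worst pattern within that sub-cluster
        worst = max(clusters[ci][s], key=neuron_error)
        if neuron_error(worst) < tol:      # errors flat: stop
            break
        train_pattern(worst)
```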

DC sub-algorithm: Apply hierarchical clustering for each cluster through the following steps:

Step 1: For each cluster Ci, i = 1, 2, 3 and its sub-clusters SCs, s = 1, 2, 3
Put SCs = { } ∀s

Step 1-1: For every pattern in Ci

Step 1-1-1: For every element e in the pattern, e = 1…100

Step 1-1-1-1: Find the angle θ3 (Eq. 45).

θ3 = |θ2 - θ1| (45)

Step 1-1-2: Find Av (θ3) (Eq. 46)

Av(θ3) = (1/100) Σ (e = 1…100) θ3(e) (46)

[Sub-cluster assignment rules: images not reproduced]

Step 2: Use the average false diffusion error (Eq. 42a-b) to sort the patterns of each sub-cluster s.
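The DC sub-algorithm can be sketched as below. The angle thresholds splitting the three sub-clusters are illustrative assumptions, since the paper's exact assignment rules (the expressions following Eq. 46) are not reproduced here; `av_theta3` and `false_diffusion` are placeholders for Eq. 46 and Eq. 42a-b.

```python
import math

def dual_level_clustering(patterns, av_theta3, false_diffusion):
    """Split one cluster's patterns into three sub-clusters by their
    average deflection angle Av(theta3), then sort each sub-cluster in
    ascending order of average false-diffusion error."""
    sub = {1: [], 2: [], 3: []}
    for p in patterns:
        a = av_theta3(p)
        if a < math.pi / 8:        # assumed bound: nearly grid-aligned flow
            sub[1].append(p)
        elif a < math.pi / 4:      # assumed bound: moderate deflection
            sub[2].append(p)
        else:                      # strong deflection
            sub[3].append(p)
    for s in sub:
        sub[s].sort(key=false_diffusion)   # ascending order within sub-cluster
    return sub
```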

RESULTS AND DISCUSSION

To construct a learning system for compressible fluid flow in two-dimensional channels, as shown in Fig. 1, it is necessary to find the number of patterns for each cluster and sub-cluster of those channels. These patterns are selected through the training system stages (Fig. 2). The numerical model using FEA with adaptive incremental loading is constructed from Eq. 18, 22 and 26, which are executed simultaneously to obtain these patterns.

The symmetrical double-step channel's domain is divided into finite elements using an adaptive mesh point technique (El-Emam, 2006; El-Emam and Shaheed, 2008), as shown in Fig. 1. This technique provides an adequate mesh point refinement in the critical region of the specific channel. A mesh of 100 elements was found sufficient in this study.

Fig. 7a: Symmetrical double backward step channel with expansion ratio = 2/3

Fig. 7b: Recirculation length in a single step channel vs Reynolds number

It is important to check whether the proposed numerical simulation using FEM with adaptive incremental loading leads to physically coherent and accurate results. As a consequence, the proposed approach is compared with the existing experimental and numerical literature (Glowinski and Neittaanmaki, 2008; Vodinh et al., 2005). This comparison is implemented on a step channel with an expansion ratio of 2/3 (Fig. 7a) to show the variation of the length of the separated region as a function of Re. The agreement appears to be fairly good over all Re numbers, as shown in Fig. 7b.

In addition, the dependence of the recirculation length (normalized with respect to the step height) on Re is shown in Fig. 7b. It is apparent that the ratio X/h increases almost linearly with Re, reaches a peak value (~7) under laminar conditions and settles at a constant value (~14) under transition/turbulent conditions.

This study concentrates on the training system and identifies the essential problems that play a basic role in training efficiency. Such problems are evaluated and eventually removed through the AL with DC technique. In this section, we define a set of factors to speed up the training system and to reduce the effect of errors.

Channel geometry factor: The training system is applied to the three types of channels shown in Fig. 1. The channel geometry affects the speed of training, as shown in Fig. 8, where we observe that channel 3 consumes the minimum number of iterations due to the fully sloped enlargements of the channel shape. This type of channel makes the two vortices near the steps slip to the outlet with minimum mixing, whereas channel 2 consumes the maximum number of iterations due to the partially sloped channel steps, which generate higher velocities near the wall and push the vortex region into the center of the channel, generating the maximum mixing. Reduction of errors is achieved when the training proceeds through the clusters starting from the laminar regime.

Fig. 8: Speed of training on three types of channels

Fig. 9: Speed of training with /without AL on (a) channel 1, (b) channel 2 and (c) channel 3

Rate of learning factor: Two approaches are proposed for the training process. The first uses an adaptive learning rate (AL) on the learning factor α according to Eq. 40a, where the values of the parameters K and γ are estimated as 0.015 and 0.8, respectively. The second fixes the value of the learning rate α. Figure 9a-c show that the training process with AL is more stable and faster than training without AL.
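Eq. 40a itself is not reproduced here (it appears only as an image in the source), so the sketch below uses a common error-driven adaptation rule as a stand-in: the rate grows by a small increment K while the training error falls and is damped by the factor γ when the error rises. The values K = 0.015 and γ = 0.8 match the parameters named above, but the functional form is an assumption.

```python
def adapt_learning_rate(alpha, err, prev_err, K=0.015, gamma=0.8):
    """Hypothetical stand-in for Eq. 40a: increase alpha additively by K
    while the training error decreases; otherwise damp it
    multiplicatively by gamma."""
    if err < prev_err:
        return alpha + K
    return alpha * gamma
```

Under such a rule α rises during stable convergence and shrinks quickly (by the factor γ) whenever training overshoots, which is consistent with the stability reported for AL in Fig. 9.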

The initial rate of learning factor: This factor is very important both to speed up the training process and to achieve high accuracy. Accordingly, we study the behavior of the present training process with respect to the number of iterations and the error distribution. Figure 10 shows the speed of the training process for different values of the learning rate α on the three channels. The maximum peak appears at α = 0.1, while the minimum peak appears at α = 0.8. The best result is reached when α = 0.7 is selected, owing to the minimum error distribution (Fig. 11).

Using DC factor: In the second phase of the proposed training algorithm, DC is used to reduce the effects of errors, together with the ordering of patterns within each sub-cluster. Both play basic roles in the amount of damping error, so it is necessary to sort the patterns of each sub-cluster in ascending order of the false diffusion error. On the other hand, DC is not promising when patterns are selected randomly or sorted according to the Euclidean distance presented in Eq. 47a, where IP and OP are the input and output signals of the pattern P.

[Eq. 47a (Euclidean distance between patterns): rendered as an image in the original]
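Eq. 47a is only available as an image in the source; the sketch below assumes the standard Euclidean form over the input signals IP and output signals OP of two patterns, which matches the surrounding description but could not be verified against the original rendering.

```python
import math

def pattern_distance(ip1, op1, ip2, op2):
    """Assumed form of Eq. 47a: Euclidean distance between two patterns
    over their input signals (IP) and output signals (OP)."""
    sq = sum((a - b) ** 2 for a, b in zip(ip1, ip2))
    sq += sum((a - b) ** 2 for a, b in zip(op1, op2))
    return math.sqrt(sq)
```

As noted in the text, ordering patterns by this distance (or picking them randomly) did not make DC effective; ordering by the false diffusion error did.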

Figure 12a-c show the comparison between the sum of pattern errors and the rate of learning with/without DC and with/without sorting the patterns of each sub-cluster, for the three channels.
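The per-sub-cluster sorting step can be sketched as follows. The data layout is an assumption: the paper specifies the criterion (ascending false diffusion error within each sub-cluster) but not a concrete representation of the patterns.

```python
def order_subclusters(subclusters):
    """Sort the patterns of each sub-cluster in ascending order of their
    false diffusion error before training. `subclusters` maps a
    sub-cluster id to a list of (false_diffusion_error, pattern) tuples
    (assumed layout)."""
    return {
        cid: sorted(patterns, key=lambda p: p[0])
        for cid, patterns in subclusters.items()
    }
```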

In addition, the learning process with DC works well when a pattern with the maximum total error (Eq. 43, 44) is selected, rather than the one that contains the neuron with the maximum error (Eq. 47b-d).

[Eq. 47b: rendered as an image in the original]

[Eq. 47c: rendered as an image in the original]

Fig. 10: Speed of training process with respect to the rate of learning

Fig. 11: Sum of pattern’s errors with respect to the rate of learning

[Eq. 47d: rendered as an image in the original]

Figure 13a-c show the level of error when the pattern is selected according to Eq. 43, 44 and according to Eq. 47b-d for the three channels. The error factor oscillates through the steps of damping error, whereas the training process is more stable when the pattern is selected according to Eq. 43, 44. In addition, the error remains the same for all types of channels when Eq. 43, 44 are implemented, in contrast to Eq. 47b-d, which show instability and change from one channel to another. The maximum error occurs at channel 2, while the minimum error occurs at channel 3.
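The two selection rules compared in Fig. 13 can be contrasted in a small sketch: picking the pattern whose summed error over all output neurons is largest (the stable rule, in the spirit of Eq. 43, 44) versus picking the pattern that merely contains the single worst neuron (Eq. 47b-d). The error-matrix layout is an assumption.

```python
def select_by_total_error(errors):
    """Stable rule: pick the pattern index whose total error across all
    output neurons is maximal. `errors[p][n]` is the absolute error of
    neuron n for pattern p (assumed layout)."""
    return max(range(len(errors)), key=lambda p: sum(errors[p]))

def select_by_worst_neuron(errors):
    """Oscillatory rule: pick the pattern containing the single neuron
    with the maximum error."""
    return max(range(len(errors)), key=lambda p: max(errors[p]))
```

The two rules can disagree: a pattern with one large outlier neuron may beat a pattern whose error is moderately high everywhere, which is one plausible reading of the oscillation seen with Eq. 47b-d.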

Fluid flow behavior and number of patterns factor: The behavior of the fluid flow is also affected when patterns are selected using adaptive incremental loading γRe. Figure 14a shows the number of patterns for each cluster in each channel and Fig. 14b-d show the number of patterns for each sub-cluster in the three channels. Channel 2 needs the maximum number of patterns, 223, apportioned among the three clusters C1, C2 and C3 with 140, 43 and 40 patterns, respectively. Channel 1 needs 137 patterns, apportioned among C1, C2 and C3 with 70, 37 and 30 patterns, respectively. Finally, channel 3 needs the minimum number of patterns, 104, apportioned among C1, C2 and C3 with 43, 31 and 30 patterns, respectively.
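The pattern counts reported above can be tabulated directly; the per-channel totals, and the overall total of 464 patterns quoted in the abstract, follow from the per-cluster figures.

```python
# Patterns per cluster (C1, C2, C3) for each channel, as reported in Fig. 14
patterns = {
    "channel 1": (70, 37, 30),   # 137 total
    "channel 2": (140, 43, 40),  # 223 total (maximum)
    "channel 3": (43, 31, 30),   # 104 total (minimum)
}

totals = {ch: sum(cs) for ch, cs in patterns.items()}
grand_total = sum(totals.values())  # 464 patterns overall
```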

Fig. 12: Comparison between sums of pattern’s errors vs. rate of learning for (a) channel 1, (b) channel 2 and (c) channel 3

Fig. 13: Comparison between methods of selecting patterns for (a) channel 1, (b) channel 2 and (c) channel 3

Fig. 14: No. of patterns for each (a) cluster at each channel, (b) sub-cluster at channel 1, (c) sub-cluster at channel 2 and (d) sub-cluster at channel 3

COMPUTER SIMULATION RESULTS

The proposed learning system calculates the velocity of the compressible flow through double-step channels for various values of Re in the interval 100-4000; Fig. 15-17 show these results. Small vortices appear at the steps of the channels. As Re increases, the vortex region extends in length and the upper vortex spins over the lower vortex. The interaction of the two vortices diverts both of them slightly upward toward the center of the channel, especially for channel 2. The interaction becomes stronger as Re increases.

Fig. 15: (a-d) Velocity profile for compressible flow through channel 1 (Re = 100-4000)

Fig. 16: (a-d) Velocity profile for compressible flow through channel 2 (Re = 100-4000)

Fig. 17: (a-d) Velocity profile for compressible flow through channel 3 (Re = 100-4000)

In addition, new small vortices are generated between the two main vortices, especially for channels 1 and 2.

CONCLUSION

The present study demonstrates a new training algorithm based on two phases through a successful ANN with DC. The proposed algorithm is used to predict compressible fluid flow through a specific type of channel geometry (Fig. 1). FEA with a new approach named adaptive incremental loading is utilized in the first phase of the training algorithm to prepare appropriate patterns; it runs simultaneously alongside a training system using ANN with DC. The proposed structure of the neural network, which works through the back-propagation algorithm, includes one input, one hidden and one output layer. It has been shown that it is possible to use few patterns to simulate data with reasonable accuracy for the three clusters (laminar, transition and turbulent) represented by the velocity profile function, and it gives encouraging results in many fields of application (fluid mixing, fluid agitation, etc.) (Yu and Morales, 2005; Rajkumar and Bardina, 2002; Creuse and Mortazavi, 2004). The success of the proposed training algorithm can be attributed to three factors. The first is the employment of a neural network, which is an excellent approximator for any type of flow through any channel geometry (regular or irregular), especially if the training process starts from laminar flow patterns (cluster 1). The second factor is the speed of training, due to the use of ANN with a proper value of AL on the minimum number of selected patterns. The third factor is satisfying the stability and the accuracy over a high range of Re, due to the use of MASE and DC based on the false diffusion error. Finally, the channel's shape plays a basic role in the speed of training, the accuracy of the results and the number of patterns.

ACKNOWLEDGMENT

The authors would like to thank Prof. R.H. Al-Rabeh of Cambridge University for his support and help. This support is gratefully acknowledged.

REFERENCES
1:  Creuse, E. and I. Mortazavi, 2004. Simulation of low Reynolds number flow control over a backward-facing step using pulsed inlet velocities. Applied Math. Res. Express, 4: 133-152.

2:  Eker, E. and A. Serhat, 2006. Simulation of fluid flow in synthetic fractures. Transport Porous Media, 65: 363-384.

3:  EL-Emam, N.N., 2006. Reallocation of mesh points in fluid problems using back-propagation algorithm. J. Inform., 9: 175-184.

4:  EL-Emam, N.N. and R.A. Shaheed, 2008. Computing an adaptive mesh in fluid problems using neural network and genetic algorithm with adaptive relaxation. Int. J. Artif. Intell. Tools, 17: 1089-1108.

5:  Frean, M., M. Lilley and P. Boyle, 2006. Implementing Gaussian process inference with neural networks. Int. J. Neural Syst., 16: 321-327.

6:  Frossyniotis, D.S., C. Pateritsas and A. Stafylopatis, 2005. A multi-clustering fusion scheme for data partitioning. Int. J. Neural Syst., 15: 391-401.

7:  Glowinski, R. and P. Neittaanmaki, 2008. Partial Differential Equations: Modelling and Numerical Simulation. Springer, New York.

8:  Golcu, M., 2006. Neural network analysis of head-flow curves in deep well pumps. Energy Convers. Manage., 47: 992-1003.

9:  He, Z., X. Xu and S. Deng, 2005. TCSOM: Clustering transactions using self-organizing map. Neural Process. Lett., 22: 249-262.

10:  Hovakimyan, N., F. Nardi and A.J. Calise, 2002. Adaptive output feedback control of uncertain systems using single hidden layer neural networks. IEEE Trans. Neural Netw., 13: 1420-1431.

11:  Kaczmarczyk, L. and Z. Waszczyszyn, 2005. Neural procedures for the hybrid FEA/NN analysis of elastoplastic plates. Comput. Assist. Mech. Eng. Sci., 12: 379-391.

12:  Lefik, M. and M. Wojciechowski, 2005. Artificial neural network as a numerical form of effective constitutive law for composites with parameterized and hierarchical microstructure. Comput. Assist. Mech. Eng. Sci., 12: 183-194.

13:  Meybod, M.R. and H. Beigy, 2002. New learning automata based algorithms for adaptation of backpropagation algorithm parameters. Int. J. Neural Syst., 12: 45-67.

14:  Payne, S.J., 2006. A model of the interaction between autoregulation and neural activation in the brain. Math. Biosci., 204: 260-281.

15:  Rajkumar, T. and J. Bardina, 2002. Prediction of aerodynamic coefficients using neural networks for sparse data. Proceedings of the FLAIRS Conference, May 14-16, 2002, Florida, USA, pp: 242-246.

16:  Silvestre, M.R., S.M. Oikawa, F.H.T. Vieira and L.L. Ling, 2008. A clustering based method to stipulate the number of hidden neurons of MLP neural networks: Applications in pattern recognition. Tend. Mat. Appl. Comput., 9: 351-361.

17:  Singh, R.P. and Z. Li, 2007. A mass conservative streamline tracking method for three-dimensional CFD velocity fields. J. Flow Visual. Image Process., 14: 107-120.

18:  Yu, W. and A. Morales, 2005. Neural networks for the optimization of crude oil blending. Int. J. Neural Syst., 15: 377-389.

19:  Vodinh, L., B. Podvin and P. Le Quéré, 2005. Flow estimation using neural network. Proceedings of Turbulence and Shear Flow Phenomena, June 2005, Williamsburg, VA, USA.