INTRODUCTION
A computer simulation of compressible fluid flow is of particular importance
in the design of pipe networks, channels, diffusers and aerodynamic bodies using the
Navier-Stokes Equations (NSE), which are nonlinear Partial Differential Equations (PDEs).
These equations have the widest applications, as they govern the motion of every
fluid, whether a gas, a liquid or a plasticized solid material acted upon by forces
causing it to change shape. The popular methods for the numerical solution
of PDEs are Finite Difference Analysis (FDA), Finite Element Analysis (FEA),
Boundary Element Analysis (BEA) and Finite Volume Analysis (FVA) (Glowinski
and Neittaanmaki, 2008).
The earliest solution of the NSE used a nonsimultaneous solver through FEA to
implement the velocity-vorticity (u-v-ω) formulation (Singh
and Li, 2007). In recent years, FEA has been employed quite extensively
in predicting laminar, transition and turbulent flow (Glowinski
and Neittaanmaki, 2008). It is also costly and not easy to solve, especially
for a high range of Reynolds numbers (Re), due to the long processing time required
to attain convergence. Moreover, it is very expensive with respect to storage
for the refined mesh points (El-Emam, 2006; El-Emam
and Shaheed, 2008).
Recently, many researchers have demonstrated that a neural network approach
is fast and reliable for solving complex prediction problems (Vo-Dinh
et al., 2005). This approach is based on many types of architectures,
such as an artificial neural network with single or multiple hidden layers (Lefik
and Wojciechowski, 2005; Meybod and Beigy, 2002).
Neural models in fluid applications were developed by Golcu
(2006) to study the head-flow curves of deep well pump impellers with splitter
blades.
Neural networks with a clustering approach were developed by many researchers.
Some methods are classified as agglomerative hierarchical methods, such as single
linkage, complete linkage, average linkage, median linkage, centroid, Ward and
others (He et al., 2005). Frossyniotis
et al. (2005) applied a multi-clustering method based on combining
several runs of a clustering algorithm to obtain a distinct partition of the
data that is not affected by the initialization.
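As a minimal illustration of one of the agglomerative hierarchical methods cited above (single linkage; this is a generic sketch, not the algorithm of He et al. or Frossyniotis et al.), clusters can be merged bottom-up until a target count remains:

```python
from itertools import combinations

def single_linkage(points, k):
    """Agglomerative clustering with single linkage: repeatedly merge the
    two clusters whose closest members are nearest, until k clusters remain."""
    clusters = [[p] for p in points]  # start with singleton clusters

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    while len(clusters) > k:
        # pair of clusters with the smallest single-linkage distance
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: min(dist(a, b)
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        clusters[i].extend(clusters[j])  # i < j, so deleting j is safe
        del clusters[j]
    return clusters

# two obvious groups on a line
groups = single_linkage([(0.0,), (0.1,), (0.2,), (5.0,), (5.1,)], k=2)
```

Complete, average, median, centroid and Ward linkage differ only in how the inter-cluster distance in the `min` step is defined.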
In this study, a Dual-level Clustering (DC) technique based on the backpropagation
algorithm and an Adaptive Neural Network (ANN) with Modified Adaptive Smoothing
Error (MASE) is proposed to compute a compressible flow represented by velocity
profiles through symmetrical double steps channels. The clustering is based
on the speed of flow, the flow deflection and the average of the false diffusion
error.
The clusters and their subclusters are fixed to three according to the flow regime
(laminar, transition and turbulent), while the number of patterns for each subcluster
changes according to DC. The proposed training algorithm overcomes a large
amount of training time, instability and inaccuracy of results and a large number
of patterns for each cluster.
PROBLEM DEFINITION
Fluid agitation and mixing can be generated by the fluid flowing from one step to another in symmetrical double steps channels when a sudden expansion of the flow diameter occurs (Fig. 1). This, however, increases the main velocity of the separating flow that is produced by the other step. Such a problem presents an even greater challenge to the stability and convergence capability of the solution procedure. Our motivation is to enhance the stability and convergence criteria by implementing the DC algorithm with an ANN approach.
PROBLEM FORMULATIONS
Equations 1-3 represent the governing
equations (NSEs) for a two-dimensional steady state of compressible flow in terms
of the primitive variables u-v-p (Glowinski and Neittaanmaki,
2008; Vo-Dinh et al., 2005; Eker
and Serhat, 2006), where u, v are the velocity functions, p is the pressure function,
μ is the fluid dynamic viscosity, ρ is the fluid density and Re is defined as ρVd/μ, with d being a characteristic length chosen to be the width of the channel
and V the inlet velocity.
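The definition Re = ρVd/μ above can be sketched as a small helper (the sample property values are illustrative, not taken from the study):

```python
def reynolds_number(rho, V, d, mu):
    """Re = rho * V * d / mu, with d the channel width and V the inlet velocity."""
    return rho * V * d / mu

# illustrative air-like properties through a 0.05 m wide channel
Re = reynolds_number(rho=1.2, V=0.5, d=0.05, mu=1.8e-5)  # dimensionless
```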
The second order velocity equations are obtained by manipulating the vorticity
ω and the continuity equation (Eq. 4, 5):
The vorticity transport equation (Eq. 6) is obtained by taking the curl of
the momentum equations and eliminating the pressure term.
Where:
NUMERICAL SOLUTION OF FLUID FLOW USING FEA WITH ADAPTIVE INCREMENTAL LOADING
FEA requires that the domain under consideration be partitioned into
a number of elements which are defined by a certain fixed topology in terms
of nodes. Breaking the original domain into a set of elements is not an easy
job, especially for problems with moving boundaries, free surfaces, or complex
boundaries (Glowinski and Neittaanmaki, 2008), as in Fig.
1a-c.
Obviously, numerical formulas of Eq. 4-6
are always subject to evaluation with unsatisfactory accuracy and possible errors.
An approximate solution of the numerical formula using FEA gives high accuracy
and stability when adaptive mesh points are considered, as in Fig.
1, to overcome the problem of discretization error. The amount of this error
depends on four factors: the order of the shape function, the size of the elements, the shape of
the elements and the arrangement of the elements in the domain (El-Emam,
2006; El-Emam and Abdul Shaheed, 2008).
The numerical solution shows high confidence in comparison with the experimental
results introduced by Glowinski and Neittaanmaki (2008).
Unfortunately, this solution needs more time regarding the amount of computation
of Eq. 4-6 to find the fluid flow behavior
(patterns) for all ranges of Re (0-4500). Consequently, adaptive incremental
loading is effectively introduced in this study to reduce the number of patterns.

Fig. 1: 
(a-c) Three types of symmetrical double steps channels with mesh points 
The numerical solution of the u-v-ω functions on channels such as those shown
in Fig. 1 requires the following boundary conditions:
Condition 1: ∀ (x, y) ∈ AB (inlet), a Dirichlet (C^{0}-BC) condition
is used to find u (x, y) and v (x, y) as follows:
Condition 2: ∀ (x, y) ∈ BC ∪ CD ∪ DE ∪ EF ∪ FG (lower solid boundaries)
and by using Dirichlet (C^{0}-BC), we have:
where ω (x, y) is not specified at this side of the boundary. FEM is used
to find its approximate value.
Condition 3: ∀ (x, y) ∈ GH and by using C^{1} B.C
and C^{0} B.C for u (6, y) and v (6, y), respectively, we obtain:
where, (n) refers to the normal boundary direction.
Condition 4: ∀ (x, y) ∈ AH and by using C^{1} B.C
and C^{0} B.C for u (x, 1) and v (x, 1), respectively, we have:
Equations 12-14 are obtained from Eq.
4-6 with the use of the Galerkin weighted residual method
based on adaptive incremental loading (Glowinski and Neittaanmaki,
2008; Kaczmarczyk and Waszczyszyn, 2005), where
N_{1i}, N_{2i} and N_{3i} are linear Lagrange polynomials
(weight functions) for the quadrilateral elements (Vo-Dinh
et al., 2005) in the channel’s domain Ω.
FEA with the modified Newton’s method is used to find the variation vectors
δu, δv and δω (Eq. 15-17).
The δRe is the adaptive incremental loading and its value is changed from
one pattern to the next according to the value of ,
where R_{1i}, R_{2i} and R_{3i} are the residuals at
node i.
Substituting the shape functions, we obtain the following:
Where:

Fig. 2: 
Cooperation between FEA and ANN 
and
Where:
and
Where:
In this study, simultaneous solvers on the numerical model represented by Eq.
18, 22 and 26 are implemented to find
the fluid flow behavior as input patterns to the proposed three stages (Fig.
2).
NEURAL NETWORKS MODELING
For nonlinear rheological phenomena, the neural network approach is promising
as an alternative technique, where a neural network may consist of a large
number of highly interconnected neurons. A fully connected multilayered neural
network is presented in this study. Basically, there are four key parameters
that characterize a neural net architecture: the first is the number of layers,
the second is the number of neurons at each layer, the third is the kind of
connectivity among layers and the fourth is the kind of activation function
used within each neuron (Golcu, 2006; Kaczmarczyk
and Waszczyszyn, 2005; Payne, 2006). A neural network
can in principle have any number of layers, each consisting of a various number
of neurons. The most common neural network architecture, comprised of
3 layers (Hovakimyan et al., 2002) (Input, Hidden
and Output), is used in this study.
Figure 3 shows a simple 3-layer (n-p-m) neural network architecture
with (n) neurons in the input layer, (p) neurons in the hidden layer and (m)
neurons in the output layer.
Usually, the number of neurons in the input layer is equal to the number of
available features. The present training algorithm uses Re^{k+1} and
the velocity functions u-v for Re^{k} as input features, where k
is the current iteration index.

Fig. 4: 
Neural network layers with BPA 
The number of neurons in the output layer equals
the number of output features, represented by the velocity profile for Re^{k+1}. On the other hand, the number of neurons in the hidden layer needs to be adjusted
during training. Usually, there is a trade-off between the accuracy of the output
and the number of neurons in the hidden layer. Complex problems require a large
number of neurons in the hidden layer, and the kind of connectivity among layers
can be full or partial (Frean et al., 2006).
In the proposed training algorithm, a fully connected architecture is applied
where each neuron in a layer is connected to all neurons in the next layer, as
shown in Fig. 4.
TRAINING OF COMPRESSIBLE FLUID FLOW
The training process presented in this paper employs backpropagation with
ANN through three steps: the feedforward of the input training pattern, the
backpropagation of the associated error (BP) and the adjustment of the weights
(ADJ). Figure 4 shows the (n-p-m) neural network, where the
solid arrow refers to many-to-one or one-to-many transitions, the dotted arrow
refers to a one-to-one transition and the dashed arrow refers to a send action for
the adjustment process.
During the feedforward step, each input neuron I_{i}, i = 1…n receives
an input signal and broadcasts this signal to each of the hidden neurons H_{j},
j = 1…p, as in Eq. 30.
Each hidden neuron computes its activation and sends its signal to each of
the output neurons O_{k}, for k = 1…m. Each output neuron O_{k}
computes its activation as in Eq. 31 to form the output signal of the network.
The type of activation function implemented in this work is the bipolar sigmoid,
working in the range [-1, 1]. This function is given in Eq. 32 and its first
derivative is shown in Eq. 33.
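Since the images of Eq. 30-33 are not reproduced here, the following sketch uses the standard bipolar sigmoid form f(x) = 2/(1 + e^(-x)) - 1, whose derivative is (1 + f)(1 - f)/2, together with a plain forward pass of an n-p-m network (bias terms omitted for brevity):

```python
import math

def bipolar_sigmoid(x):
    """Bipolar sigmoid activation; output lies in (-1, 1)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

def bipolar_sigmoid_prime(f):
    """Derivative expressed in terms of the activation value f itself."""
    return 0.5 * (1.0 + f) * (1.0 - f)

def feedforward(x, V, W):
    """One forward pass of an n-p-m network: x has length n,
    V is the n x p input-to-hidden weight matrix, W is p x m."""
    h = [bipolar_sigmoid(sum(x[i] * V[i][j] for i in range(len(x))))
         for j in range(len(V[0]))]          # hidden activations (Eq. 30 style)
    o = [bipolar_sigmoid(sum(h[j] * W[j][k] for j in range(len(h))))
         for k in range(len(W[0]))]          # output activations (Eq. 31 style)
    return h, o
```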
The first derivative of the error factor represented by Δ_{k}, k =
1…m (Eq. 34) is computed to show the associated error for
the specific pattern at the output layer (Meybod and Beigy,
2002; Hovakimyan et al., 2002). This error
is used to adjust the weight W_{jk} between the hidden neuron H_{j}
and the output neuron O_{k} as illustrated in Eq. 35,
where β ∈[0, 1] is the damping parameter.
Similarly, the first derivative of the error factor Δ_{j}, j =
1,…, p (Eq. 36) is computed to show the associated error for the specific
pattern at the hidden layer. The adjustment to the weight V_{ij} between
input neuron I_{i} and hidden neuron H_{j} is based on the factor
Δ_{j} and the activation of the input neuron, as illustrated in
Eq. 37.
The adjustment of the weight functions is defined in Eq. 38-39.
In this study, β = 0.1 is assumed and the Adaptive Learning rate (AL) is used
to improve the speed of training by changing the rate of learning ∏± during
the training process, as shown in Eq. 40a (El-Emam,
2006; Lefik and Wojciechowski, 2005; Meybod
and Beigy, 2002; Golcu, 2006; Maira et al.,
2002).
The training is repeated several times to update the old values of the two-dimensional
arrays V and W until the convergence criterion given in Eq.
40b is satisfied.
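The damped weight update and the adaptive learning rate can be sketched as follows. Since the images of Eq. 35, 37 and 40a are not reproduced, the growth/shrink factors and the momentum-style damping below are illustrative stand-ins, not the study's exact formulas:

```python
def adapt_learning_rate(rate, prev_error, curr_error, up=1.05, down=0.7):
    """Grow the learning rate while the error keeps falling, shrink it on an
    increase. The factors `up` and `down` are assumptions, not Eq. 40a values."""
    return rate * (up if curr_error < prev_error else down)

def update_weight(w, delta, activation, rate, prev_change, beta=0.1):
    """One weight adjustment: gradient step plus a damping term weighted by
    beta = 0.1, in the spirit of the smoothed updates of Eq. 35 and 37."""
    change = rate * delta * activation + beta * prev_change
    return w + change, change
```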
In addition, the present study introduces a new technique to improve the training
process. This technique is implemented to reduce the effect of errors and speed
up training by using DC. The proposed algorithm performs two clustering levels;
the first level includes three clusters depending on the type of flow (laminar,
transition and turbulent), while the second level (subclusters) depends on the
following:
• 
The average of all angles θ_{3} for all elements
in the specific pattern (Eq. 43), where the angle θ_{3} represents
the inclination of the velocity from the element’s direction, as in Fig. 5 
• 
The patterns are ordered in ascending order within each subcluster according
to the value of
(Eq. 42a-b). In the next section,
this point is discussed to show the effect of sorted patterns on the performance
of the training algorithm 
Figure 5 shows the inclination of an element represented by the ξ-η
axes with respect to the X-Y axes and the velocity direction V, where θ_{1}
is the angle between the ξ-axis and the X-axis, θ_{2} is the angle
between the velocity vector V and the X-axis and θ_{3} is the angle between
the ξ-axis and the velocity vector V.
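The three angles of Fig. 5 follow directly from the two direction vectors; a minimal sketch (assuming θ_{3} = θ_{2} - θ_{1}, wrapped to a principal range, which matches the geometric description above):

```python
import math

def inclination_angles(xi_dx, xi_dy, vx, vy):
    """theta1: element xi-axis vs the X-axis; theta2: velocity V vs the X-axis;
    theta3 = theta2 - theta1: inclination of V from the element direction."""
    theta1 = math.atan2(xi_dy, xi_dx)
    theta2 = math.atan2(vy, vx)
    theta3 = theta2 - theta1
    # wrap theta3 into (-pi, pi]
    while theta3 <= -math.pi:
        theta3 += 2.0 * math.pi
    while theta3 > math.pi:
        theta3 -= 2.0 * math.pi
    return theta1, theta2, theta3
```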
Figure 6a shows a finite state automata transition graph
to formally describe the first phase of the present training algorithm, where
T is the transition, S is the smoothing error and N is moving to the next pattern.
In this study, three clusters denoted by C_{i}, i = 1, 2, 3 are proposed.
Each cluster includes a number of selected appropriate patterns corresponding
to diverse Re, equal to m’, m” and m”’
patterns for the laminar, transition and turbulent clusters, respectively.

Fig. 5: 
Velocity inclination for one element 
Table 1: 
Clusters’ intervals 

These patterns are selected by using FEA with adaptive incremental loading
on a specific type of double steps channel. In the next section, we show that
a small number of patterns is enough to produce an effective learning system for
compressible flow. The proposed intervals for each cluster used in this work
are shown in Table 1.
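Assigning a pattern to its cluster is then a lookup against the Table 1 intervals. Since the table's values are not reproduced here, the thresholds below are illustrative placeholders within the study's overall Re range (0-4500), not the actual intervals:

```python
def cluster_of(Re, laminar_max=2000.0, transition_max=3000.0):
    """Assign a pattern to cluster 1 (laminar), 2 (transition) or 3 (turbulent)
    by its Reynolds number. Thresholds are placeholders, not Table 1 values."""
    if Re <= laminar_max:
        return 1
    if Re <= transition_max:
        return 2
    return 3
```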
Figure 6b shows the finite state automata transition graph
for the second phase of the training algorithm, where the dashed state refers
to the final state of the transition graph. In addition, the transition label MCE
refers to the maximum error of clusters (Eq. 44a, b),
the label ME refers to the maximum error of neurons (Eq. 44c)
and NF refers to the next value of
(Eq. 42b). Additionally, the pattern
in Fig. 6b represents the j^{th} pattern in the i^{th}
cluster and s^{th} subcluster for all i = 1, 2, 3 and s = 1, 2, 3.
The numbers of patterns in subcluster 1 for the laminar, transition and turbulent
clusters are equal to N’, N” and N”’, respectively;
M’, M” and M”’, respectively, for subcluster 2 and
K’, K” and K”’, respectively, for subcluster
3, where:
Equation 42a is used to compute the error Γ_{e}
for each element in the channel domain, where this error depends on Re, mesh
size, channel shape and the inclination of the velocity direction to the grid.


Fig. 6: 
Transitions automata graph for the (a) 1st phase of training on three
clusters and (b) 2nd phase of training algorithm 
The maximum value of Γ_{e} is reached when the angle between the velocity
vector V and the X-coordinate axis is equal to π/4.
THE TRAINING ALGORITHM
The training algorithm is implemented through two phases. The first phase is
to find appropriate patterns with their training process and the second phase
is to reduce the effect of errors. The second phase requires the DC subalgorithm
to apply hierarchical clustering (He et al., 2005;
Silvestre et al., 2008) to each cluster. This
kind of clustering works iteratively to agglomerate patterns into three
subclusters and is based on two steps: the first finds the amount
of flow deflection for each pattern, while the second sorts the patterns in
ascending order in each subcluster according to .
The two phases of the training algorithm and the subalgorithm are presented
below:
First phase of the training algorithm: Find appropriate patterns with
their training process as in the following steps:
Step 1: 
Let q_{1} be the index of external iterations for training clusters 
Let q_{2} be the index of internal iterations for training patterns 
Let u be the size of the current cluster 
Let ρ be the iteration index for training patterns 
Let σ be the maximum number of iterations for training patterns, which is large enough 
Let ∏_{max1} be the maximum number of iterations for the external loop (from one cluster to the next) 
Let ∏_{max2} be the maximum number of iterations for the internal loop (from one pattern to the next) 
Let m’ = m” = m”’ = 1 // the initial setting of the number of patterns for each cluster 
Let δRe^{1} = 50; // the initial value of the incremental loading 
Step 2: 
For (q_{1} = 0; q_{1} < ∏_{max1}; next q_{1}) 
Step 2-2: 
If (i = 1) then u = m’; 
elseif (i = 2) then u = m”; 
elseif (i = 0) then u = m”’; 
Step 2-3: 
For (q_{2} = 0; q_{2} < ∏_{max2}; next q_{2}) 
Step 2-3-1: 
j = (q_{2}+1) mod u; 
// since there are u patterns in the cluster 
Step 2-3-2: 
Implement FEA to find for Re^{j+1} (Eq. 26-29) 
// if it is not calculated before 
Step 2-3-3: 
For (ρ = 1; ρ <= σ; next ρ) 
Step 2-3-3-1: 
Apply training on ; 
Step 2-3-3-2: 
Update the value of the weight functions (Eq. 38-39),
with the smoothing (Eq. 35, 37) for the cluster C_{i}. 
Step 2-3-3-3: 
Use Eq. 40b to check the convergence of the training process for the pattern 
Step 2-3-3-3-1: 
If convergence is satisfied then break training. 
Step 2-3-4: 
Implement the proposed adaptive incremental loading: 
Step 2-3-5: 
Increment by one the number of patterns for the current C_{s} {m’, m”, or m”’} 
Step 3: 
Set the final number of patterns for each cluster, m’, m”, m”’ (Eq. 41). 
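The loop structure of the first phase can be sketched compactly. The FEA solve (Eq. 26-29) and the per-pattern training with its convergence test (Eq. 38-40b) are stubbed out as callables here; this is a control-flow sketch, not the study's implementation:

```python
def first_phase(clusters, fea_solve, train_pattern,
                max_outer=3, max_inner=50, max_epochs=200):
    """Skeleton of the first training phase: cycle over the clusters, generate
    patterns by incremental loading, and train each until convergence.
    `fea_solve` stands in for Eq. 26-29, `train_pattern` for Eq. 38-40b."""
    counts = {c: 1 for c in clusters}           # m', m'', m''' start at 1
    for q1 in range(max_outer):                 # external loop over clusters
        cluster = clusters[q1 % len(clusters)]
        for q2 in range(max_inner):             # internal loop over patterns
            j = (q2 + 1) % counts[cluster]      # u patterns in the cluster
            pattern = fea_solve(cluster, j)     # velocity profile for Re^{j+1}
            for _epoch in range(max_epochs):
                if train_pattern(cluster, pattern):  # True once Eq. 40b holds
                    break
            counts[cluster] += 1                # one more pattern accepted
    return counts

# stub solver and trainer just to exercise the control flow
counts = first_phase(
    ["laminar", "transition", "turbulent"],
    fea_solve=lambda c, j: (c, j),
    train_pattern=lambda c, p: True,            # converges immediately
    max_outer=3, max_inner=4)
```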
Second phase of the training algorithm: Use the DC subalgorithm to reduce
the effect of errors as in the following steps:
Step 1: 
Call the DC subalgorithm (presented below). 
Step 2: 
From the clusters domain, select the i-th cluster C_{i}
that has the maximum sum of subcluster errors (MSE) (Eq. 43-44) 
Step 2-1: 
From the subclusters SC at cluster i, select the subcluster indexed
s that has the maximum sum of neuron errors (Eq. 44a-b). 
Step 2-1-1: 
For each pattern
at the subcluster SC_{s}, find the maximum sum of neuron
errors (Eq. 44b). 
Where:
Step 2-1-2: 
Apply training on the j-th pattern 
Step 3: 
If the clusters’ errors are not flat then go to Step 2. 
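The selection cascade of Steps 2 to 2-1-1 is a nested argmax over summed errors; a minimal sketch (the nested-dict layout of the error table is an assumption for illustration):

```python
def select_pattern(errors):
    """Phase-2 selection: errors[i][s][j] is the summed neuron error of
    pattern j in subcluster s of cluster i. Pick the cluster with the
    largest total error, then its worst subcluster, then its worst pattern."""
    i = max(errors, key=lambda c: sum(sum(sc) for sc in errors[c].values()))
    s = max(errors[i], key=lambda sc: sum(errors[i][sc]))
    j = max(range(len(errors[i][s])), key=lambda p: errors[i][s][p])
    return i, s, j

errs = {1: {1: [0.1, 0.2], 2: [0.05]},
        2: {1: [0.4, 0.9], 2: [0.3]}}
```

The selected pattern is then retrained, and the loop repeats until the cluster errors flatten out.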
DC subalgorithm: Apply hierarchical clustering to each cluster as in
the following steps:
Step 1: 
For each cluster C_{i}, i = 1, 2, 3 and their subclusters
SC_{s}, s = 1, 2, 3 
Put SC_{s} = { } ∀s 
Step 1-1: 
For every pattern 
Step 1-1-1: 
For every element e in ,
e = 1…100 
Step 1-1-1-1: 
Find the angle θ_{3} (Eq. 45). 
Step 1-1-2: 
Find Av (θ_{3}) (Eq. 46) 
Step 2: 
Use Eq. 42a-b to sort the patterns for each subcluster s. 
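The DC subalgorithm's two steps (deflection averaging, then error-ordered sorting) can be sketched as follows. The subcluster boundary angles and the dictionary layout of a pattern are illustrative assumptions, since the images of Eq. 42, 45 and 46 are not reproduced here:

```python
def dual_level_clustering(patterns, error_of, bounds=(15.0, 30.0)):
    """Sketch of the DC subalgorithm: each pattern carries a list of
    per-element velocity inclinations theta3 (in degrees). The pattern's
    average |theta3| decides its subcluster (the `bounds` values are
    placeholders, not Eq. 45-46), and each subcluster is then sorted in
    ascending order of the supplied `error_of` key."""
    subclusters = {1: [], 2: [], 3: []}
    for p in patterns:
        av = sum(abs(t) for t in p["theta3"]) / len(p["theta3"])
        s = 1 if av <= bounds[0] else 2 if av <= bounds[1] else 3
        subclusters[s].append(p)
    for s in subclusters:                       # second step: ascending error
        subclusters[s].sort(key=error_of)
    return subclusters

pats = [{"id": "a", "theta3": [5.0, 10.0]},
        {"id": "b", "theta3": [40.0, 50.0]},
        {"id": "c", "theta3": [20.0, 25.0]}]
sc = dual_level_clustering(pats, error_of=lambda p: sum(p["theta3"]))
```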
RESULTS AND DISCUSSION
To construct a learning system for compressible fluid flow in two-dimensional
channels as shown in Fig. 1, it is necessary to find the number
of patterns for each cluster and subcluster of those channels. These patterns
are selected through the training system stages (Fig. 2).
The numerical model using FEA with adaptive incremental loading is constructed
from Eq. 18, 22 and 26,
which are executed simultaneously to achieve these patterns.
The symmetrical double steps channel’s domain is divided into finite elements
using the adaptive mesh point technique (El-Emam, 2006;
El-Emam and Shaheed, 2008), as shown in Fig.
1. This technique is used to provide adequate mesh point refinement at
the critical region of the specific channel. We suggest that a sufficient
number of elements for this study is 100.

Fig. 7a: 
Symmetrical double backward step channel with expansion ratio = 2/3 

Fig. 7b: 
Recirculation length in a single step channel vs Reynolds No. 
It is important to check whether the proposed numerical simulation using FEM with
adaptive incremental loading leads to physically coherent and accurate results.
Consequently, the proposed approach is compared with the existing experimental
and numerical literature (Glowinski and Neittaanmaki, 2008;
Vo-Dinh et al., 2005). This comparison is implemented
on a steps channel with an expansion ratio equal to 2/3 (Fig. 7a)
to show the variation of the length of the separated region as a function of Re.
The agreement appears to be fairly good over all Re numbers, as shown
in Fig. 7b.
In addition, the dependence of the recirculation length (normalized with respect
to the step height) on Re is shown in Fig. 7b; it is apparent
that the ratio X/h increases almost linearly with Re, reaches a peak value (~7)
under laminar conditions and reaches a constant value (~14) under transition/turbulent
conditions.
This study concentrates on the training system and identifies the essential problems that play a basic role in training efficiency. Such problems are evaluated and eventually removed through AL with the DC technique. In this section, we define a set of factors to speed up the training system and to reduce the effects of errors.
Channel geometry factor: The training system is applied to the three types
of channels shown in Fig. 1. The channel geometry affects the speed
of training, as shown in Fig. 8, in which we observe that channel 3 consumes
the minimum number of iterations due to the fully sloped enlargements of the
channel shape. This type of channel causes the two vortices near the steps to slip
toward the outlet with minimum mixing, whereas channel 2 consumes the maximum number
of iterations due to its partially sloped steps, which generate
higher velocities near the wall and push the vortex region into the center of the channel
to generate the maximum mixing. Reduction of errors is achieved when the training
is forwarded through the clusters starting from the laminar regime.

Fig. 8: 
Speed of training on three types of channels 



Fig. 9: 
Speed of training with/without AL on (a) channel 1, (b) channel 2 and
(c) channel 3 
Rate of learning factor: For the training process, two approaches are proposed.
The first is based on using an adaptive learning rate (AL) on the learning factor
∏± according to Eq. 40a, where the values of the parameters
K and γ are estimated at 0.015 and 0.8, respectively. The second
is to fix the value of the learning rate ∏±. Figure 9a-c
show that the speed of the training process with AL is more stable and faster than
training without AL.
The initial rate of learning factor: This factor is very important to
speed up the training process and to obtain high accuracy. Accordingly, we study
the behavior of the present training process with respect to the number of iterations
and the error distribution. Figure 10 shows the speed of the training
process with respect to different values of the learning rate (∏±) for the three
channels. The maximum peak appears at ∏± = 0.1, while the minimum peak
appears at ∏± = 0.8. The best result is reached when ∏± = 0.7 is
selected, due to the minimum error distribution (Fig. 11).
Using DC factor: In the second phase of the proposed training algorithm,
DC is used effectively to reduce the effects of errors, together with the order of the patterns
in each subcluster. These play basic roles in the amount of damping
error, so it is necessary to sort the patterns in ascending order according
to
for each subcluster. On the other hand, DC is not promising when patterns are
selected randomly or sorted according to the Euclidean distance presented in Eq.
47a, where I^{P} and O^{P} are the input and output signals
of the pattern P.
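For reference, the Euclidean-distance ordering that DC outperforms can be sketched as below. Since the image of Eq. 47a is not reproduced, this takes the plausible reading of a Euclidean distance over the concatenated input and output signals of two patterns:

```python
def pattern_distance(I_p, O_p, I_q, O_q):
    """Euclidean distance between two patterns over their input and output
    signals (one plausible reading of Eq. 47a; the exact form is not shown)."""
    sq = sum((a - b) ** 2 for a, b in zip(I_p, I_q))
    sq += sum((a - b) ** 2 for a, b in zip(O_p, O_q))
    return sq ** 0.5
```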
Figure 12a-c show the comparison between the
sum of pattern errors and the rate of learning with/without DC and with/without
sorting the patterns for each subcluster in the three channels.
In addition, the learning process with DC works well when a pattern with
the maximum total error (Eq. 43, 44) is selected rather than selecting the one that includes the neuron with the maximum
error (Eq. 47a-c).

Fig. 10: 
Speed of training process with respect to the rate of learning 

Fig. 11: 
Sum of pattern’s errors with respect to the rate of learning 
Figure 13a-c show the level of error
if the pattern is selected according to Eq. 43, 44
or Eq. 47b-d for the three channels.
It appears that the error factor oscillates through the steps of damping
error, whereas the training process is more stable if the pattern is selected according
to Eq. 43, 44. In addition, the error
remains the same for all types of channels when Eq. 43, 44
are implemented, in contrast to implementing Eq. 47b-d,
which shows instability and changes from one channel to another. It is obvious that
the maximum error occurs at channel 2 while the minimum error occurs at channel
3.
Fluid flow behavior and number of patterns factor: The behavior of the
fluid flow is also affected when patterns are selected by using the adaptive incremental
loading δRe. Figure 14a shows the number of patterns
for each cluster in each channel and Fig. 14b-d
show the number of patterns for each subcluster in the three channels. It appears
that channel 2 needs the maximum number of patterns, equal to 223, apportioned
among the three clusters C_{1}, C_{2} and C_{3} with 140, 43
and 40 patterns, respectively. Channel 1 needs 137 patterns, apportioned
among the three clusters C_{1}, C_{2}, C_{3} with
70, 37 and 30 patterns, respectively. Finally, channel 3 needs the minimum number
of patterns, equal to 104, apportioned among the three clusters C_{1},
C_{2}, C_{3} with 43, 31 and 30 patterns, respectively.


Fig. 12: 
Comparison between sums of pattern’s errors vs. rate of learning
for (a) channel 1, (b) channel 2 and (c) channel 3 


Fig. 13: 
Comparison between methods of selecting patterns for (a) channel 1, (b)
channel 2 and (c) channel 3 


Fig. 14: 
No. of patterns for each (a) cluster at each channel, (b) subcluster
at channel 1, (c) subcluster at channel 2 and (d) subcluster at channel
3 
COMPUTER SIMULATION RESULTS
The proposed learning system calculates the velocity of the compressible flow
through double steps channels for various values of Re in the interval (100-4000).
Figure 15a-d to 17a-d
show these results. It appears that small vortices exist at the steps of the channels.
As Re increases, the vortex region extends in length and the upper vortex
spins over the lower vortex. The interaction of the two vortices diverts both of
them slightly upward into the center of the channel, especially for channel 2. A strong
interaction appears when the value of Re is increased.




Fig. 15: 
(a-d) Velocity profile for compressible flow through channel
1 (Re = 100-4000) 




Fig. 16: 
(a-d) Velocity profile for compressible flow through channel
2 (Re = 100-4000) 




Fig. 17: 
(a-d) Velocity profile for compressible flow through channel
3 (Re = 100-4000) 
In addition, new small vortices are generated between the two main vortices,
especially for channel 1 and channel 2.
CONCLUSION
The present study demonstrates a new training algorithm based on two phases
through a successful ANN with DC. The proposed algorithm is used to predict
compressible fluid flow through a specific type of channel geometry (Fig.
1). FEA with a new approach named adaptive incremental loading is utilized
in the first phase of the training algorithm to prepare appropriate patterns.
This stage works simultaneously alongside a training system using ANN with DC. The
proposed structure of the neural network, which works through the backpropagation
algorithm, includes one input, one hidden and one output layer. It has been
shown that it is possible to use a few patterns to simulate data with reasonable
accuracy for three clusters (laminar, transition and turbulent) represented
by the velocity profile function, and it gives encouraging results in many fields
of application (fluid mixing, fluid agitation, etc.) (Yu
and Morales, 2005; Rajkumar and Bardina, 2002; Creuse
and Mortazavi, 2004). The success of the proposed training algorithm can
be attributed to three factors. The first is the employment of a neural network
that is an excellent approximator of any type of flow through any type of channel
geometry (regular or irregular shapes), especially if the training process starts
from the laminar flow patterns (cluster 1). The second factor is the speed of training
due to the use of ANN with a proper value of AL on the minimum number of selected
patterns, and the third factor is the satisfaction of stability and accuracy for
a high range of Re due to the use of MASE and DC based on .
Finally, the channel’s shape plays a basic role in the speed of training,
the accuracy of the results and the number of patterns.
ACKNOWLEDGMENT
The authors would like to thank Prof. R.H. Al Rabeh from Cambridge University for his support and help. This support is gratefully acknowledged.