INTRODUCTION
A special class of Artificial Neural Networks (ANN) is the third generation,
called Spiking Neural Networks (SNN), in which neuron models communicate by sending
and receiving action potentials (“spike trains”). In recent years, experimental
evidence has accumulated that most biological neural systems encode information
using spikes (Hopfield, 1995). These experimental results from neurobiology have
led to more detailed study of spiking neural networks, which employ spiking neurons
as computational units (Maass, 1997). Because of this property, an SNN is in
principle well suited to tasks that require fast and efficient computation (e.g.,
speech recognition), where the timing of the input and firing signals carries
important information. In terms of computational power, spiking neural networks
are more powerful than both sigmoidal gates and perceptrons (Maass, 1997).
Understanding the capabilities and restrictions of this new type of network also
provides additional information for the theoretical investigation of
third-generation neural network models (Maass, 1997). Mathematical models of the
spiking neuron do not fully explain the enormously complex computational function
of a biological neuron. The computational units of the previous two generations
of neural network models are simplified models that capture only a few aspects
of biological neurons. In comparison, spiking neuron models are substantially
more realistic: they express the actual output of a biological neuron much better
and therefore allow a theoretical investigation of the potential of using time
as a resource for computation and communication (Maass, 1997).
The importance of the threshold value for learning in SNN: In the simplest
(deterministic) model of a spiking neuron, one assumes that a neuron (v) fires
whenever its potential (p) reaches a certain threshold (θ). This potential (p)
is the sum of the so-called EPSPs (excitatory postsynaptic potentials) and IPSPs
(inhibitory postsynaptic potentials), which result from the firing of other
neurons (u) that are connected through a synapse to neuron (v) (Maass, 1997).
Furthermore, it has been assumed that fast changes of the value of w(t) are also
necessary for computations in biological neural systems (Maass, 1997).
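The deterministic threshold model described above can be sketched in a few lines of Python. The kernel shape and all numbers below are illustrative assumptions, not taken from the cited sources:

```python
import math

# Minimal sketch of the deterministic spiking-neuron model described above:
# neuron v fires when its potential p(t), the sum of weighted EPSP/IPSP
# contributions from presynaptic neurons u, reaches the threshold theta.
# The kernel and all numbers here are illustrative assumptions.

def psp(t_since_spike, tau=5.0):
    """Toy postsynaptic-potential kernel (decaying exponential)."""
    if t_since_spike < 0:
        return 0.0
    return math.exp(-t_since_spike / tau)

def potential(t, presynaptic_spikes):
    """p(t): weighted sum of EPSPs (w > 0) and IPSPs (w < 0)."""
    return sum(w * psp(t - t_f) for w, t_f in presynaptic_spikes)

def fires(t, presynaptic_spikes, theta):
    """Neuron v fires at time t iff p(t) >= theta."""
    return potential(t, presynaptic_spikes) >= theta

# (weight, firing time) pairs for presynaptic neurons u
spikes = [(1.0, 0.0), (0.8, 1.0), (-0.5, 2.0)]
print(fires(2.0, spikes, theta=1.0))
```

Note that the same inputs may or may not trigger a spike depending on θ, which is exactly why the choice of threshold matters for learning.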
The existing literature on spiking neural network computation is closely related
to results from neurobiology (Maass, 1997). The theoretical investigation of
spiking neural networks is not a new research field; in fact, it has a long
tradition in theoretical neurobiology, biophysics and theoretical physics. On
the other hand, a mathematically rigorous analysis of the computational power
of spiking neural networks has not yet been carried out (Maass, 1997). Maass
(1997) claims that such an analysis will be useful for understanding the
computational power of complex biological neural systems. SNNs have turned out
to be very powerful (Maass, 1997), but little is still known about their
possible learning and higher computational mechanisms (Natschlager and Ruf, 1998).
Pham et al. (2008) stated that, in order to keep relevant neurons active, a low
threshold value was assigned initially and then increased after each training
epoch in small equal steps up to a preset value. The initial threshold was set
to 60*0.5*0.7 and increased up to 60*0.5*0.83, where 60 is the number of input
neurons and 0.5 is the average connection weight. Pham et al. (2007) set the
threshold value to 60*0.5*0.5 for SOWA_SNN, with 60 again the number of input
neurons and 0.5 the average connection weight. In Pham and Sahran (2006), the
threshold θ is a constant and is equal for all neurons in the network
(Shahnorbanun et al., 2010).
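The epoch-wise schedule reported by Pham et al. (2008) can be sketched as follows. Only the start and end values (60*0.5*0.7 and 60*0.5*0.83) are given in the text, so the number of steps below is a hypothetical choice:

```python
# Threshold schedule from Pham et al. (2008): start low to keep relevant
# neurons active, then raise theta in small equal steps to a preset value.
# n_steps is a hypothetical choice; the paper gives only the endpoints.

N_INPUT = 60        # number of input neurons
AVG_WEIGHT = 0.5    # average connection weight

def threshold_schedule(start_factor=0.7, end_factor=0.83, n_steps=10):
    base = N_INPUT * AVG_WEIGHT
    start, end = base * start_factor, base * end_factor
    step = (end - start) / n_steps
    return [start + k * step for k in range(n_steps + 1)]

sched = threshold_schedule()
print(sched[0], sched[-1])   # approximately 21.0 and 24.9
```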
Many SNN learning algorithms have been proposed for supervised learning (Bohte
et al., 2002a; Xin and Embrechts, 2001; Pham and Sahran, 2006; Ruf and Schmitt,
1997; Sporea and Gruning, 2012) and clustering (Bohte et al., 2002b; Natschlager
and Ruf, 1998; Pham et al., 2007); however, as far as the authors are aware, none
of them clearly states guidelines for selecting the threshold value, which is
usually chosen empirically to give the best result. The threshold plays an
important role in SNN learning, as it determines when a neuron should fire, and
the input time window parameter plays a significant role in SNN performance. The
question raised here is: can the right and suitable threshold parameter be set
mathematically, depending on the other parameters of the SNN? This study
addresses this question, and the findings show that the threshold boundary,
beyond which the threshold cannot go, can indeed be found mathematically. The
suitable threshold must lie within this range and, based on this study, the
average value is recommended, as it is almost the same as the values proposed
by Pham et al. (2007) and Pham and Sahran (2006).
This study presents two main concepts. The first is to find the threshold
boundary and to select a suitable threshold value within this range, using a
mathematical model; the threshold depends on the relation between the spike time
and its boundary. The second is to outline the input time window boundary. The
range of the input time window is required to specify the spike time boundary,
and the spike time boundary is, in turn, required to find the threshold boundary.
The issues that arise when selecting the input time window range are discussed
with regard to computational cost. Two SNN learning algorithms, S_LVQ (for a
classification application) and SOWA_SNN (for a clustering application), were
selected for applying the method to find the threshold boundary.
How can the correct threshold value for an SNN be assigned, and what should that
value actually be? Given the threshold boundary, a suitable threshold value can
be assigned from within that range. Further studies on the selection of the
correct threshold need to be carried out in the future.
This work studies two models for learning in temporal-coding SNNs in order to
find the threshold boundary. The first is applied to a classification application
and the second to a clustering application. The main purpose is to find the
threshold boundary as in Eq. 1:
MATERIALS AND METHODS
Time window parameter boundary: The time window (tw) plays a significant
role in SNN performance (Pham et al., 2007). Usually, the time window boundary
is assigned experimentally, tw_{exp} ∈ [tw_{min.exp}, tw_{max.exp}], to get the
best result, with no guidelines on how to select it or on what the effects of
this choice are on the computational cost of SNN learning. The issue here is to
assign the right value for the (tw) parameter depending on the other parameters
at the preprocessing stage. It is suggested to derive the value of (tw) from
the Spike Time parameter (st), which is passed to the spike response function
ε(st). The Spike Response Function (SRF) is basically a generalized leaky
integrate-and-fire model. It describes the biophysical mechanisms of the neuron
mainly by means of its membrane potential and gives much importance to the time
elapsed since the last firing event. The model describes the state of a neuron
j at time t by the state variable u_{j}(t) (Maass and Bishop, 2001). The spike
time is assigned as in Eq. 2:
st = tw − (input + delay); (st ≥ 0) ⇒
tw − (input + delay) ≥ 0 ⇒ tw ≥ (input + delay) 
(2) 
The following Eq. 3 and 4 depend on Eq. 2 to specify the boundary
tw ∈ [tw_{min}, tw_{max}], where input ∈ [tc_{min}, tc_{max}] and
delay ∈ [d_{min}, d_{max}]. Here, tw_{min} and tw_{max} refer to the minimum
and maximum values of the time window, respectively, which are specified here;
d_{min} and d_{max} refer to the minimum and maximum values of the delay; and
tc_{min} and tc_{max} refer to the minimum and maximum values of the temporal
coding, respectively:
tw_{min} = tc_{min}+d_{min} 
(3) 
tw_{max} = tc_{max}+d_{max} 
(4) 
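Eq. 3 and 4 can be checked with a short sketch; the numeric ranges for the temporal coding and the delays below are hypothetical placeholders:

```python
# Time window boundary from Eq. 3 and 4:
#   tw_min = tc_min + d_min,  tw_max = tc_max + d_max
# The tc/delay ranges below are hypothetical placeholders.

def time_window_boundary(tc_min, tc_max, d_min, d_max):
    tw_min = tc_min + d_min   # Eq. 3
    tw_max = tc_max + d_max   # Eq. 4
    return tw_min, tw_max

tw_min, tw_max = time_window_boundary(tc_min=0.0, tc_max=100.0,
                                      d_min=1.0, d_max=16.0)
print(tw_min, tw_max)   # 1.0 116.0
```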
Assigning the value of tw plays a significant role in SNN performance;
assigning it experimentally can cause the SNN to face several problems:
• If tw_{min.exp} < tw_{min}, then no neuron at the hidden or output layer can
be activated in the range (tw_{min.exp}, tw_{min}), since st < 0 for all
neurons within this range; this affects SNN performance and increases the
network's computational cost without any benefit 
• If tw_{min.exp} > tw_{min}, then some neurons at the hidden or output layers
will not get the chance to be activated within the range (tw_{min},
tw_{min.exp}), since st could be greater than zero within this range for some
neurons; this affects SNN performance 
• If tw_{max.exp} > tw_{max}, then all neurons at the hidden or output layers
will always be activated within the range (tw_{max}, tw_{max.exp}), since
st > 0 for all neurons within this range; this increases the computational
cost of SNN learning 
• If tw_{max.exp} < tw_{max}, then some neurons at the hidden or output layers
will not get the chance to be activated within the range (tw_{max.exp},
tw_{max}), since st could be greater than zero within this range; this affects
SNN performance 
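The four cases above can be summarized in a small helper that flags how an experimentally chosen window relates to the derived boundary. This is an illustrative sketch, not code from the paper, and the numbers are placeholders:

```python
# Classify an experimentally assigned window [twe_min, twe_max] against the
# derived boundary [tw_min, tw_max]; each flag matches one bullet above.

def check_time_window(twe_min, twe_max, tw_min, tw_max):
    issues = []
    if twe_min < tw_min:
        issues.append("wasted cost: no neuron can activate in (twe_min, tw_min)")
    if twe_min > tw_min:
        issues.append("lost activations in (tw_min, twe_min)")
    if twe_max > tw_max:
        issues.append("extra cost: all neurons active in (tw_max, twe_max)")
    if twe_max < tw_max:
        issues.append("lost activations in (twe_max, tw_max)")
    return issues

print(check_time_window(0.0, 130.0, tw_min=1.0, tw_max=116.0))
```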
By outlining the input time window boundary, the number of free parameters that
used to be assigned experimentally is decreased by one, since the time window
is now assigned depending on the delay and time coding parameters, which are
set by the experimenter at the preprocessing stage.
Spike time boundary: Outlining the time window shows that the (tw) parameter
should be in the range [tw_{min}, tw_{max}], as outlined in Eq. 3 and 4. The
(tw) boundary depends on the temporal coding and delay parameters, and from
these the spike time st ∈ [st_{min}, st_{max}] is defined on a closed interval
and can be specified from Eq. 2 as follows:
st_{min} = tw_{min} − tw_{min} = 0 
(5) 
st_{max} = tw_{max} − tw_{min} 
(6) 
Eq. 5 and 6 define st_{min} and st_{max}, respectively; this will help to find
the threshold boundary, since (st) is defined on a closed interval, as shown
in the rest of this study.
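Eq. 5 and 6 follow directly from Eq. 2 evaluated at the endpoints of the tw range; as a quick sketch (the tw values are placeholders):

```python
# Spike time boundary from Eq. 5 and 6:
#   st_min = tw_min - tw_min = 0,  st_max = tw_max - tw_min

def spike_time_boundary(tw_min, tw_max):
    return 0.0, tw_max - tw_min

st_min, st_max = spike_time_boundary(tw_min=1.0, tw_max=116.0)
print(st_min, st_max)   # 0.0 115.0
```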
Threshold boundary for classification application (S_LVQ): The network
architecture for spiking learning vector quantization (S_LVQ) (Pham and Sahran,
2006) consists of three layers (input, hidden and output); it is a feed-forward
network, fully connected between the input and hidden layers, with multiple
delayed synaptic terminals, and partially connected between the hidden and
output layers, as shown in Fig. 1. Each connection is characterized by a weight
and a delay value. The details of this network, which was used by Pham and
Sahran (2006) for control chart datasets, are as follows: N_{input} = 60,
N_{hidden} = 24, N_{output} = 6 and N_{sub} = 16, where N_{input} refers to the
number of input neurons, N_{hidden} to the number of hidden neurons, N_{output}
to the number of output neurons and N_{sub} to the number of subconnections.
At the hidden layer, each neuron receives N_{input} inputs from the input
layer, and each synapse between the input and hidden layers consists of N_{sub}
subconnections. This means each neuron at the hidden layer receives
N_{input}*N_{sub} inputs at a time. The synapse potential for each
subconnection can be calculated by Eq. 7:
At the hidden layer, the minimum or maximum value any neuron could have is the
minimum or maximum threshold value, respectively, that could be assigned to the
S_LVQ; Eq. 8 and 9 define θ_{min} and θ_{max}, respectively:

Fig. 2: 
Pseudo code for calculating the maximum and minimum ε(st)
for a hidden neuron received from one input in spiking learning vector quantization
(S_LVQ) 
The value of:
and:
is calculated by applying the pseudo code shown in Fig. 2.
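Since the pseudo code of Fig. 2 is not reproduced here, the following Python sketch shows one plausible reading of it: scan candidate spike times on the closed interval, sum the kernel response over the delayed subconnections, and keep the minimum and maximum. The double-exponential kernel and the delay values are assumptions for illustration, not taken from Eq. 11:

```python
import math

# Assumed kernel (NOT Eq. 11 from the paper): difference of exponentials
# with membrane constant a and synapse constant b.
def eps(st, a=120.0, b=20.0):
    return math.exp(-st / a) - math.exp(-st / b) if st >= 0 else 0.0

def min_max_response(delays, st_max, step=0.1):
    """Min/max of sum_k eps(st - d_k) over st in [0, st_max] (grid scan)."""
    lo, hi = float("inf"), float("-inf")
    st = 0.0
    while st <= st_max:
        total = sum(eps(st - d) for d in delays)
        lo, hi = min(lo, total), max(hi, total)
        st += step
    return lo, hi

delays = list(range(1, 17))   # 16 subconnection delays (assumed: 1..16 ms)
lo, hi = min_max_response(delays, st_max=250.0)
print(lo, hi)
```

Scaling the resulting extrema by the weight boundary and by N_{input} would then give candidate θ_{min} and θ_{max} values, in the spirit of Eq. 8 and 9.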
The values of N_{input}, N_{sub} and the weight boundary [wght_{min},
wght_{max}] are known; the boundary of ε(st) needs to be found. The absolute
minimum and maximum values of the ε(st) function can always be found
mathematically, since the Spike Time (st) parameter is defined on a closed
interval, as shown in Eq. 5 and 6. The (st) is defined by Eq. 10:
The spike response function ε(st) proposed by Pham and Sahran (2006) is
described by Eq. 11, where a is the membrane time constant and b is the synapse
time constant, and is graphically described in Fig. 3a.
In this model st ∈ [0, 250], i.e., the function is defined on a closed interval
and its absolute minimum and maximum values can be derived within this interval
in the following steps:
Step 2: 
Find the critical points of ε(st) in [0, 250] as follows: 
The value of (st) will be the same whether a > b or a < b. The absolute minimum
and maximum of ε(st) do not depend on the value of the input time window (tw),
the time coding (input) or the delay; thus θ_{min} and θ_{max} do not depend
on them either. The minimum and maximum values depend only on the values of the
membrane time constant (a) and the synapse time constant (b).
Step 3: 
Calculate the value of ε(st) at the critical point, where
(a = 120, b = 20), as used by Pham and Sahran (2006) 
At st = 42→ε(42) = 0.116.
Step 4: 
The absolute minimum and maximum values are (0.008) and (0.116),
respectively, on [0, 250], and the boundary for ε(st) is as in Eq. 14: 
ε(st) ∈ [ε_{min}, ε_{max}] = [0.008, 0.116] 
(14) 
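The critical-point computation in Steps 2-4 can be reproduced for any kernel defined on a closed interval: locate the stationary point, then compare the kernel's value there with its values at the interval endpoints. Since Eq. 11 itself is not reproduced above, the sketch below uses an assumed difference-of-exponentials kernel with a = 120 and b = 20; for that assumed form the stationary point is st* = ab/(a−b)·ln(a/b) ≈ 43, close to the value 42 reported in the text for the actual Eq. 11:

```python
import math

A, B = 120.0, 20.0   # membrane and synapse time constants (a = 120, b = 20)

# Assumed kernel form (Eq. 11 is not reproduced in the text):
def eps(st):
    return math.exp(-st / A) - math.exp(-st / B)

# Analytic stationary point: eps'(st) = 0  =>  st* = a*b/(a-b) * ln(a/b)
st_star = A * B / (A - B) * math.log(A / B)

# Absolute extrema on the closed interval: compare endpoints and st*
candidates = [0.0, st_star, 250.0]
values = [eps(st) for st in candidates]
print(st_star, min(values), max(values))
```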
Step 5: 
By applying the pseudo code in Fig. 2 using MATLAB, calculate the
minimum and maximum ε(st) that a hidden neuron receives from one input in
S_LVQ and then find θ_{min} and θ_{max} as in Eq. 15 and 16,
respectively: 

Fig. 4(a-b): 
The relation between the threshold boundary and the spike time
(a) in spiking learning vector quantization (S_LVQ) and (b) in self-organizing
weight adaptation spiking neural network (SOWA_SNN) 
Here the threshold boundary in S_LVQ is θ ∈ [0, 83.2583]. Figure
4a shows the relation between the threshold boundary and the spike time.
Hence, the initial and suitable threshold values will lie within this range.
Eq. 17 and 19 are the equations suggested for calculating the suitable
threshold:
Where:
Or:
Comparing the θ value used by Pham and Sahran (2006), firstly, with θ_{avg1}:
the two are very close, which supports Eq. 17 as the suitable formula for
obtaining the suitable threshold in this case; and secondly, with θ_{avg2}:
its value is far from the one Pham and Sahran (2006) used.
Threshold boundary for clustering application (SOWA_SNN): The network
architecture for the self-organizing weight adaptation spiking neural network
(SOWA_SNN) (Pham et al., 2007) consists of two layers (input and output) and is
fully connected between the input and output layers, as shown in Fig. 5. The
details of this network, which was used by Pham et al. (2007), are as follows:
N_{input} = 60 and N_{output} = 64, where N_{input} refers to the number of
input neurons and N_{output} to the number of output neurons.
At the output layer, each neuron receives N_{input} inputs from the input
layer at a time. The synapse potential for each connection can be calculated
by Eq. 20:
At the output layer, the minimum and maximum values any neuron could have are
the minimum and maximum thresholds that the SNN could be assigned,
respectively. Eq. 21 and 22 define θ_{min} and θ_{max}, respectively:
The value of N_{input} and the weight boundary [wght_{min}, wght_{max}] are
known; the boundary of ε(st) needs to be found. The absolute minimum and
maximum values of the ε(st) function can always be found mathematically, since
the Spike Time (st) parameter is defined on a closed interval, as shown in
Eq. 5 and 6.
The (st) is equal to the time constant for the excitatory spike response
function, as defined in Eq. 23:
The spike response function ε(st) used by Pham et al. (2007) for the clustering
application is given by Eq. 24 and is graphically described in Fig. 3b.
In this model st ∈ [0, 30], i.e., the function is defined on a closed interval
and its absolute minimum and maximum values can be derived within this interval
in the following steps:
Step 1: 
Derive ε'(st) 
Step 2: 
Find the critical points of ε(st) in [0, 30] as follows: 
The absolute minimum and maximum of ε(st) do not depend on the value of the
time window (tw), the time coding (input) or the delay. They depend only on the
value of the time constant of the excitatory spike response function (a). This
finding shows that the θ_{min} and θ_{max} values are not affected by these
parameters either.
Step 3: 
Calculate the value of ε(st) at the critical point, where
(a = 35), as used by Pham et al. (2007) 
At st = 35→ε(35) = 1.
Step 4: 
The absolute minimum and maximum values of ε(st) are (0)
and (0.98877), respectively, on [0, 30]; the range of ε(st) is then
as in Eq. 27: 
ε(st) ∈ [ε_{min}, ε_{max}] = [0, 0.98877] 
(27) 
Step 5: 
By applying Eq. 21 and 22, calculate the sum of ε(st) for one output neuron: 
θ_{min} = 60*0*0.3 = 0 
(28) 
θ_{max} = 60*0.98877*0.5 = 29.6631 
(29) 
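The numbers in Steps 3-5 are consistent with the standard alpha kernel ε(st) = (st/a)·exp(1 − st/a), which peaks at 1 when st = a = 35 and gives ε(30) ≈ 0.98877. Assuming Eq. 24 is of this form, the boundary computation can be checked as follows:

```python
import math

A = 35.0   # time constant of the excitatory spike response function

# Alpha kernel consistent with the reported values (assumed to be Eq. 24):
def eps(st):
    return (st / A) * math.exp(1.0 - st / A)

N_INPUT = 60
WGHT_MIN, WGHT_MAX = 0.3, 0.5   # weight boundary used in Eq. 28 and 29

# On [0, 30] the kernel is increasing (its peak is at st = a = 35 > 30),
# so the extrema sit at the interval endpoints.
eps_min, eps_max = eps(0.0), eps(30.0)

theta_min = N_INPUT * eps_min * WGHT_MIN   # Eq. 28 -> 0
theta_max = N_INPUT * eps_max * WGHT_MAX   # Eq. 29 -> about 29.6631
print(eps_max, theta_min, theta_max)
```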
Then, the threshold boundary in SOWA_SNN is θ ∈ [0, 29.6631]. Figure
4b shows the relation between the threshold boundary and the spike time.
Hence, the initial and suitable threshold values will lie within this range.
Eq. 30 and 32 are the equations suggested for calculating the suitable
threshold:
Where:
Comparing the θ value used by Pham et al. (2007), firstly, with θ_{avg2}:
the two are very close, which supports Eq. 32 as the suitable equation for
obtaining the suitable threshold in this case; and secondly, with θ_{avg1}:
its value is far from the one Pham et al. (2007) used.
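The comparison above can be checked numerically. Eq. 32 is not reproduced in the text, so the reading below, θ_{avg2} as the midpoint of the threshold boundary, is a hypothesis; it does come out very close to the value 60*0.5*0.5 = 15 used by Pham et al. (2007):

```python
# Hypothetical reading of theta_avg2 (Eq. 32 is not reproduced in the text):
# the midpoint of the threshold boundary [theta_min, theta_max].

theta_min, theta_max = 0.0, 29.6631          # boundary from Eq. 28 and 29
theta_avg2 = (theta_min + theta_max) / 2.0   # about 14.83

theta_pham = 60 * 0.5 * 0.5                  # value used by Pham et al. (2007)
print(theta_avg2, theta_pham)                # close: ~14.83 vs 15.0
```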
RESULTS AND DISCUSSION
This study outlines the input time window boundary, which leads to specifying
the spike time boundary. With regard to the input time window, it is found that
assigning it beyond the parameter boundary affects the computational cost and
performance of the network and that the delay and time coding parameters play
a significant role in assigning the time window boundary. By outlining the
input time window boundary, the number of free parameters that used to be
assigned experimentally is decreased by one, since the time window is assigned
depending only on the delay and time coding parameters, which are set by the
experimenter. Many SNN learning algorithms have been proposed for supervised
and unsupervised learning (Bohte et al., 2002a, b; Xin and Embrechts, 2001;
Pham and Sahran, 2006; Ruf and Schmitt, 1997; Sporea and Gruning, 2012;
Natschlager and Ruf, 1998; Pham et al., 2007). However, to the best of the
authors' knowledge, this is the first time that a clear discussion on outlining
the input time window parameter has been given.
Specifying the spike time range helps to determine the threshold boundary. The
results show that the threshold parameter boundary can be found for learning in
temporal-coding SNNs for classification (S_LVQ) (Pham and Sahran, 2006) and
clustering (SOWA_SNN) (Pham et al., 2007) applications; two formulas are
suggested for assigning the suitable threshold for each application and the
threshold value in the original learning algorithm is compared with the
suggested one. The results show that the suitable threshold for S_LVQ is
θ_{avg1}, as the experimental threshold value assigned by Pham and Sahran
(2006) is close to the θ_{avg1} value suggested in Eq. 17, and that the
suitable threshold value for SOWA_SNN is θ_{avg2}, as the experimental
threshold value assigned by Pham et al. (2007) is close to the θ_{avg2} value
suggested in Eq. 32. In this study, one part of the question has been solved:
the threshold boundary, beyond which the threshold cannot go, can be found
mathematically and the suitable threshold can then be selected within this
range. How to select the suitable threshold within this range remains an open
question that needs to be analyzed further in the future, to see whether the
suitable threshold can be determined depending on some other parameters.
CONCLUSION
It was concluded and shown that the θ_{min} and θ_{max} values do not depend
on the value of the input time window (tw), the time coding (input) or the
delay. Rather, they depend on the value of the membrane time constant (tce),
the synapse time constant (tci), N_{input}, N_{hidden}, N_{sub} and N_{output}
in the S_LVQ learning algorithm proposed by Pham and Sahran (2006) and on the
value of the time constant of the excitatory spike response function (a),
N_{input} and N_{output} in the SOWA_SNN learning algorithm proposed by
Pham et al. (2007).
ACKNOWLEDGMENTS
This research was carried out with support from the Science Fund Grant 010102SF0694
and Center for Artificial Intelligence, Faculty of Information Science and Technology,
UKM Bangi, Malaysia.