INTRODUCTION
Voice, video and an increasing variety of data sessions require upper bounds on delay and lower bounds on rate^{1-4}. In the context of service provisioning, the Internet supports the guaranteed and controlled-load services^{5}. While the guaranteed services class is designed for real-time traffic that needs guaranteed maximum end-to-end delays, the controlled-load services class is designed for traffic that does not require this guarantee but is, nevertheless, sensitive to overloaded networks and to the danger of losing packets. File transfer sessions are examples of traffic that does not need a real guarantee, while real-time audio and video transfers are examples of traffic that does. The two classes of services operate on the principle of 'admission control', in which a flow that is set up must conform to the IETF's (Internet Engineering Task Force's) TSPEC. The admission control principle operates as follows^{5-7}:
• In order to receive either type of service, a flow must first perform a reservation during a flow setup
• A flow must conform to an arrival curve of the form σ(t) = min(M + pt, rt + b). In Intserv jargon, the 4-tuple (p, M, r, b) is called a TSPEC, or traffic specification. In other words, the TSPEC is declared during the reservation phase. Here, M = maximum packet size, p = peak rate, b = burst tolerance and r = sustainable rate
• All routers (and, in most cases, layer 3 switches) along the path accept or reject the reservation. With the guaranteed service, routers accept the reservation only if they are able to provide a service curve guarantee and enough buffer for loss-free operation. The service curve is expressed during the reservation phase
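As a concrete illustration of the arrival curve above, the following sketch (all TSPEC numbers are hypothetical, chosen only for the example) evaluates σ(t) = min(M + pt, rt + b) at short and long time scales:

```python
def tspec_arrival_curve(t, M, p, r, b):
    """Upper bound, in bits, on the traffic a TSPEC-conforming flow may send
    in any window of length t seconds: sigma(t) = min(M + p*t, b + r*t).
    M: max packet size (bits), p: peak rate (bps),
    r: sustainable rate (bps), b: burst tolerance (bits)."""
    return min(M + p * t, b + r * t)

# Hypothetical TSPEC: 12 kbit max packet, 10 Mbps peak,
# 1 Mbps sustainable rate, 100 kbit burst tolerance
M, p, r, b = 12_000, 10e6, 1e6, 100_000

# At short time scales the peak-rate branch M + p*t is the binding constraint
print(tspec_arrival_curve(0.001, M, p, r, b))  # 22000.0 bits
# At longer time scales the sustained-rate branch b + r*t takes over
print(tspec_arrival_curve(1.0, M, p, r, b))    # 1100000.0 bits
```

The crossover between the two branches is where the peak-rate and sustained-rate constraints intersect, which is why the TSPEC needs all four parameters rather than a single rate.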
Similarly, Carrier Ethernet service providers define a Bandwidth Profile, which allows the service providers to bill for bandwidth usage and engineer their networks' resources to provide performance assurances for in-profile Service Frames. The Committed Information Rate (CIR), which is defined as the average rate in bits/sec that a network is committed to deliver frames over a predetermined time period, is one of the parameters of this profile. Therefore, a major issue that has to be dealt with by both service providers and managers of organizations' Local and/or Campus Area Networks is agreeing on a value for this parameter when a communication session is to be set up. The challenge, therefore, is: How would a flow's r as stated previously, or what is termed ρ, the long-term average rate of traffic flow for a communication session^{8-10}, also known as the committed information rate in the context of the bandwidth profile for Carrier Ethernet, be determined? We have not seen any empirical approach or formula for determining this parameter reported in the literature; that is, there does not seem to be any available, generally known method or formula for determining and assigning a value for this parameter. For example, the first step in the end-to-end delay algorithm enumerated by Georges et al.^{11} states: Identify all streams on each station and determine the initial leaky bucket values (ρ is one of the leaky bucket parameters). The paper did not, however, give any empirical method for initially determining ρ. In this study, the authors report the derivation of an empirical formula that can be used to determine this parameter.
METHODOLOGY
The method adopted in this study is neither analytical nor experimental but emphasizes deriving a practically utilizable formula for computing the parameter ρ, the average rate of any data communication traffic flow, which is the same as the CIR (Committed Information Rate). According to Cruz^{10}, if R(t) represents the instantaneous rate of traffic flow to a network element, then ∀ x, y with y > x > 0, the cumulative function ∫_x^y R(t)dt is defined as the number of bits seen on a flow in the interval [x, y]. Furthermore, Cruz^{10} stated that:

∫_x^y R(t)dt ≤ σ + ρ(y−x)  (1)

where, Eq. 1 implies that there is an upper bound on the amount of traffic contained in any interval [x, y] that is equal to a constant σ plus a quantity that is proportional to the length of the interval. The constant of proportionality ρ determines an upper bound to the long-term average rate of traffic flow, if such an average rate exists^{10}. Taking the upper bound in Eq. 1, we have:

∫_x^y R(t)dt = σ + ρ(y−x)  (2)
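A finite arrival trace can be checked against this (σ, ρ) bound directly; the sketch below (the trace format and all numbers are illustrative assumptions, not taken from the paper) tests every interval [x, y] of the trace against σ + ρ(y−x):

```python
def conforms_sigma_rho(arrivals, sigma, rho):
    """Check Cruz's (sigma, rho) bound of Eq. 1: for every interval [x, y],
    the bits arriving in [x, y] must not exceed sigma + rho*(y - x).
    `arrivals` is a list of (time_sec, bits) pairs, a hypothetical trace format."""
    for i, (x, _) in enumerate(arrivals):
        total = 0
        for (y, bits) in arrivals[i:]:
            total += bits
            if total > sigma + rho * (y - x):
                return False
    return True

# A three-packet burst of 8000 bits each at t = 0, 0.1 and 0.2 s
trace = [(0.0, 8000), (0.1, 8000), (0.2, 8000)]
print(conforms_sigma_rho(trace, sigma=16_000, rho=50_000))  # True
print(conforms_sigma_rho(trace, sigma=8_000,  rho=10_000))  # False
```

The second call fails because the interval [0, 0.1] already carries 16000 bits, exceeding σ + ρ(0.1) = 9000: a smaller burst allowance σ must be compensated by a larger sustained rate ρ, and vice versa.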
Equation 3 was shown^{12} to give the maximum delay of a Protocol Data Unit (PDU) through a node (switch or router):
Where:
D_{jmax} = Maximum delay in sec for a PDU to cross node j, having N input/output ports
C_{i}, i = 1, 2, 3, …, N = Bit rates of ports 1, 2, 3, …, N in bps = channel rates of input ports in bps
C_{out} = Bit rate of the N^{th} output link in bps = output port (line) rate of the N^{th} port
C_{N−1} = Bit rate of the (N−1)^{th} input port in bps
L = Maximum length in bits of a PDU; for example, a packet for packet-switched networks, a cell for an ATM network
σ_{j} = Maximum amount of traffic in bits that can arrive in a burst to an input port of node j
This equation is composed of four components: the maximum PDU forwarding delay, the maximum PDU routing or switching delay (RSD) plus the simultaneous-arrivals-of-PDUs delay (SAD), the maximum PDU queuing delay and L/C_{out} = the maximum PDU transmission delay. The first two delays, that is, the PDU forwarding delay and the PDU routing and switching delay, constitute what is generally called processing delay, which is the time required for nodal equipment to perform the necessary processing and switching^{13,14} of a PDU.
Eyinagho^{15} reduced Eq. 3 to Eq. 4:

D_{jmax} = σ_{j}/C_{out} + 5L/(2C_{out})  (4)

where, Eq. 4 gives the maximum delay of a PDU across a node j and the intercept on the delay axis, 5L/2C_{out} (as shown in Fig. 1), gives the minimum delay (D_{min}) of a PDU across the node.
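Assuming the reduced per-node form just described (delay growing linearly in the burst size σ_{j} with slope 1/C_{out} and delay-axis intercept 5L/(2C_{out})), the node delay bound can be sketched as follows; all numeric inputs are hypothetical:

```python
def node_max_delay(sigma_j, L, C_out):
    """Maximum delay (s) of a PDU across node j under the reduced linear
    form: slope 1/C_out in the burst size and intercept 5L/(2*C_out).
    sigma_j: max burst (bits), L: max PDU length (bits), C_out: port rate (bps)."""
    return sigma_j / C_out + 5 * L / (2 * C_out)

def node_min_delay(L, C_out):
    """Minimum delay of a PDU across the node: the intercept at sigma_j = 0."""
    return 5 * L / (2 * C_out)

# Hypothetical node: 12240-bit maximum frame, 100 Mbps output port, 50 kbit burst
print(node_min_delay(12_240, 100e6))          # 0.000306 s
print(node_max_delay(50_000, 12_240, 100e6))  # ~0.000806 s
```

This matches Fig. 1: the delay bound is a straight line in the burst traffic arriving at the node, so reducing either the burst size or the PDU length, or raising the port rate, tightens the bound.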
Assume that a PDU crosses M nodes in transiting from the source of the PDU to the destination of the PDU. Let D_{1max}, D_{2max}, ..., D_{Mmax} be the maximum delays of a PDU in node 1, node 2, ... and node M, respectively, as shown in Fig. 2, with:

D_{jmax} = σ_{j}/C_{out} + 5L/(2C_{out}), j = 1, 2, ..., M

Then the total maximum end-to-end delay (D_{totmax}) is given by Eq. 5:

D_{totmax} = D_{1max} + D_{2max} + ... + D_{Mmax}  (5)

so that:

D_{totmax} = σ_{tot}/C_{out} + 5ML/(2C_{out})  (6)

where, σ_{1} + σ_{2} + ... + σ_{M} = σ_{tot} = the total of the maximum burst traffic that can be held by the nodes on the origin-to-destination path.

Fig. 1:  Delay versus burst traffic arrivals to a node (switch or router) 

Fig. 2:  Origin-destination of a PDU traversing M nodes 

Fig. 3:  Burstiness evolution along a switched network 
But Cruz^{10} enunciated and proved that, if D_{max} = the maximum delay offered by a network element to a traffic stream, R_{in} = the rate of the traffic stream as it enters the network element, R_{out} = the rate of the traffic stream as it exits the network element and R_{in}~b_{in} with D_{max}<+∞, then R_{out}~b_{out}, where b_{out}(x) is defined for all x>0 as:

b_{out}(x) = b_{in}(x + D_{max})  (7)
As an example of Eq. 7, if:

b_{in}(x) = σ + ρx, then b_{out}(x) = (σ + ρD_{max}) + ρx  (8)
This is known as the burstiness evolution concept. The intuition that motivates this concept is that, if D_{max} is small, the output stream will closely resemble the input stream and cannot be much more bursty than the input stream. According to this concept, also expounded by Georges et al.^{11}, if the traffic entering a network is not too bursty, the traffic flowing in the network will also not be too bursty. This is illustrated in Fig. 3. Given, therefore, a method for calculating the upper bounds for delay in various types of network elements, the importance of Eq. 7 is that it gives a basis for analyzing the network as a whole. Cruz^{10} stated further that Eq. 7 can be improved upon if the structure of the network element is known. This has already been done for switching nodes (switches and routers), as indicated by Eq. 4.
Applying Eq. 8 to Fig. 3, therefore, gives the following set of burstiness evolution equations:

σ^{2} = σ^{1} + ρ^{1}D_{1max}
σ^{3} = σ^{2} + ρ^{2}D_{2max}
⋮
σ^{M} = σ^{M−1} + ρ^{M−1}D_{(M−1)max}  (9)
From the set of Eq. 9, it is seen that ρ^{1} = ρ^{2} = ρ^{3} = ⋯ = ρ^{M−1} = ρ^{M} = ρ = the minimum rate required to serve the flow according to the TSPEC.
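The burstiness evolution along the path can be sketched as an iteration in which each node adds ρ·D_{jmax} to the burst bound while ρ itself is unchanged at every hop; the numbers below are hypothetical:

```python
def evolve_burstiness(sigma_in, rho, node_delays):
    """Track the burst bound sigma along a path: after a node with maximum
    delay D, a (sigma, rho)-constrained flow is (sigma + rho*D, rho)-constrained
    (the burstiness evolution result). rho is unchanged at every hop."""
    sigmas = [sigma_in]
    for D in node_delays:
        sigmas.append(sigmas[-1] + rho * D)
    return sigmas

# Hypothetical flow: 10 kbit initial burst, rho = 1 Mbps,
# three nodes each with a 2 ms maximum delay
print(evolve_burstiness(10_000, 1e6, [0.002, 0.002, 0.002]))
# [10000, 12000.0, 14000.0, 16000.0]
```

Because the per-hop increment is ρ·D_{jmax}, small per-node delay bounds keep the downstream traffic close to the input stream, which is exactly the intuition behind Fig. 3.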
It is, therefore, clear that (with or without traffic shaping) the burst σ going into node 1 would have evolved to σ_{total} (σ_{tot}) at node M. Hence, let R_{in} = the rate of the traffic stream into node 1 and R_{out} = the rate of the traffic stream out of node M (it should be noted that we are interested in a single traffic stream traversing from node 1 to node M, although there may be other traffic streams passing through these nodes), as shown in Fig. 2. If t = the total of the maximum delays of all the nodes from the source of the traffic stream to the destination of the stream = D_{totmax} and C_{out} = the port rate of the output port through which the traffic exits node M to the destination host, then from Eq. 2:

∫_0^{D_{totmax}} R(t)dt = σ_{tot} + ρD_{totmax}  (10)
where, the interval of integration on the left of Eq. 10 is the maximum end-to-end delay time from the source of the traffic stream to its destination.
Dividing Eq. 10 by C_{out} results in Eq. 11:

(1/C_{out})∫_0^{D_{totmax}} R(t)dt = σ_{tot}/C_{out} + ρD_{totmax}/C_{out}  (11)
It can be seen that the left-hand side of Eq. 11 gives the maximum end-to-end delay that a PDU will encounter from the source of the traffic stream to its destination, because the integral gives the maximum number of bits that have traversed from the origin node to the destination node during this maximum end-to-end delay time and dividing this maximum number of bits by the output port rate of node M gives the maximum end-to-end delay time.
Therefore, the left-hand sides of Eq. 6 and 11 are the same quantity. Hence, comparing Eq. 6 with Eq. 11, we have:

ρ = 5ML/(2D_{totmaxa})  (12)

where:
D_{totmaxa} = One-way maximum end-to-end delay of an application; for example, a VOIP (Voice over Internet Protocol) application
Equation 12, therefore, gives the minimum rate (in bits/sec) required to serve the flow according to the TSPEC, or the CIR in the context of the bandwidth profile specification of Carrier Networks (for example, Carrier Ethernet Networks). Therefore, knowing the number of nodes M that the traffic stream of an application will traverse from the origin of the stream to its destination; L, the maximum length of a PDU and D_{totmaxa}, the one-way maximum end-to-end delay specification of the application, ρ can be intelligently determined and communicated by the source of the flow to all routers (and/or layer 3 switches) along the route.
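Under this derivation, ρ can be computed directly from M, L and the application's delay bound; the following is a minimal sketch, assuming Eq. 12 takes the form ρ = 5ML/(2D_{totmaxa}), which is what comparing the per-node bound with the end-to-end bound yields (the example flow parameters are hypothetical):

```python
def committed_information_rate(m_nodes, l_bits, d_totmax_ms):
    """Minimum sustainable rate rho (bps) to serve a flow whose PDUs of at
    most l_bits bits cross m_nodes switches/routers within a one-way delay
    bound of d_totmax_ms milliseconds, assuming rho = 5*M*L/(2*D_totmax)."""
    return 5 * m_nodes * l_bits * 1000 / (2 * d_totmax_ms)

# Hypothetical flow: extended Ethernet frames (12240 bits) over a
# 3-node path with a 100 ms one-way delay bound (ITU-T Y.1541 class 0)
print(committed_information_rate(3, 12_240, 100))  # 918000.0 bps (918 kbps)
```

Note that ρ scales linearly with both the hop count and the PDU length, and inversely with the delay budget, so the same application needs a larger CIR over a longer path.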
RESULTS
Illustrative numerical example: The basic objective of this study is to obtain the result given by Eq. 12. The application of this equation is now illustrated by extracting the network delay-based QoS parameters for two types of applications from ITU-T Recommendation Y.1541. They are class 0 and class 1 real-time, highly interactive applications, like VOIP and video conferencing. The class 0 one-way network delay performance objective is 100 ms and that of class 1 is 400 ms. Taking the extended Ethernet frame illustrated in Fig. 4 as our PDU, L = 7+1+6+6+4+2+1500+4 = 1530 bytes = 1530×8 bits = 12240 bits.
Tabulated ρ values for origin-destination paths with 1, 2, 3, 4 and 5 nodes (routers or layer 3 switches), denoted as ρ_{1}, ρ_{2}, ρ_{3}, ρ_{4} and ρ_{5}, respectively, are shown in Table 1, which is briefly discussed in the next subsection.
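The Table 1 entries can be regenerated from the same relation; the sketch below computes ρ_{1} through ρ_{5} for the class 0 (100 ms) and class 1 (400 ms) delay bounds, assuming ρ = 5ML/(2D_{totmax}), so the published values should be checked against these computed figures:

```python
# All values follow from the assumed relation rho = 5*M*L/(2*D_totmax)
L_BITS = 12_240  # extended Ethernet frame: 1530 bytes x 8 bits

def rho_values(d_totmax_ms, max_nodes=5):
    """rho_1 .. rho_max_nodes (bps) for origin-destination paths of
    1 .. max_nodes nodes under a one-way delay bound of d_totmax_ms."""
    return [5 * m * L_BITS * 1000 / (2 * d_totmax_ms)
            for m in range(1, max_nodes + 1)]

print(rho_values(100))  # class 0: [306000.0, 612000.0, 918000.0, 1224000.0, 1530000.0]
print(rho_values(400))  # class 1: [76500.0, 153000.0, 229500.0, 306000.0, 382500.0]
```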
DISCUSSION
As shown in Table 1, it can be seen that, for the same maximum delay bound, the longer the path between the source and destination of a communication flow (that is, the greater the number of nodes on the origin-destination path), the higher the minimum bit rate that should be specified for the flow. The higher the maximum delay bound, the smaller the minimum bit rate that should be specified. It has been canvassed^{16} that the CIR (Committed Information Rate), which is the same as the average rate of a data communication traffic flow, should be carefully engineered for a network to handle it efficiently.
Table 1:  ITU-T Y.1541 delay-bound QoS values/equivalent origin-destination ρ values 



Fig. 4:  Extended Ethernet frame format with tag field (IEEE 802.1Q field) 
No suggestion was given as to how it should be engineered; it seems that a practically utilizable formula for engineering this parameter is not available in the literature. It is in this context that this study is of prime importance.
CONCLUSION
A major issue that has to be dealt with by both service providers and managers of organizations' Local and/or Campus Area Networks is agreeing on a value for the minimum data rate required to serve a connection or flow, also referred to as the 'Committed Information Rate' (CIR) in the context of bandwidth profiles of Carrier Networks, or the long-term average rate of a data communication traffic flow, ρ. Generally known methods or formulas for determining and assigning a value for this parameter are, however, not available in the literature; so it seems from our search so far of the literature of computer and data communication and networking. The work reported in this study is an attempt to proffer a formula for computing this parameter, which, to a good extent, has been achieved.
SIGNIFICANCE STATEMENT
This study has shown that the average rates of data communication traffic flows can be practically computed by communicating applications in real time. The formula that is reported in this paper will be important for solving one of the performance problems of computer and data communication networks. The fact that the formula is practically utilizable is very significant, as no such formula, to the best of our knowledge, previously existed. The significance of the formula can be appreciated when we consider the fact that understanding network performance has been more of an art than a science, that there has been very little underlying theory that is actually of any use in practice and that the best that could be done was to apply rules of thumb, gained from hard experience, using examples taken from the real world. The approach adopted in this study, therefore, sought to promote the merger of some level of mathematical elegance with practically important techniques. The authors' formula for engineering the parameter ρ, or CIR, will serve as a basis and impetus for further research on this problem, with a view to probably coming up with a better formula or formulas.