# Connection Oriented Networks - Perros H.G

ISBN 0-470-02163-2


Most of the CAC algorithms that have been proposed are based solely on the cell loss rate QoS parameter. That is, the decision to accept or reject a new connection is based on whether the switch can provide the new connection with the requested cell loss rate without affecting the cell loss rate of the existing connections. No other QoS parameters, such as the peak-to-peak cell delay variation and the maximum cell transfer delay (maxCTD), are considered by these algorithms. A very popular example of this type of algorithm is the equivalent bandwidth, described below.

CAC algorithms based on the cell transfer delay have also been proposed. In these algorithms, the decision to accept or reject a new connection is based on a calculated absolute upper bound of the end-to-end delay of a cell. These algorithms are closely associated with specific scheduling mechanisms, such as static priorities, earliest deadline first, and weighted fair queueing. Given that the same scheduling algorithm runs on all of the switches in the path of a connection, it is possible to construct an upper bound of the end-to-end delay. If this is less than the requested end-to-end delay, then the new connection is accepted.
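As a concrete illustration of the delay-based approach, the sketch below admits a connection against the classic Parekh-Gallager end-to-end delay bound for weighted fair queueing, assuming the source is constrained by a leaky bucket with burst tolerance sigma (bits) and is guaranteed a rate g at every hop. The parameter names and the `admit` helper are illustrative, not from the text.

```python
CELL_BITS = 53 * 8  # an ATM cell is 53 bytes

def wfq_delay_bound(sigma, g, link_rates):
    """Worst-case end-to-end delay (seconds) for a leaky-bucket source
    with burst tolerance sigma (bits), guaranteed a rate g (bps) by the
    WFQ scheduler at each hop; link_rates are the link capacities (bps).
    With fixed-size ATM cells, the per-flow and per-link packet terms
    both use the 53-byte cell length."""
    hops = len(link_rates)
    return (sigma / g
            + (hops - 1) * CELL_BITS / g
            + sum(CELL_BITS / c for c in link_rates))

def admit(sigma, g, link_rates, max_ctd):
    """Accept the new connection only if the calculated upper bound is
    less than the requested end-to-end delay."""
    return wfq_delay_bound(sigma, g, link_rates) <= max_ctd
```

For a 10-cell burst guaranteed 1 Mbps across three 25 Mbps links, the bound is about 5.1 ms, so a 10 ms delay request is admitted and a 4 ms request is not.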

Below, we examine the equivalent bandwidth scheme and then we present the ATM block transfer (ABT) scheme used for bursty sources. In this scheme, bandwidth is allocated on demand and only for the duration of a burst. Finally, we present a scheme for controlling the amount of traffic in an ATM network based on virtual path connections (VPC).

4.6.2 Equivalent Bandwidth

Let us consider a finite-capacity queue served by a server at a rate μ. This queue can be seen as representing an output port and its buffer in a non-blocking switch with output buffering. Assume that this queue is fed by a single source, and let us calculate the source's equivalent bandwidth. If we set μ equal to the source's peak bit rate, then we will observe no accumulation of cells in the buffer, because cells arrive only as fast as they are transmitted out. If we slightly reduce the service rate μ, then cells begin to accumulate in the buffer. If we reduce the service rate a little more, then the buffer occupancy will increase further. If we keep repeating this experiment, each time slightly lowering the service rate, then at some point the cell loss rate begins to increase. The equivalent bandwidth of the source is defined as the service rate e at which the queue must be served so that the cell loss rate equals a target value ε. The equivalent bandwidth of a source falls somewhere between its average bit rate and its peak bit rate. If the source is very bursty, it is closer to its peak bit rate; otherwise, it is closer to its average bit rate. Note that the equivalent bandwidth of a source is not related to the source's SCR.
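This thought experiment can be mimicked numerically. The sketch below is a deliberately simplified, deterministic fluid caricature (a periodic on/off source rather than a stochastic one, an assumption of mine for illustration): it bisects on the service rate until the loss ratio just drops to the target ε.

```python
def fluid_loss_ratio(c, R, on, off, K):
    """Fraction of fluid lost per cycle for a deterministic periodic
    on/off source (rate R for `on` seconds, silent for `off` seconds)
    feeding a K-bit buffer drained at rate c. Assumes the buffer fully
    empties during each off period."""
    if c >= R:
        return 0.0
    overflow = max(0.0, (R - c) * on - K)
    return overflow / (R * on)

def equivalent_bandwidth_search(R, on, off, K, eps, tol=1.0):
    """Bisect on the service rate, between the mean and peak bit rates,
    to find the smallest rate whose loss ratio does not exceed eps --
    the 'equivalent bandwidth' of this toy source."""
    lo, hi = R * on / (on + off), R  # loss(lo) > eps, loss(hi) = 0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if fluid_loss_ratio(mid, R, on, off, K) <= eps:
            hi = mid
        else:
            lo = mid
    return hi
```

As the text predicts, the answer lands between the average and peak bit rates, and close to the peak when the buffer is small relative to the burst size.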


CONGESTION CONTROL IN ATM NETWORKS

There are various approximations that can be used to quickly compute the equivalent bandwidth of a source. A commonly used approximation is based on the assumption that the source is an interrupted fluid process (IFP). An IFP is characterized by the triplet (R, r, b), where R is its peak bit rate; r is the fraction of time the source is active, defined as the mean length of the on period divided by the sum of the mean on and off periods; and b is the mean duration of the on period. Assume that the source feeds a finite-capacity queue with a constant service time, and let K be the size of the queue expressed in bits. The service time is equal to the time it takes to transmit a cell. Then, the equivalent bandwidth e is given by the expression:

e = R (a - K + sqrt((a - K)² + 4Kar)) / (2a)

where a = b(1 - r)R ln(1/ε), and ε is the target cell loss rate.
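The expression translates directly into code. A minimal sketch, assuming the standard fluid-flow approximation of Guérin et al. with the parameters defined above (function name is mine):

```python
import math

def equivalent_bandwidth(R, r, b, K, eps):
    """Equivalent bandwidth of an IFP source under the fluid-flow
    approximation.  R: peak bit rate (bps); r: fraction of time active;
    b: mean on-period duration (s); K: buffer size (bits);
    eps: target cell loss rate."""
    a = b * (1 - r) * R * math.log(1 / eps)
    return R * (a - K + math.sqrt((a - K) ** 2 + 4 * K * a * r)) / (2 * a)
```

For example, a 10 Mbps-peak source active half the time, with 0.1 s mean bursts, a 200-cell (84,800-bit) buffer, and ε = 10⁻⁶, yields an equivalent bandwidth between its 5 Mbps average and 10 Mbps peak, and close to the peak because the buffer is small relative to the mean burst.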

The equivalent bandwidth of a source is used in statistical bandwidth allocation in the same way that the peak bit rate is used in nonstatistical bandwidth allocation. For instance, let us consider an output link of a non-blocking switch with output buffering, and let us assume that it has a transmission speed of 25 Mbps and its associated buffer has a capacity of 200 cells. Assume that no connections are currently routed through the link. The first setup request that arrives is for a connection that requires an equivalent bandwidth of 5 Mbps. The connection is accepted, and the link now has 20 Mbps available. The second setup request arrives while the first connection is still up and is for a connection that requires 10 Mbps. The connection is accepted and 10 Mbps are reserved, leaving 10 Mbps free. If the next setup request is for a connection that requires more than 10 Mbps and arrives while the first two connections are still active, then the new connection is rejected.
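The bookkeeping in this example is just running-sum admission. A minimal sketch replaying the 25 Mbps scenario (the helper name is mine):

```python
def admit_connection(capacity, allocated, request):
    """Accept the new connection iff its equivalent bandwidth still
    fits on the link; return the decision and the updated allocation."""
    if allocated + request <= capacity:
        return True, allocated + request
    return False, allocated

cap = 25e6                                       # 25 Mbps link
ok1, alloc = admit_connection(cap, 0.0, 5e6)     # accepted; 20 Mbps left
ok2, alloc = admit_connection(cap, alloc, 10e6)  # accepted; 10 Mbps left
ok3, alloc = admit_connection(cap, alloc, 12e6)  # rejected; needs more than 10 Mbps
```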

This method of simply adding up the equivalent bandwidth requested by each connection can lead to underutilization of the link. That is, more bandwidth might be allocated for all of the connections than is necessary. The following approximation for the equivalent bandwidth of N sources corrects the over-allocation problem:
