congestion in the network. Rather, the user is expected to detect network congestion through a mechanism such as TCP, and to adapt its transmission rate accordingly.
The following traffic parameters are specified: PCR, MCR, maximum burst size (MBS) and maximum frame size (MFS). The CLR for the frames that are eligible for the service guarantee is expected to be low. Depending upon the network, a value for the CLR can be specified.
4.3.7 ATM Transfer Capabilities
In the ITU-T standard, the ATM service categories are referred to as ATM transfer capabilities. Some of the ATM transfer capabilities are equivalent to ATM Forum’s service categories, but they have a different name. The CBR service is called the deterministic bit rate (DBR) service, the RT-VBR service is called the real-time statistical bit rate (RT-SBR) service, and the NRT-VBR service is called the non-real-time statistical bit rate (NRT-SBR) service. The UBR service category has no equivalent ATM transfer capability. Both the ABR and GFR services were standardized by ITU-T. Finally, the ITU-T ATM transfer capability ATM block transfer (ABT), described in Section 4.6.3, has no equivalent service category in the ATM Forum standards.
4.4 CONGESTION CONTROL
Congestion control procedures can be grouped into the following two categories: preventive control and reactive control.
In preventive congestion control, as its name implies, we attempt to prevent congestion from occurring. This is achieved using the following two procedures: call admission control (CAC), also referred to as connection admission control, and bandwidth enforcement. CAC is exercised at the connection level; it determines whether or not to accept a new connection. Once a new connection has been accepted, bandwidth enforcement is exercised at the cell level to ensure that the source transmitting on this connection stays within its negotiated traffic parameters.
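As a rough illustration of cell-level bandwidth enforcement, the Python sketch below polices an arriving cell stream with a simple leaky bucket. The class name, parameters, and the unit-rate drain are illustrative assumptions; the policing mechanism actually used in ATM is more elaborate.

```python
class LeakyBucketPolicer:
    """Toy cell-level policer: the bucket fills by `increment` per
    accepted cell and drains at unit rate between arrivals; a cell
    arriving when the bucket exceeds `limit` is non-conforming.
    (Illustrative sketch only, not the standardized algorithm.)"""

    def __init__(self, increment, limit):
        self.increment = increment   # nominal cell spacing, e.g. 1/PCR
        self.limit = limit           # tolerance for cell delay variation
        self.bucket = 0.0
        self.last_arrival = None

    def conforming(self, t):
        """Return True if a cell arriving at time t is within the
        negotiated traffic parameters."""
        if self.last_arrival is not None:
            # The bucket drains at unit rate between arrivals.
            self.bucket = max(0.0, self.bucket - (t - self.last_arrival))
        self.last_arrival = t
        if self.bucket > self.limit:
            return False             # violating cell: tag or discard
        self.bucket += self.increment
        return True
```

Cells spaced at least `increment` apart always conform; a cell following its predecessor more tightly than the tolerance allows is flagged as non-conforming.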
Reactive congestion control is based on a different philosophy than preventive congestion control. In reactive congestion control, the network uses feedback messages to control the amount of traffic that an end device transmits so that congestion does not arise.
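In the spirit of such feedback schemes, the sketch below adjusts a source's rate additively upward when no congestion is reported and multiplicatively downward when it is, keeping the rate between a minimum and a peak. The parameter names and values are hypothetical and do not come from the ABR standard.

```python
def adapt_rate(rate, congested, mcr=1.0, pcr=100.0,
               increase=5.0, decrease_factor=0.5):
    """Hypothetical additive-increase/multiplicative-decrease rule:
    back off sharply on a congestion indication, probe gently for
    bandwidth otherwise, and clamp the result to [mcr, pcr]."""
    if congested:
        rate *= decrease_factor      # multiplicative decrease
    else:
        rate += increase             # additive increase
    return min(max(rate, mcr), pcr)  # never below MCR, never above PCR
```

For example, a source sending at 50 units that receives a congestion indication drops to 25, while one that receives none climbs to 55.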
In the remaining sections of the chapter, we examine in detail various preventive and reactive congestion control schemes.
4.5 PREVENTIVE CONGESTION CONTROL
As mentioned above, preventive congestion control involves the following two procedures: call admission control (CAC) and bandwidth enforcement. CAC is used by the network to decide whether to accept a new connection or not.
As we have seen so far, ATM connections can be either permanent virtual connections (PVC) or switched virtual connections (SVC). A PVC is established manually by a network administrator using network management procedures, whereas an SVC is established in real-time by the network using the signaling procedures described in Chapter 5.
In our discussion below, we will consider a point-to-point SVC. Recall that point-to-point connections are bidirectional. The traffic and QoS parameters can be different for each direction of the connection.
CALL ADMISSION CONTROL (CAC)
Let us assume that an end device, referred to as end device 1, wants to set up a connection to a destination end device, referred to as end device 2. A point-to-point SVC is established between the two end devices as follows. End device 1 sends a SETUP message to its ingress switch (let us call it switch A), requesting that a connection be established to end device 2. Using a routing algorithm, the ingress switch calculates a path through the network to the switch to which the destination end device is attached (let us call it switch B). It then forwards the setup request to its next-hop switch, which in turn forwards it to its next-hop switch, and so on, until the request reaches switch B. Switch B sends the setup request to end device 2, and if it is accepted, a confirmation message is sent back to end device 1.
The SETUP message, as will be seen in Chapter 5, contains a variety of different types of information, including values for the traffic and QoS parameters. Each switch in the path uses this information to decide whether to accept or reject the new connection. The decision is based on the following two questions:
• Will the new connection affect the QoS of the existing connections already carried by the switch?
• Can the switch provide the QoS requested by the new connection?
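The hop-by-hop nature of this decision can be sketched as follows, where `admits` stands for a hypothetical per-switch CAC function that answers both questions above for a given setup request.

```python
def route_setup(path, admits, request):
    """Forward a SETUP request along the computed path of switches;
    the connection is established only if every switch on the path
    admits it (illustrative sketch of the hop-by-hop procedure)."""
    for switch in path:
        if not admits(switch, request):
            return False   # rejected: the setup is released back to the source
    return True            # SETUP reaches switch B and is offered to end device 2
```

For instance, `route_setup(["A", "C", "B"], lambda s, r: s != "C", request)` fails because the intermediate switch C rejects the connection.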
As an example, let us consider a non-blocking ATM switch with output buffering (see Figure 3.10), and let us assume that the QoS is measured by the cell loss rate. Typically, the traffic that a specific output port sees is a mixture of different connections that enter the switch from different input ports. Let us assume that, so far, the switch provides a cell loss rate of 10⁻⁶ for each existing connection routed through this output port, and that the new connection also requests a cell loss rate of 10⁻⁶. What the switch has to decide is whether the cell loss rate for both the existing connections and the new connection will remain at 10⁻⁶ once the new connection is admitted. If the answer is yes, then the switch can accept the new connection. Otherwise, it will reject it.
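A toy version of this admission test is sketched below. It estimates the cell loss rate at the output port with the crude tail approximation P(loss) ≈ ρ^K for a buffer of K cells at utilization ρ; this model and its parameters are stand-ins, since real CAC algorithms rely on far more refined traffic models.

```python
def admit_connection(existing_load, new_load, buffer_size, target_clr=1e-6):
    """Accept the new connection only if the estimated cell loss
    rate at the output port stays at or below the target.
    Toy model: loss probability ~ rho**K for combined utilization
    rho and a buffer of K cells (an illustrative assumption)."""
    rho = existing_load + new_load       # combined port utilization
    if rho >= 1.0:
        return False                     # overloaded port: reject outright
    return rho ** buffer_size <= target_clr
```

With a 50-cell buffer, adding a 0.1 load to an existing 0.4 load keeps the estimated loss far below 10⁻⁶, so the connection is accepted; pushing the utilization to 0.95 does not, and the connection is rejected.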