Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu, C. Load Balancing Servers, Firewalls and Caches. Wiley Computer Publishing, 2002. 123 p.
ISBN 0-471-41550-2

Active-Active Configuration
Let’s consider the case in which each VIP serves all applications and we use DNS round-robin to distribute the load between the two VIPs. Suppose we bind each VIP to all of the real servers. Each load balancer has a different gateway IP address, but each server can be configured with only one default gateway. If we set the default gateway for all servers to gateway IP1, then all reply traffic goes back to load balancer 1, regardless of which load balancer processed the request traffic. We must then deal with asymmetric traffic flow just as in the one-arm design: either use source NAT to force the reply packets back through the correct load balancer, or use DSR to allow asymmetric traffic flow. One way to avoid this situation is to bind the VIPs differently. Let’s bind VIP1 to RS1 and RS2, and VIP2 to RS3 and RS4, then set the default gateway for RS1 and RS2 to gateway IP1, and for RS3 and RS4 to gateway IP2. We have effectively divided the servers into two groups, with one group managed by each load balancer. All reply traffic from each server goes through the correct load balancer, avoiding the need for source NAT or DSR. If load balancer 1 fails, load balancer 2 takes over service for VIP1 as well as gateway IP1.
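The partitioned design above can be sketched in a few lines of Python. This is a simplified simulation, not vendor configuration; the names (VIP1, LB1, RS1, and so on) follow the text, and round-robin within each server group is an assumption for illustration.

```python
from itertools import cycle

# Partitioned active-active design from the text: VIP1 (on load balancer
# LB1) is bound to RS1/RS2, VIP2 (on LB2) to RS3/RS4.  Each server's
# default gateway is its own group's load balancer, so replies always
# return through the unit that handled the request.
VIPS = {
    "VIP1": {"lb": "LB1", "servers": cycle(["RS1", "RS2"])},
    "VIP2": {"lb": "LB2", "servers": cycle(["RS3", "RS4"])},
}
GATEWAY = {"RS1": "LB1", "RS2": "LB1", "RS3": "LB2", "RS4": "LB2"}

dns = cycle(VIPS)  # DNS round-robin alternates clients across the two VIPs

def serve_request():
    vip = next(dns)                      # client resolves the name to a VIP
    lb = VIPS[vip]["lb"]                 # that VIP's load balancer handles it
    server = next(VIPS[vip]["servers"])  # pick a server within the VIP's group
    reply_via = GATEWAY[server]          # reply follows the server's default gateway
    return lb, server, reply_via

for _ in range(8):
    lb, server, reply_via = serve_request()
    assert lb == reply_via  # symmetric flow: no source NAT or DSR needed
```

The assertion holds for every request precisely because the server groups and default gateways are aligned with the VIP bindings; mixing the groups would break it, which is the asymmetry problem the text describes.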
Another approach to active-active configurations is to share the same VIP between the two load balancers. In this approach, both units can service the VIP, but only one unit owns it at any time. Only the owning unit responds to ARP queries for the VIP, but whichever load balancer actually receives a packet destined to the VIP processes it. So, in the design shown in Figure 4.6, all request packets for VIP1 go to load balancer 1 first, because load balancer 1 is the only one that responds to ARP for VIP1; similarly, all request packets for VIP2 go to load balancer 2 first. But since both load balancers can service packets for either VIP, we can set the default gateway for half the servers to load balancer 1 and for the other half to load balancer 2, distributing the reply traffic across the two load balancers.
In order to service both VIPs, each load balancer must be aware of all sessions, so the load balancers must synchronize session information continuously to ensure consistent load balancing and session persistence for a given session. Likewise, because server reply packets may pass through either load balancer (unless the servers are directly attached to them), each load balancer must apply consistent NAT and any other processing to reply packets. Using the same VIP across both load balancers has an advantage: we no longer have to worry about how the reply packets return, so neither DSR nor source NAT is required. On the other hand, sharing the same VIP across two active load balancers can be quite difficult when performing delayed binding, since each packet then requires sequence-number modification and the load balancers must synchronize on every packet. While most load balancers support active-active configuration for different VIPs, only a few support a shared active VIP between two load balancers. Therefore, we will use network designs with different VIPs for the rest of this chapter.
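The session-synchronization requirement can be illustrated with a small sketch. Assume, hypothetically, that both units share one session table (standing in for continuous sync): whichever unit a packet traverses then makes the same server choice, preserving persistence and consistent NAT.

```python
# Sketch of session-state sharing for a shared active VIP.  The shared
# dictionary stands in for the continuous synchronization the text
# describes; real load balancers replicate session entries over a
# dedicated link.

class LoadBalancer:
    def __init__(self, name, shared_sessions, servers):
        self.name = name
        self.sessions = shared_sessions   # table synchronized between units
        self.servers = servers

    def handle(self, client):
        # New session: pick a server and record it, so the peer unit makes
        # the same decision for later packets of this session.
        if client not in self.sessions:
            pick = self.servers[len(self.sessions) % len(self.servers)]
            self.sessions[client] = pick
        return self.sessions[client]

shared = {}
lb1 = LoadBalancer("LB1", shared, ["RS1", "RS2", "RS3", "RS4"])
lb2 = LoadBalancer("LB2", shared, ["RS1", "RS2", "RS3", "RS4"])

first = lb1.handle("client-A")   # request arrives at LB1
again = lb2.handle("client-A")   # a later packet of the session hits LB2
assert first == again            # both units agree on the server
```

If the table were not shared, LB2 would make an independent choice for client-A and persistence would break, which is exactly why continuous synchronization is mandatory in this design.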
Depending on the configuration, we may need a data-forwarding link between load balancer 1 and load balancer 2 in active-active configurations. Figure 4.6 shows two links between the load balancers: one dedicated to health checks, the other to data forwarding. Although the specific topology in Figure 4.6 does not require the data link, some designs discussed later will. Even in the design shown in Figure 4.6, the data link is useful: if the link between load balancer 1 and the Layer 2 switch fails, we can either have load balancer 1 fail over, or let it continue to service VIP1 by reaching the Layer 2 switch through load balancer 2. The latter approach may give slightly better load-balancing performance.
In the active-active design, there are several possibilities for loops at Layer 2, as shown in Figure 4.7. For example, there is one loop between the router and the two load balancers, and another between the two load balancers and the Layer 2 switch. We can avoid these loops by using different subnets or VLANs for the servers (or the Layer 2 switch below them) and for the links between the two load balancers. If we cannot avoid a loop at Layer 2, then we must run Spanning Tree Protocol (STP), which selectively blocks links to prevent loops.
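What STP accomplishes can be sketched as a loop-free subgraph computation. Real STP elects a root bridge and exchanges BPDUs; this simplified sketch only shows the outcome, using a union-find structure to block any link that would close a Layer 2 loop. The specific link list is an assumption modeled on the redundant paths described for Figure 4.7.

```python
# Simplified illustration of STP's result: walk the links and block any
# link that would close a loop, leaving a spanning tree of forwarding links.

def spanning_tree(links):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x

    forwarding, blocked = [], []
    for a, b in links:
        ra, rb = find(a), find(b)
        if ra == rb:
            blocked.append((a, b))    # link would form a Layer 2 loop
        else:
            parent[ra] = rb           # merge the two loop-free segments
            forwarding.append((a, b))
    return forwarding, blocked

# Redundant links in the active-active design: router to both load
# balancers, both load balancers to the Layer 2 switch, plus the
# inter-load-balancer data link.
links = [("router", "LB1"), ("router", "LB2"),
         ("LB1", "switch"), ("LB2", "switch"), ("LB1", "LB2")]
forwarding, blocked = spanning_tree(links)
assert len(forwarding) == 3 and len(blocked) == 2
```

With four nodes and five links, two links must be blocked to eliminate both loops; which two depends on link order here, just as real STP's choice depends on bridge and port priorities.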
Figure 4.7: Active-active configuration with Layer 2 loops.
Stateful Failover
When the standby unit takes over, any open TCP connections break, because the standby unit has no state information for the TCP connections already in progress on the active unit. This is called stateless failover. In contrast, stateful failover is a method whereby the standby unit takes over from the active unit without breaking any existing TCP connections. This is less of an issue for UDP traffic, since UDP is by nature stateless; however, even UDP sessions can break with stateless failover if the application requires any type of session persistence. With stateful failover, the standby unit must maintain session persistence by sending all requests from a given user to the same server the active unit was sending them to.
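The difference between stateless and stateful failover can be sketched as session-table replication. This is a minimal illustration, assuming a hypothetical replication step; real units replicate continuously over the failover link.

```python
# Sketch of stateful failover: the active unit replicates its session
# table to the standby, so after a failover the standby sends each
# existing session to the same real server and connections survive.

class Unit:
    def __init__(self, name):
        self.name = name
        self.sessions = {}   # session key -> real server

    def replicate_to(self, standby):
        # Continuous synchronization, simplified to a one-shot copy.
        standby.sessions = dict(self.sessions)

active, standby = Unit("LB1"), Unit("LB2")
active.sessions["client-A:49152"] = "RS2"

# Stateless failover: standby has an empty table, so the connection breaks.
assert "client-A:49152" not in standby.sessions

# Stateful failover: replicate first, then fail over without breaking it.
active.replicate_to(standby)
assert standby.sessions["client-A:49152"] == "RS2"
```

The same mechanism covers the UDP persistence case in the text: replicating the session entry lets the standby keep sending a given user's packets to the same server.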