
The design shown in Figure 4.13 represents the high-availability variation of the one-arm design. This design introduces Layer 2/3 switches, as opposed to a Layer 2 switch, to connect to the servers. The dotted links between the Layer 2/3 switches and the routers represent optional links. If we use Layer 2 switches along with the dotted optional links, there is a loop and we must run STP. Whenever there is STP, we must take care to block the right links to provide optimal traffic flow. If we use Layer 3 switches with the dotted optional links, we can avoid STP by configuring different subnets. In this design each load balancer has access to all servers, so a load balancer’s failure does not affect connectivity to the servers. But if we lose a Layer 2/3 switch, we lose half the servers and also the load balancer connected to that switch. Layer 2/3 switches, however, are generally considered less likely to fail than a server or a load balancer, because there is less functionality and configuration involved in them.
Figure 4.13: High availability #6.
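To make the failure behavior just described concrete, here is a minimal sketch (in Python, with purely illustrative component and server names that do not come from the book) of which servers remain reachable after a single component failure in this topology:

```python
# Minimal sketch of the failure domains in Figure 4.13; component and
# server names are illustrative assumptions, not taken from the book.
ATTACHED_SERVERS = {
    "switch_left":  ["RS1", "RS2"],   # also carries load balancer 1
    "switch_right": ["RS3", "RS4"],   # also carries load balancer 2
}

def reachable_servers(failed_component):
    """Servers still reachable after a single component failure."""
    reachable = []
    for switch, servers in ATTACHED_SERVERS.items():
        if switch != failed_component:
            reachable.extend(servers)
    return reachable

# Losing a load balancer costs no server connectivity, because each load
# balancer can reach every server through the switches.
print(reachable_servers("load_balancer_1"))   # ['RS1', 'RS2', 'RS3', 'RS4']
# Losing a Layer 2/3 switch costs the half of the servers behind it
# (and the load balancer attached to that switch).
print(reachable_servers("switch_left"))       # ['RS3', 'RS4']
```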
There are three ways to utilize the design shown in Figure 4.13. The first approach is to use DSR. When we use DSR, we are free to bind any VIP to any server. All we need to do is ensure an equal load among the load balancers by distributing the VIPs. We should set the default gateway on the servers to the VRRP IP addresses on the routers, because the reply traffic does not have to go to the load balancers when using DSR. It’s important that the load balancers use the link path through the Layer 2/3 switches to check each other’s health in this design.
For example, if the left Layer 2/3 switch fails, load balancer 2 should detect that and immediately take over all the VIPs from load balancer 1. If the load balancers are connected directly through a private link for health checks, they won’t detect the Layer 2/3 switch failures.
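One way to picture the DSR approach is the following sketch of the VIP-ownership logic; the VIP names, the two-unit assumption, and the health-check handling are illustrative assumptions, not details from the book:

```python
# Illustrative sketch: distributing VIPs between two load balancers under DSR
# and taking over the peer's VIPs when its health check (run across the
# Layer 2/3 switch path) stops answering.
ALL_VIPS = ["VIP1", "VIP2", "VIP3", "VIP4"]

class LoadBalancer:
    def __init__(self, name, owned_vips):
        self.name = name
        self.owned_vips = list(owned_vips)
        self.peer_alive = True

    def on_health_check(self, peer_responded):
        """Called after each health check sent via the switch path."""
        if self.peer_alive and not peer_responded:
            # The peer, or the switch in front of it, is gone: own every VIP.
            self.peer_alive = False
            self.owned_vips = list(ALL_VIPS)

# Spread the VIPs evenly so both units carry load in normal operation.
lb1 = LoadBalancer("lb1", ALL_VIPS[:2])
lb2 = LoadBalancer("lb2", ALL_VIPS[2:])

lb2.on_health_check(peer_responded=False)  # left switch or lb1 failed
print(lb2.owned_vips)                      # lb2 now answers all four VIPs
```

The point is that the VIPs are spread evenly for load sharing in normal operation, and a missed health check over the switch path, whether the peer unit or its switch failed, is enough to trigger a full takeover.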
This design gets tricky when using private IP addresses for real servers. If we use a Layer 2 switch to connect the servers, then the routers must be configured for routing to the private IP addresses as well. We can instead use Layer 3 switches to provide routing to real servers with private IP addresses.
Second, we can bind each VIP to half the servers, and set the default gateway to the corresponding load balancer. If we bind VIP1 to RS1 and RS2, the default gateway for RS1 and RS2 must be set to gateway IP1 to ensure that the reply traffic flows through load balancer 1. If load balancer 1 fails, load balancer 2 serves both VIPs and can also provide stateful failover while utilizing all the servers for load balancing. In this configuration, the link between the load balancer and the Layer 2 switch must be appropriately sized because of the increased bandwidth requirements. The requests go from the Layer 2 switch to the load balancer and come out of the load balancer back to the Layer 2 switch on their way to the real servers. The reply traffic comes back to the load balancer, then back to the Layer 2 switch to the router on the way to the origin client. Each request and reply packet passes twice through the link between the Layer 2 switch and the load balancer. We can easily address this by using a trunk group between the load balancer and the Layer 2 switch, using higher-speed links (gigabit), or both.
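To see why that link needs extra headroom, a back-of-the-envelope calculation helps; the traffic figures below are made-up illustrations, not numbers from the book:

```python
# Back-of-the-envelope sizing for the link between the Layer 2 switch and the
# load balancer. The traffic numbers are illustrative assumptions.
request_mbps = 100   # client-to-VIP traffic entering from the router
reply_mbps   = 400   # server-to-client traffic (replies are often larger)

# Each request enters the load balancer and comes back out toward the servers;
# each reply comes back into the load balancer and out again toward the router,
# so every byte crosses the switch-to-load-balancer link twice.
link_load_mbps = 2 * (request_mbps + reply_mbps)
print(link_load_mbps)   # 1000 Mbps: hence the trunk group or gigabit links
```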
Third, we can use source NAT, bind any VIP to any real server, and gain complete flexibility. All the requests and replies will travel twice over the link between the load balancer and the Layer 2 switch.
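The effect of source NAT can be sketched as follows; the addresses, port numbers, and connection-table shape are illustrative assumptions. The key point is that rewriting the source address forces the reply traffic back through the load balancer, which is what causes each packet to cross the link twice:

```python
# Minimal sketch of source NAT on the load balancer (addresses are made up).
# By replacing the client's source address with its own, the load balancer
# guarantees the server's reply comes back to it rather than to the router.
LB_SOURCE_IP = "10.10.10.2"
connection_table = {}   # nat_port -> original (client_ip, client_port)
next_nat_port = 40000

def nat_outbound(client_ip, client_port, real_server_ip):
    """Rewrite the request's source before forwarding it to the real server."""
    global next_nat_port
    nat_port = next_nat_port
    next_nat_port += 1
    connection_table[nat_port] = (client_ip, client_port)
    return {"src": (LB_SOURCE_IP, nat_port), "dst": real_server_ip}

def nat_inbound(reply_dst_port):
    """Undo the translation when the server's reply arrives back at the LB."""
    return connection_table[reply_dst_port]

pkt = nat_outbound("203.0.113.7", 51000, "192.168.1.11")
print(pkt)                          # the server sees the LB as the client
print(nat_inbound(pkt["src"][1]))   # ('203.0.113.7', 51000)
```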
It is most efficient to use DSR in this design because it provides for very high throughputs, as well as optimal traffic flows and link utilization.
So far, we have used one Network Interface Card (NIC) in each server and connected each server to a load balancer or a Layer 2 switch. When we connect the server directly to the load balancer, we lose access to the server if the load balancer fails. Therefore, we used Layer 2 switches to connect to the servers and make them accessible from both load balancers. Even then, we lose access to the servers if the Layer 2 switch fails. Figure 4.14 shows a design with two NICs in each server to maintain access to a server if a load balancer fails. This also protects server availability if the link to the server or the NIC in the server fails.
Figure 4.14: High availability #7.
Using two or more NICs in a server warrants special attention to the details of how exactly the different NICs are used. This depends on the operating system and the type of NICs used in the server. Some NICs have two ports, and either both can be active at the same time or one port can act as a backup for the other.
Some NIC vendors may support the ability to group two NICs together as an active-standby pair or an active-active pair.
If we use an active-standby pair, we must carefully examine the conditions under which the standby NIC takes over from the active NIC. In the design shown in Figure 4.15, each server has two network interfaces. These interfaces may be on the same NIC or on two different NICs, but the two interfaces are logically grouped into an active-standby pair. The active interface is connected to the active load-balancer unit and the standby is connected to the standby unit. Everything works fine as long as the active load-balancer unit is functioning. When the active unit fails and the standby unit takes over, will the standby network interface also become active? If it does not, the standby load balancer will have no network path to access the servers. The conditions under which the standby network interface takes over depend on the NIC vendor, the software drivers for the NIC, and the operating system on the server.
It’s important to note that the active unit may fail in different ways. An easy case is one in which the active load balancer loses power; the active NIC can easily detect the loss of link status and fail over to the standby. A difficult case is one in which the active load balancer is hung because of a software or hardware failure in a control or management portion of the unit. In this case there will be no traffic on the active link to the servers, but the link status may stay up because the port hardware on the load balancer is still okay. The standby load balancer takes over because it sees no response to the health checks from the active load balancer. If the standby network interface does not take over at the same time, the standby load balancer will have no way to access the servers.
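The two failure cases in the preceding paragraph can be summarized in a small decision sketch; the function name and threshold are hypothetical, and, as the text notes, the real behavior depends on the NIC drivers and the operating system:

```python
# Illustrative sketch of the two failure cases discussed above. The names and
# the threshold are assumptions; actual behavior is driver/OS specific.
def standby_should_take_over(link_up, missed_health_checks, threshold=3):
    """Decide whether the standby load balancer should become active."""
    if not link_up:
        # Easy case: a power loss drops link status, so NIC teaming on the
        # server can also see the failure and switch to its standby interface.
        return True
    # Hard case: the active unit is hung but its ports keep link status up, so
    # only the absence of health-check responses reveals the failure. The
    # server's standby NIC sees nothing wrong, which is exactly the risk
    # described in the text.
    return missed_health_checks >= threshold

print(standby_should_take_over(link_up=False, missed_health_checks=0))  # True
print(standby_should_take_over(link_up=True,  missed_health_checks=4))  # True
print(standby_should_take_over(link_up=True,  missed_health_checks=1))  # False
```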