Load Balancing Servers, Firewalls and Caches

Kopparapu, C. Load Balancing Servers, Firewalls and Caches. Wiley Computer Publishing, 2002. 123 p. ISBN 0-471-41550-2.
[Figure: an active-standby load balancer pair. Load Balancer 1 (active) and Load Balancer 2 (standby unit) both carry VIP 141.149.65.10 (MAC-M4) and gateway IP 10.10.10.1, in front of real servers RS1 (10.10.10.10, MAC-M6), RS2 (10.10.10.20, MAC-M7), RS3 (10.10.10.30, MAC-M8), and RS4 (10.10.10.40, MAC-M9).]
Figure 4.8: High availability #1.
An improvement to this design is to use the load balancers in an active-active configuration, as shown in Figure 4.9. With an active-active setup, we can reach all servers through either load balancer. Each load balancer has one active VIP and one standby VIP. We must pay special attention to how the VIPs are bound to real servers and how the default gateway is configured on the real servers. If we bind VIP1 to RS1 and RS2, and VIP2 to RS3 and RS4, we get no high availability at all: when load balancer 1 fails, we also lose all the real servers bound to VIP1, so load balancer 2 cannot service VIP1 because VIP1's servers are unavailable. To get high availability, we must bind each VIP to servers connected to both load balancers.
Figure 4.9: High availability #2.
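To make the binding rule concrete, here is a minimal Python sketch. The server names follow the figures; the binding tables and the failure check are purely illustrative, not a product configuration.

    # Servers directly attached to load balancer 1 and load balancer 2.
    SERVERS_LB1 = ["RS1", "RS2"]
    SERVERS_LB2 = ["RS3", "RS4"]

    # No high availability: each VIP depends entirely on one unit's servers.
    bad_bindings = {"VIP1": ["RS1", "RS2"], "VIP2": ["RS3", "RS4"]}

    # High availability: each VIP is bound to servers behind both units.
    good_bindings = {"VIP1": ["RS1", "RS3"], "VIP2": ["RS2", "RS4"]}

    def servers_left(bindings, failed_servers):
        """Servers each VIP can still use after a load balancer
        (and the servers attached to it) fails."""
        return {vip: [s for s in servers if s not in failed_servers]
                for vip, servers in bindings.items()}

    # If load balancer 1 fails, its attached servers fail with it:
    print(servers_left(bad_bindings, SERVERS_LB1))   # {'VIP1': [], 'VIP2': ['RS3', 'RS4']}
    print(servers_left(good_bindings, SERVERS_LB1))  # every VIP keeps a server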
We can configure the default gateway for each real server to point to the load balancer it is connected to: for RS1 and RS2, the default gateway is set to gateway IP1. This, however, introduces asymmetric traffic flows. If load balancer 2 sends a request for VIP2 to RS1, the reply from RS1 will bypass load balancer 2. Therefore, we must use source NAT or DSR. Alternatively, we can use a shared VIP between the load balancers so that either one can process the reply packets.
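As an illustration of why source NAT keeps the reply path symmetric, here is a small Python sketch; the VIP and server addresses are taken from the figures, while the load balancer address and the packet representation are hypothetical.

    LB2_IP = "10.10.10.2"   # an address we assume load balancer 2 owns

    def forward_with_snat(packet, real_server_ip):
        """Forward a request to a real server, rewriting both addresses.

        The server sees the load balancer, not the client, as the source,
        so its reply returns to the load balancer even when the server's
        default gateway points at the other unit."""
        packet["dst_ip"] = real_server_ip   # NAT the VIP to the chosen server
        packet["src_ip"] = LB2_IP           # source NAT: client -> load balancer
        return packet

    request = {"src_ip": "203.0.113.7", "dst_ip": "141.149.65.10"}  # client -> VIP
    print(forward_with_snat(request, "10.10.10.10"))  # RS1 will reply to LB2_IP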
One of the biggest issues with the designs shown in Figures 4.8 and 4.9 is that we lose all the servers connected to a load balancer if that unit fails. To get around this problem, the design shown in Figure 4.10 introduces a Layer 2 switch below the load balancers to connect all the servers together. Another important reason for using Layer 2 switches is the port density and price per port available with load-balancing products. Port density refers to the number of ports available within a given form factor or rack space. A switch with higher port density provides more ports in a compact form factor, minimizing the amount of rack space consumed.
Figure 4.10: High availability #3.
The design shown in Figure 4.10 is essentially the same as the one discussed in Figure 4.6, but we now need to address high availability for the router and the Layer 2 switch.
The design shown in Figure 4.11 improves on this by providing fault tolerance for the Layer 2 switch that connects the servers. While the previous design gave us fault tolerance for the load balancer, we would have lost access to all of the servers if the Layer 2 switch failed. In this design, it's best to divide the real servers between the two VIPs and set each server's default gateway to the same load balancer as its VIP binding. So we can bind VIP1 to RS1 and RS2, and VIP2 to RS3 and RS4, then set the default gateway for RS1 and RS2 to gateway IP1, and for RS3 and RS4 to gateway IP2. This avoids any asymmetric traffic flows. Alternatively, we can bind each VIP to all real servers, split the servers between the two default gateway IP addresses, and use a shared VIP on the load balancers; no matter which way the reply packets flow, the load balancers will be able to process them. If we use source NAT or DSR, then we are free to bind VIPs and set default gateways any way we like; we just need to ensure load distribution across all available paths and load balancers. A sketch of the symmetric pairing follows the figure.
Figure 4.11: High availability #4.
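Here is a minimal Python sketch of that symmetric pairing; the names (VIP1, gateway IP1, and so on) follow the text, and the consistency check is illustrative only.

    # Each server's VIP binding and default gateway point at the same unit.
    config = {
        "RS1": {"vip": "VIP1", "gateway": "gateway IP1"},
        "RS2": {"vip": "VIP1", "gateway": "gateway IP1"},
        "RS3": {"vip": "VIP2", "gateway": "gateway IP2"},
        "RS4": {"vip": "VIP2", "gateway": "gateway IP2"},
    }

    # VIP1 is active on load balancer 1 (gateway IP1) and VIP2 on load
    # balancer 2 (gateway IP2), so symmetry means the server's gateway
    # matches the unit where its VIP is active.
    active_unit = {"VIP1": "gateway IP1", "VIP2": "gateway IP2"}

    for server, c in config.items():
        ok = active_unit[c["vip"]] == c["gateway"]
        print(server, "symmetric" if ok else "asymmetric")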
While we have obtained fault tolerance for the Layer 2 switches and load balancers, the router still represents a single point of failure; this is addressed in Figure 4.12. We also introduce trunk groups to connect one box to another in this design. A trunk group is two or more links used to connect two switches, and it provides two benefits: scalability and fault tolerance. All the links in the trunk group are used together, providing an aggregate bandwidth equal to the sum of the individual link bandwidths. If a link fails, the load is automatically shared among the remaining links in the trunk group. The algorithm used to distribute the load among the links, and the number of links supported in a trunk group, depend on the specific product used. In the earlier designs, a link failure would have rendered a load balancer or a router useless; in the design shown in Figure 4.12, we use trunk groups to alleviate this problem.
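Products differ in how they spread flows across a trunk group; a common approach is to hash each flow's addresses onto one of the live links. The following Python sketch assumes such a hash and is purely illustrative.

    import zlib

    TRUNK = ["link1", "link2", "link3", "link4"]   # a hypothetical 4-link trunk

    def pick_link(src_ip, dst_ip, live_links):
        """Hash a flow onto one of the trunk group's live links.

        Aggregate bandwidth is the sum of the live links; dropping a
        failed link from live_links respreads its flows automatically."""
        flow_hash = zlib.crc32(f"{src_ip}->{dst_ip}".encode())
        return live_links[flow_hash % len(live_links)]

    print(pick_link("10.10.10.10", "141.149.65.10", TRUNK))       # all links up
    print(pick_link("10.10.10.10", "141.149.65.10", TRUNK[:3]))   # link4 failed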
Figure 4.12: High availability #5.
In the design shown in Figure 4.12, we use two routers at the top, running VRRP, to provide high availability. We use two VRRP IP addresses, with each router actively owning one of them. We can configure load balancer 1 to point to VRRP IP1 and load balancer 2 to point to VRRP IP2 for outbound traffic, which distributes outbound traffic across both routers. Some products also allow load distribution across multiple static routes; in that case, we can define two static routes on each load balancer, one to each VRRP IP address, and distribute the traffic across both routers.
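A minimal Python sketch of the two outbound options described above; the VRRP addresses are assumed for illustration, not taken from the figures.

    import itertools

    VRRP_IP1 = "10.10.1.1"   # actively owned by router 1 (assumed address)
    VRRP_IP2 = "10.10.1.2"   # actively owned by router 2 (assumed address)

    # Option 1: one default route per load balancer.
    default_route = {"load balancer 1": VRRP_IP1, "load balancer 2": VRRP_IP2}

    # Option 2: with multiple static routes supported, alternate outbound
    # flows across both next hops from a single load balancer.
    next_hop = itertools.cycle([VRRP_IP1, VRRP_IP2])
    for flow in ("flow-a", "flow-b", "flow-c", "flow-d"):
        print(flow, "->", next(next_hop))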