
Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2

loopback IP address. With DSR we also cannot perform any Layer 5/7 switching, such as cookie or URL switching. Source NAT is an alternative approach: with source NAT we don't have to define loopback IP addresses, and we can use Layer 5/7 switching. However, because of source NAT the servers won't see the originating users' IP addresses. That may not be an issue for some deployments, and it can be alleviated by having the load balancer log all the source IP addresses for record keeping. The design shown in Figure 4.15 is probably the safest way to use two NICs in each server in active-standby mode.
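The trade-off described above can be made concrete with a small sketch. This is not any product's implementation; the addresses and field names are hypothetical. It simulates a load balancer performing source NAT while logging the original client IPs so they are not lost to the servers' access records:

```python
# Minimal simulation of source NAT on a load balancer.
# LB_SOURCE_IP and all packet addresses are made-up values for illustration.

LB_SOURCE_IP = "10.0.0.1"  # hypothetical address the load balancer NATs to

def source_nat(packets, nat_log):
    """Rewrite each packet's source IP to the load balancer's own address,
    appending the original client IP to nat_log for record keeping."""
    translated = []
    for pkt in packets:
        nat_log.append(pkt["src"])              # keep the real client IP
        translated.append({**pkt, "src": LB_SOURCE_IP})
    return translated

log = []
out = source_nat([{"src": "198.51.100.7", "dst": "10.0.0.10"}], log)
# The server now sees only the load balancer's IP as the packet source,
# while the load balancer's log retains the client's real address.
```

The server-side loss of client addresses, and the compensating log on the load balancer, are exactly the behavior the paragraph above describes.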
Communication between Load Balancers
Whether we use active-standby or active-active, the load balancers need to communicate with each other using some sort of protocol.
When using an active-standby configuration, the load balancers need to determine which unit acts as active versus standby. Depending on the load-balancing product, this may be a manual configuration or an automatic negotiation between the two units. In active-active configuration modes, each load balancer must also determine which VIPs are active and which are standby on each unit. Since the load on each load balancer is determined by which VIPs are active on it, it makes sense for a network administrator to initially divide the VIPs into two sets, with each set active on a given load balancer. A more sophisticated approach would be to decide which load balancer can better serve a given VIP based on the available capacity on each load balancer and its connectivity to the real servers for that VIP.
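The capacity-aware split described above can be sketched as a simple greedy assignment. This is not any vendor's algorithm; the VIP names, load estimates, and capacity units are all hypothetical:

```python
# Sketch of dividing VIPs between an active-active load-balancer pair,
# greedily placing each VIP on the unit with the most remaining capacity.
# All names and numbers are illustrative, not from any real product.

def assign_vips(vips, capacity):
    """vips: {vip_name: expected_load}; capacity: {lb_name: capacity_units}.
    Returns {lb_name: [VIPs active on that unit]}."""
    remaining = dict(capacity)
    active = {lb: [] for lb in capacity}
    # Place the heaviest VIPs first so the split stays roughly balanced.
    for vip, load in sorted(vips.items(), key=lambda kv: -kv[1]):
        lb = max(remaining, key=remaining.get)  # unit with most headroom
        active[lb].append(vip)
        remaining[lb] -= load
    return active

plan = assign_vips({"vip1": 50, "vip2": 30, "vip3": 20},
                   {"lb-a": 100, "lb-b": 100})
```

A real product would also weigh server connectivity per VIP, as the text notes, but the core idea is the same: keep each unit's active load within its capacity.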
It’s vital that the two load balancers in a high-availability configuration have a reliable communication path between them. Directly connecting the two load balancers together through a trunk group of two or more links is a great way to ensure reliable communication, unless we are dealing with a design such as the one shown in Figure 4.13. In general, a good load-balancing product should use all available paths to reach the other load balancer if all the direct link(s) between the two units fail for some reason.
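The fallback behavior suggested above, preferring the direct trunk links and using any other available path only when they fail, can be sketched as follows. The link names and the reachability checks are hypothetical:

```python
# Illustrative sketch of heartbeat-path selection between two load
# balancers: prefer the direct trunk links, fall back to any alternate
# network path if all direct links are down. Names are made up.

def pick_heartbeat_path(direct_links, alternate_paths, is_up):
    """Return the first usable path, trying direct links first."""
    for path in direct_links + alternate_paths:
        if is_up(path):
            return path
    return None  # peer unreachable on every known path

link_state = {"trunk-1": False, "trunk-2": False, "via-core-switch": True}
path = pick_heartbeat_path(["trunk-1", "trunk-2"], ["via-core-switch"],
                           lambda p: link_state[p])
# With both trunk links down, the alternate path through the network is used.
```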
While load balancers improve server-farm availability, the load balancer itself can be a single point of failure. By deploying two load balancers as a pair, we can tolerate a load-balancer failure and continue functioning. There are several design options for high availability, each varying in complexity and benefits. We must also remember that the more complex the design we choose, the less reliable it's likely to be. Complex designs are hard to implement, hard to troubleshoot, and more subject to human error. A simple high-availability configuration with stateful failover provides the best approach to improving server-farm availability.
Chapter 5: Global Server Load Balancing
This chapter introduces the concept of global server load balancing (GSLB) and the driving factors behind it. We will start with a DNS primer to recap the important DNS ingredients that are essential to understanding how GSLB works at a detailed level. Next, we will look at various approaches to GSLB, including DNS-based and non-DNS-based architectures. Finally, we will cover some of the applications of GSLB and how to implement them.
The GSLB functionality may be built into the load-balancing product or may be packaged as a separate product. Foundry, Nortel, Cisco, and Radware are some of the vendors that integrate GSLB into their load-balancing products. F5 Networks is among the vendors that provide GSLB as a product separate from their server load balancer.
The Need for GSLB
There are two major factors that are driving the need for GSLB: high availability and faster response time.
We addressed server-farm availability by using a load balancer to perform health checks and transparently direct user traffic to an alternative server in case of a failure. We addressed load-balancer high availability by using a design with two load balancers, so one takes over if the other fails. But what if we lose power to the data center where the server farm and load balancers are located? What if we lose the Internet connection because of a failure at the Internet Service Provider (ISP)? What if a natural disaster, such as a flood or an earthquake, brings down the entire data center where a Web site is operating? No matter how much high availability we design into the Web site at one data center, there are certain macro-level events that can bring the site down. Using GSLB, we can operate the Web site or another application server farm at multiple data centers and provide continuous availability by directing users to an alternative site when one site fails or an entire data center is down.
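The site-failover idea above is most often realized through DNS, which the chapter develops in detail. As a first intuition, here is a hedged sketch of an authoritative name server's selection logic; the site names, addresses, and health flags are invented for illustration:

```python
# Sketch of DNS-based GSLB failover: answer with addresses from the
# preferred site if it is healthy, otherwise from any healthy site.
# Site names, IPs, and health state are hypothetical.

SITES = {
    "us-east": {"ips": ["192.0.2.10"], "healthy": True},
    "us-west": {"ips": ["198.51.100.10"], "healthy": True},
}

def resolve(preferred, sites=SITES):
    """Return the preferred site's IPs if it is up, else any healthy site's."""
    if sites[preferred]["healthy"]:
        return sites[preferred]["ips"]
    for name, site in sites.items():
        if site["healthy"]:
            return site["ips"]
    return []  # no site available at all

SITES["us-east"]["healthy"] = False   # simulate losing an entire data center
answer = resolve("us-east")           # users are now sent to the other site
```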
Load balancers help us address scalability by distributing the load across multiple servers. We can use more powerful servers or a greater number of them, deploy multiple load balancers, or tweak the load-distribution methods to get the best response time possible. One factor that we have so far been unable to control is the Internet delay included in the response time, as shown in Figure 5.1. User response time includes client-side delay, Internet delay, and server-side delay. So far we have discussed ways to reduce the server-side delay, thus improving response time. We cannot do much about client-side delay, as it depends on the client's last-mile access and the performance of the client computer. Internet delay is typically a significant component of the user response time. Using GSLB, we can operate the Web site or application server farms at multiple data centers and direct each user to the location that provides the best response time, as shown in Figure 5.2. We will look at a variety of policies that can be used to determine the best location.
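A simple proximity policy of the kind just described can be sketched in a few lines: direct the client to whichever data center shows the smallest measured delay. The delay figures below are hypothetical; a real GSLB product would obtain them through active or passive measurement:

```python
# Sketch of a best-response-time GSLB policy: pick the data center with
# the smallest measured delay to the client. Site names and delay values
# are made up for illustration.

def best_site(delays_ms):
    """delays_ms: {site: measured client-to-site delay in milliseconds}.
    Returns the site with the lowest delay."""
    return min(delays_ms, key=delays_ms.get)

site = best_site({"us-east": 85.0, "eu-west": 20.0, "ap-south": 240.0})
# A client measured closest to eu-west would be directed there.
```

Later sections of the chapter examine richer policies that also factor in site load and availability, not just raw delay.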