Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2
If the VIP is available, the router considers this IP address reachable; that is, the router knows how to send packets to this IP address. The route to this IP address propagates through the entire network cloud, all the way to routers A and B located at the edge of the cloud. If the VIP is available on both load balancers located at the two different sites, the routers in the network merely see this as two different routes to the same IP address.
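To make the two-routes picture concrete, here is a minimal sketch of what an edge router’s table might hold once both sites advertise the VIP. The address, next-hop names, and costs are invented for illustration; 192.0.2.10 is a documentation address, not a value from the book.

```python
# Hypothetical sketch: an edge router's view of a VIP advertised from two sites.
VIP = "192.0.2.10"  # assumed VIP for illustration only

# Each entry: (destination, next hop, routing cost). The same destination
# appears twice, once per site, so the router simply has two alternative
# routes to one IP address.
routing_table = [
    (VIP, "toward load balancer A", 20),
    (VIP, "toward load balancer B", 35),
]

routes_to_vip = [entry for entry in routing_table if entry[0] == VIP]
print(len(routes_to_vip))  # 2: two different routes to the same IP address
```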
When client A types a URL to access a Web page, the DNS framework is used to resolve the domain name to the VIP. When client A sends a request to the VIP, it traverses the user’s network to the user’s ISP network and ultimately enters the ISP network cloud that’s hosting the VIP. The request from client A enters the network through edge router A. Router A looks through its routing table to determine which way to send packets to the VIP. It finds two different routes and picks one based on the routing protocol’s algorithms. An example of such a routing protocol is OSPF (Open Shortest Path First), which calculates a routing cost for each available path and uses the path with the lowest cost. In the example shown in Figure 5.11, router A picks a route that leads to load balancer A. Similarly, router B selects a path for packets from client B, and the selected path leads to load balancer B.
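The route selection step can be sketched as a lowest-cost comparison, roughly in the spirit of OSPF’s cost metric. The path names and costs below are assumptions chosen so that router A and router B end up at different load balancers, as in Figure 5.11.

```python
def pick_route(candidate_paths):
    """Return the next hop with the lowest routing cost (OSPF-style selection)."""
    return min(candidate_paths, key=lambda path: path[1])[0]

# Hypothetical costs: each edge router sees both sites, but at different costs.
router_a_paths = [("toward load balancer A", 10), ("toward load balancer B", 30)]
router_b_paths = [("toward load balancer A", 40), ("toward load balancer B", 15)]

print(pick_route(router_a_paths))  # -> toward load balancer A
print(pick_route(router_b_paths))  # -> toward load balancer B
```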
The Internet is a packet-switched network in which any two end points may exchange data through multiple paths. Routing protocols such as OSPF simply pick the best path for each packet and forward it accordingly.
In this GSLB approach, we are simply taking advantage of the functionality already built into the routers. The path selected by a router determines the load balancer for a given request.
There are several issues in this approach that we must pay attention to. First, a client opens and closes a series of TCP connections as the user surfs the Web site. At Layer 3, this essentially translates into a series of packets exchanged between the client and the load balancer. The routers in the network cloud see each packet independently and do not care whether it is part of one connection versus another. For the communication between the client and load balancer to work, all packets from the client must be sent to the same load balancer. That means, once the router selects a route for the first packet from the client to the VIP, the router must continue to use the same route, or the communication will break. Some routers are configured to perform load distribution of packets across different routing paths if multiple routing paths exist. For this GSLB approach to work, the routers must be configured to not perform load distribution across different paths.
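The difference between per-packet load distribution and per-flow (or per-destination) path selection can be sketched as follows. The path names are hypothetical, and the flow hash is just one plausible way a router might keep a connection on a single path.

```python
import itertools
from zlib import crc32

paths = ["toward load balancer A", "toward load balancer B"]
round_robin = itertools.cycle(paths)

def per_packet(_packet):
    # Per-packet load distribution: alternate paths regardless of the flow.
    return next(round_robin)

def per_flow(packet):
    # Hash the source/destination pair so one client's connection stays on one path.
    key = f"{packet['src']}->{packet['dst']}".encode()
    return paths[crc32(key) % len(paths)]

flow = [{"src": "client-A", "dst": "192.0.2.10", "seq": i} for i in range(4)]
print([per_packet(p) for p in flow])  # alternates between paths: breaks the TCP connection
print([per_flow(p) for p in flow])    # same path every time: the connection stays intact
```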
In a steady state, all the paths have a constant routing cost associated with them. Routers continue to select the same path for each packet because the routing costs do not change. In the real world, this is rarely the case. If there are any link or component failures, or excessive congestion, the routing costs may change for each path. As a result, routers may suddenly shift to a different path for all packets addressed to a given destination. That means, in midstream, we may suddenly see all packets from client A start going to load balancer B. This not only breaks existing connections, but also breaks session persistence, thus losing any user context, such as shopping-cart information. If this design is used in a network that’s always in a state of flux, where routing costs for each path change often, it creates major disruptions to the user service. However, if this design is confined to a network cloud where changes to the routing paths are infrequent, it can work.
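A small sketch of this failure mode, with invented costs: once the cost of the preferred path rises, the lowest-cost choice flips and subsequent packets from the same client land on the other load balancer.

```python
def best_path(costs):
    """Pick the path with the lowest routing cost."""
    return min(costs, key=costs.get)

costs = {"toward load balancer A": 10, "toward load balancer B": 30}
print(best_path(costs))               # toward load balancer A: the session is established here

costs["toward load balancer A"] = 50  # a failure or congestion raises the cost mid-session
print(best_path(costs))               # toward load balancer B: existing connections and any
                                      # session state (e.g., a shopping cart) are lost
```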
If the load balancer or the server fails in such a way that the VIP is not available at one of the sites, the route to the VIP is withdrawn, which is known as route retraction. The load balancer or another router at the site can have special functionality built in to expedite the route retraction process. The route retraction needs to propagate all the way to the edge routers in the network cloud that are selecting the routes, a process also called route convergence. The time it takes to complete the convergence depends on the size of the network and can run into several minutes if the network is large. So, it makes good sense to limit the usage of this GSLB design to networks where the convergence time is predictable and quick enough to fit into the TCP time-out mechanism. If load balancer A goes down, client A continues its attempts to establish a TCP connection to the VIP. TCP has built-in timers that cause it to retry if there is no response from the other end. If the convergence takes longer than the default TCP timeout, the user must retry by clicking the hyperlink again or by pressing the refresh button on the browser.
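Whether the client survives a route retraction therefore comes down to a race between convergence and TCP’s connection-attempt retries. The sketch below assumes a typical doubling retry schedule; the exact timer values vary by operating system and are not taken from the book.

```python
def connect_succeeds(convergence_seconds, retry_timeouts=(3, 6, 12, 24)):
    """Return True if some TCP connection retry falls after the network has re-converged."""
    elapsed = 0
    for timeout in retry_timeouts:
        elapsed += timeout
        if elapsed >= convergence_seconds:
            return True   # a retransmitted SYN now follows the new route to site B
    return False          # TCP gives up first; the user must click the link or refresh

print(connect_succeeds(convergence_seconds=30))   # True: a retry catches the new route
print(connect_succeeds(convergence_seconds=300))  # False: convergence is too slow for TCP
```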
By default, routes are aggregated as part of routing protocols. For example, let’s assume there are 200 hosts in the subnet 141.122.10.*, where * can range from 1 through 254, all connected to router A. Other routers connected to router A in the network simply maintain one route to all of the 200 hosts. They maintain a routing entry that says any packets addressed to an IP of 141.122.10.* should be sent to router A. If a router maintained a route for each of the 200 hosts in the subnet 141.122.10.*, it would have to keep 200 routing entries, each pointing to router A. This would eventually lead to gigantic, unmanageable routing tables. Most routers, by default, are configured not to maintain host routes. But for GSLB based on routing protocols to work, each router must maintain a host route to the VIP. While one can configure the routers within a limited network cloud to permit host routes, host routes are dropped at major peering points between Internet service providers to control the number of routes maintained. Therefore, this GSLB approach can only be deployed within a single ISP’s network or an enterprise network.
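The interaction between aggregated prefixes and the host route required for the VIP can be sketched with a longest-prefix-match lookup. The /24 aggregate follows the 141.122.10.* example above; the VIP address and next-hop labels are invented for illustration.

```python
import ipaddress

routes = {
    ipaddress.ip_network("141.122.10.0/24"): "router-A",  # one entry covers all 200 hosts
    ipaddress.ip_network("192.0.2.10/32"): "site-A",      # host route to the VIP (assumed address)
}

def lookup(destination):
    """Longest-prefix match over the routing table."""
    matches = [net for net in routes if ipaddress.ip_address(destination) in net]
    return routes[max(matches, key=lambda net: net.prefixlen)]

print(lookup("141.122.10.37"))  # -> router-A: any host in the subnet hits the aggregate
print(lookup("192.0.2.10"))     # -> site-A: the VIP needs its own host route to be steered per site
```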