Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2

When a cache goes down, a simple hashing method redistributes the traffic across N - 1 caches instead of the N caches used previously. This redistribution of all traffic across N - 1 caches causes a sudden change in how the content is partitioned across the caches, which in turn causes a sudden surge in cache misses.
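The effect is easy to see with a small sketch (not from the book; the hash function and the addresses are illustrative): with simple modulo hashing, shrinking the pool from N to N - 1 caches moves the majority of flows to a different cache.

import zlib

def pick_cache(dest_ip, num_caches):
    # Map a destination IP address to a cache index with a simple hash.
    return zlib.crc32(dest_ip.encode()) % num_caches

ips = ["10.0.0.%d" % i for i in range(1, 101)]
before = {ip: pick_cache(ip, 4) for ip in ips}   # four caches healthy
after = {ip: pick_cache(ip, 3) for ip in ips}    # one cache has failed
moved = sum(1 for ip in ips if before[ip] != after[ip])
print(moved, "of", len(ips), "flows now hash to a different cache")

Typically roughly three quarters of the flows remap, so most caches suddenly receive requests for content they never held.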
Stateful Load Balancing
Stateful load balancing, just as in the case of server load balancing, can take into account how much load is on a cache and determine the best cache for each request. Stateful load balancing can provide much more granular and efficient load distribution than stateless load balancing. However, stateful load balancing does not solve the problem of content duplication across caches.
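A minimal sketch of the stateful decision, assuming a "least connections" method; the cache names and connection counts are hypothetical, and a real load balancer would update the counts as sessions open and close.

def least_connections(conn_counts):
    # Return the cache currently carrying the fewest active connections.
    return min(conn_counts, key=conn_counts.get)

caches = {"cache-1": 42, "cache-2": 17, "cache-3": 29}
print(least_connections(caches))   # prints "cache-2"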
Optimizing Load Balancing for Caches
The destination IP address hashing discussed earlier solves the content-duplication problem only to some extent. There may be 10 Web servers for http://www.foundrynet.com/ with 10 different IP addresses, all serving the same content. Each destination IP address may result in a different hash value. So, the load balancer may send requests for the same object on http://www.foundrynet.com/ to different caches because of the different destination IP addresses.
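The limitation can be sketched as follows (the addresses are hypothetical stand-ins for a site's multiple IP addresses, and the hash function is illustrative):

import zlib

def pick_cache(dest_ip, num_caches):
    return zlib.crc32(dest_ip.encode()) % num_caches

site_ips = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]   # same site, different IPs
print({ip: pick_cache(ip, 4) for ip in site_ips})
# Different indices mean the same content gets fetched and stored by
# more than one cache.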
A new type of load balancing is required for caches, one that takes the best of the stateful and stateless load-balancing methods. Let’s discuss two such methods: hash buckets and URL hashing.
Hash Buckets
Hash buckets allow us to get over the limitations of simple hashing. The hash-buckets method involves computing a hash value using the selected fields, such as the destination IP address. A hashing algorithm is used to compute a hash value between 0 and H - 1, where H is the number of hash buckets. Let’s say H is 256. That means the hashing computation used must produce a 1-byte value. We get better granularity and more efficient load distribution as we increase the value of H. For example, a hash-buckets method using 1,024 buckets can provide better load distribution than one using 256 buckets.
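As a rough sketch (the choice of CRC32 as the hash function is illustrative, not prescribed by the book), the bucket index is simply the hash of the selected fields reduced to the range 0 to H - 1:

import zlib

def bucket_index(dest_ip, num_buckets):
    # Reduce the hash of the selected field(s) to 0 .. num_buckets - 1.
    return zlib.crc32(dest_ip.encode()) % num_buckets

print(bucket_index("192.0.2.10", 256))    # fits in one byte
print(bucket_index("192.0.2.10", 1024))   # finer-grained distribution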
Each bucket is initially unassigned, as shown in Figure 7.8. The first time we receive a new connection (TCP SYN packet) whose hash value falls into an unassigned bucket, the load balancer uses a stateful load-balancing method such as “least connections” to pick the cache with the least load and assigns that cache to this bucket. All subsequent sessions and packets whose hash value belongs to this bucket will be forwarded to the assigned cache. This approach requires the load balancer to keep track of the load on each cache so that it can assign the buckets appropriately.
Figure 7.8: Hash-buckets method.
If a cache goes down, only those hash buckets that are assigned to the failed cache must be reassigned, while other buckets are completely unaffected. The load balancer simply reassigns each bucket that was assigned to the failed cache to a new cache based on the load. In effect, the load of the failed cache is spread across the surviving caches without affecting any other traffic.
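As a rough sketch of the mechanism shown in Figure 7.8 (the cache names, the hash function, and the connection-count bookkeeping are illustrative, not taken from the book), the bucket table can be kept as an array that starts out unassigned, is filled in lazily using least connections, and is repaired bucket by bucket when a cache fails:

import zlib

class HashBucketBalancer:
    def __init__(self, caches, num_buckets=256):
        self.caches = set(caches)
        self.conn_counts = {c: 0 for c in caches}
        self.buckets = [None] * num_buckets      # None means unassigned

    def _bucket(self, dest_ip):
        return zlib.crc32(dest_ip.encode()) % len(self.buckets)

    def pick_cache(self, dest_ip):
        # Forward a new connection (TCP SYN) for dest_ip to a cache.
        b = self._bucket(dest_ip)
        if self.buckets[b] is None:
            # Stateful decision only on first use: least connections.
            self.buckets[b] = min(self.caches, key=self.conn_counts.get)
        cache = self.buckets[b]
        self.conn_counts[cache] += 1
        return cache

    def cache_failed(self, dead):
        # Reassign only the failed cache's buckets; others are untouched.
        self.caches.discard(dead)
        self.conn_counts.pop(dead, None)
        for b, cache in enumerate(self.buckets):
            if cache == dead:
                self.buckets[b] = min(self.caches, key=self.conn_counts.get)

lb = HashBucketBalancer(["cache-1", "cache-2", "cache-3"])
print(lb.pick_cache("198.51.100.7"))
lb.cache_failed("cache-2")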
Again, this technique minimizes content duplication only to some extent, because hashing is performed on the IP addresses and/or port numbers, not the URL itself. However, this method can provide better load distribution across the caches than simple hashing. If a cache goes down, the simple hashing method must redistribute all traffic across the remaining caches, causing complete disruption of the content distribution among caches. The hash-buckets method reassigns only those buckets that were assigned to the dead cache, causing minimal disruption to the other buckets. However, the hash-buckets method is prone to certain inefficiencies as well. The incoming requests may not be evenly distributed across the buckets, causing inefficient load distribution across the caches. For example, if all the users are accessing http://www.mars.com/, then all requests for this site may end up on one cache while the others remain idle. To minimize the impact of these inefficiencies, the load balancer can periodically redistribute the buckets across caches based on the number of hits in each bucket. The redistribution can be graceful in the sense that existing connections continue to be served by their assigned caches, while new connections are sent to the newly assigned cache. This requires the load balancer to track sessions, even though hashing is computed for each packet. Tracking sessions allows the load balancer to redirect only new sessions when reassigning buckets on the fly for load redistribution, while leaving existing sessions untouched.
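A minimal sketch of such a graceful reassignment, assuming a simplified packet representation and session table (neither is specified in the book): sessions already in the table stay pinned to the cache they started on, and only new connections follow the bucket's new owner.

def forward(packet, buckets, sessions):
    # packet is (flow_id, is_syn, bucket); return the cache to use.
    flow_id, is_syn, bucket = packet
    if is_syn:
        sessions[flow_id] = buckets[bucket]   # bind new session to current owner
    # Non-SYN packets follow the binding made at SYN time, even if the
    # bucket has since been reassigned to another cache.
    return sessions[flow_id]

buckets = {7: "cache-1"}
sessions = {}
print(forward(("flowA", True, 7), buckets, sessions))    # cache-1
buckets[7] = "cache-2"                                   # bucket reassigned
print(forward(("flowA", False, 7), buckets, sessions))   # still cache-1
print(forward(("flowB", True, 7), buckets, sessions))    # cache-2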
URL Hashing
To eliminate content duplication among caches altogether, the hash method must use the URL of the requested object. This is the only way to ensure that subsequent requests for the same URL go to the same cache, increasing the cache-hit ratio and optimizing cache performance. To perform URL hashing, the load balancer must do additional work that includes delayed binding, much like the delayed binding described in the context of server load balancing in Chapter 3.
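A minimal sketch, assuming the load balancer has already completed delayed binding and parsed the requested host and path (the hash function and the cache count are illustrative):

import zlib

def cache_for_url(host, path, num_caches):
    # Hash the full URL so every request for the same object lands on
    # the same cache, regardless of which server IP the client used.
    url = host + path
    return zlib.crc32(url.encode()) % num_caches

print(cache_for_url("www.foundrynet.com", "/products.html", 4))
print(cache_for_url("www.foundrynet.com", "/products.html", 4))   # same index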