Kopparapu, C. Load Balancing Servers, Firewalls, and Caches. Wiley Computer Publishing, 2002. ISBN 0-471-41550-2.

Concurrent URL and Cookie Switching
Once we use URL switching to select the right group of servers, we may need to ensure session persistence within that group. This can be accomplished by using cookie switching in conjunction with URL switching. For example, if we are using the cookie-read method, whereby the server inserts a cookie, the load balancer looks for this cookie first. If the cookie exists, the load balancer simply sends the request to the server indicated by the cookie. If the cookie does not exist, the load balancer uses the URL switching rules to select the group of servers and then load-balances the request within that group.
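As an illustration, here is a minimal Python sketch of this selection logic. The rule table, the group names, and the "server" persistence cookie are all hypothetical; a real load balancer implements this in its switching configuration, not in application code.

    # Minimal sketch of concurrent cookie and URL switching (illustrative only).
    import random

    SERVER_GROUPS = {
        "group1": ["10.0.1.1", "10.0.1.2"],   # e.g., servers for /news/usa
        "group3": ["10.0.3.1", "10.0.3.2"],   # e.g., servers for /india
    }

    URL_RULES = [("/news/usa", "group1"), ("/india", "group3")]

    def pick_group(path):
        """URL switching: choose a server group by matching the URL path."""
        for prefix, group in URL_RULES:
            if path.startswith(prefix):
                return group
        return "group1"  # assumed default group

    def select_server(cookies, path):
        """Cookie switching first, then URL switching plus load balancing."""
        if "server" in cookies:      # cookie-read method: honor the persistence cookie
            return cookies["server"]
        group = pick_group(path)     # no cookie: apply URL switching rules
        return random.choice(SERVER_GROUPS[group])  # load-balance within the group

For instance, select_server({}, "/india/story") load-balances within group 3, while a request carrying a server cookie bypasses the URL rules entirely and returns to the same server.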
HTTP Version 1.0 versus 1.1
HTTP version 1.0 allows one HTTP request/reply exchange per TCP connection. HTTP version 1.1 includes a provision to perform multiple HTTP request/reply exchanges within one TCP connection. The latter is much more efficient because it avoids the overhead of TCP connection setup and teardown for every HTTP request/reply exchange. However, this has an interesting side effect on URL switching. In the example shown in Figure 3.24, what if the first HTTP request is for http://www.usa.com/news/usa and the second request is for http://www.usa.com/india, both over the same connection? The load balancer performs delayed binding and waits for the first HTTP request. Based on the URL in the first request, it forwards the connection and the HTTP request to a server within group 1. But the second request should now go to a server in group 3, even though the TCP connection is bound to a server within group 1. The load balancer can terminate the connection with the server in group 1, establish a new connection with a server in group 3, and then forward the second HTTP request. This adds latency and hurts the performance of the load balancer because of the extra work required to tear down and set up TCP connections.
More importantly, it may not work exactly this way. HTTP version 1.1 allows what's called pipelining: a client browser may send multiple requests without waiting for the response to any of them. So the load balancer may receive the second HTTP request before the server has responded to the first one. In that case, the load balancer has two choices. First, it can establish a new, simultaneous connection with a server in group 3 and forward the second HTTP request before getting the response to the first. But this creates enormous complexity, because the responses to the two requests arrive over separate server-side TCP connections yet must be delivered to the client over a single TCP connection, which raises many challenges in sequencing those packets and guaranteeing their delivery. Alternatively, the load balancer may hold the second response and send it only after the first response has been sent and acknowledged.
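To make the rebinding problem concrete, here is a small Python sketch that traces which server group each pipelined request on one client connection maps to. The rule table and paths are hypothetical and follow the Figure 3.24 example; every point where the group changes is a place where a real load balancer would have to tear down and re-establish a server-side connection.

    # Hypothetical URL switching rules, following the Figure 3.24 example.
    URL_RULES = [("/news/usa", "group1"), ("/india", "group3")]

    def pick_group(path):
        """Return the server group whose URL rule matches this path."""
        for prefix, group in URL_RULES:
            if path.startswith(prefix):
                return group
        return "group1"  # assumed default group

    def trace_bindings(pipelined_paths):
        """Show how pipelined requests on one connection force rebinding."""
        bound_group = None
        for path in pipelined_paths:
            group = pick_group(path)
            if bound_group is None:
                print(f"{path}: delayed binding, bind connection to {group}")
            elif group != bound_group:
                # Extra TCP teardown/setup: the source of the added latency.
                print(f"{path}: needs {group}, must rebind from {bound_group}")
            else:
                print(f"{path}: reuse existing binding to {group}")
            bound_group = group

    trace_bindings(["/news/usa/today", "/india/today"])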
Because of these complexities, using URL switching with HTTP version 1.1 is not recommended. It is best to avoid using URL switching unless there is a clear need for it, as outlined in the preceding sections.
Summary
Session persistence is a fundamental concept in designing and operating stateful applications with load balancers. Source IP-based persistence methods are simple and effective except in megaproxy situations. Cookie- and SSL session ID-based persistence methods are effective in solving the megaproxy problem. When running shopping-cart applications, one must pay attention to the HTTP-to-HTTPS transition and make sure it is handled in both the application design and the load-balancer configuration. This chapter also covered delayed binding, a fundamental concept that allows load balancers to perform switching at Layers 5 and above.
URL switching provides great benefits in environments where the amount of content managed is very large, but may offer little benefit to users who manage only small amounts of Web content. URL switching also has interesting uses for Web hosting providers, letting them serve several customer domains with one VIP and a few real servers.
Chapter 4: Network Design with Load Balancers
So far, we have reviewed various features and functions of load balancers that improve server-farm scalability, availability, and manageability. In this chapter, we will focus on deploying the load balancer into a network, and the associated design choices and considerations. We will address high availability for the design as a whole, so that it can tolerate failures in various network components, including the load balancers themselves.
Before we discuss specific network topologies, we need to cover some fundamental concepts. Let's start with the question of whether the load balancer is deployed as a Layer 2 switch or a Layer 3 router, as this has important implications for the network design. We then look at some simple designs that do not address high availability. Next, we discuss how load balancers work in pairs to provide high availability, before moving on to an extensive discussion of various high-availability designs and their associated considerations. This chapter attempts to show the evolution of various network topologies, as opposed to simply presenting a specific design.