
Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2

First, Figure 3.21 shows how a two-phase commit can solve the HTTP to HTTPS transition using a shared back-end storage or database system. When the user presses the checkout button, the protocol must be designed so that the server holding the shopping cart writes the cart information to a back-end database. When the SSL session is initiated, the SSL server retrieves the shopping-cart information from the back-end database and processes it. We are essentially solving the transition issue by passing the shopping cart from one server to another through the shared back-end storage or database system. In effect, this approach makes the application stateless across the HTTP to HTTPS transition. The HTTP and HTTPS applications by themselves are still stateful, however, so we must still use cookie-based persistence for HTTP traffic and SSL session ID based persistence for HTTPS traffic.
Figure 3.21: Using shared storage for HTTP to HTTPS transition.
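The handoff described above can be sketched in a few lines. This is a minimal illustration, not code from the book: the dictionary stands in for the shared database, and the function and key names are invented.

```python
# Sketch of the shared-storage handoff: the HTTP server persists the cart
# keyed by a user ID carried in a cookie; the HTTPS server, which may be a
# different machine, reloads the cart by that same key.
shared_db = {}  # stands in for the shared back-end database


def http_checkout(user_id, cart):
    """Runs on whichever server held the HTTP session."""
    shared_db[user_id] = list(cart)  # write the cart before redirecting to HTTPS


def https_begin_checkout(user_id):
    """Runs on whichever server terminates the SSL session."""
    return shared_db.get(user_id, [])  # any server can recover the cart


http_checkout("u42", ["book", "dvd"])
print(https_begin_checkout("u42"))  # the HTTPS server sees the same cart
```

Because the cart lives in shared storage rather than in one server's memory, it no longer matters which real server the load balancer picks for the SSL connection.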
Second, we can use middleware software that makes all the servers look like one big virtual server to the application; Figure 3.22 shows such an example. The middleware is software that runs on all the servers and communicates between them using messages. It provides a programming interface so that the application can be independent of the server it is running on. A cookie is used to carry the user identification, or a key to a data structure maintained in the middleware. When the application gets the HTTPS request, it uses the cookie to retrieve the data structure that contains the context. If the state happens to be owned by another server, the middleware fetches the context from that server transparently.
Figure 3.22: Using middleware for HTTP to HTTPS transition.
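The middleware idea can be illustrated with a toy lookup. All names here are hypothetical; real middleware would pass messages between machines rather than search an in-process dictionary, but the lookup pattern is the same.

```python
# Toy sketch of the middleware approach: each server owns part of the
# session state, and the middleware locates a context by cookie from
# whichever server owns it, so the application never cares where it lives.
class Middleware:
    def __init__(self):
        self.servers = {}  # server name -> {cookie: context}

    def put_context(self, server, cookie, context):
        """A server registers state it owns, keyed by the user's cookie."""
        self.servers.setdefault(server, {})[cookie] = context

    def get_context(self, cookie):
        """Transparently search all peers for the owner of this context."""
        for store in self.servers.values():
            if cookie in store:
                return store[cookie]
        return None


mw = Middleware()
mw.put_context("RS1", "cookie-7", {"cart": ["book"]})
# An HTTPS request lands on RS2; the middleware still finds the state.
print(mw.get_context("cookie-7"))
```

The application simply calls `get_context` with the cookie value; whether the context is local or fetched from a peer server is hidden behind the interface.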
Third, we can use a special configuration of the load balancer and the servers to accomplish the transition without losing persistence. As usual, we bind port 80 on the VIP to port 80 on each server. But instead of binding port 443 for SSL in the same way, we use one VIP port number per server. If we have three servers, we can use, for example, port numbers 2001 through 2003 on the VIP for SSL: we bind port 2001 on the VIP to port 443 on RS1, port 2002 on the VIP to port 443 on RS2, and port 2003 on the VIP to port 443 on RS3. When the real server generates the Web page reply that contains the checkout button, it must link its own port number to that button. For example, RS1 will generate the link for the checkout button such that the browser will establish an SSL connection to the VIP on port 2001. Since this port is bound only to RS1, the load balancer simply forwards the connection to RS1, thus ensuring persistence. This approach needs some configuration on the load balancer, as well as some programming changes to the server application to generate the hyperlink for the checkout button appropriately.
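The per-server port scheme can be sketched as follows. The port numbers and server names follow the example above; the helper functions and hostname are made up for illustration.

```python
# Sketch of the per-server SSL port scheme: each VIP SSL port is bound
# to exactly one real server, so the port itself encodes persistence.
SSL_PORT_MAP = {"RS1": 2001, "RS2": 2002, "RS3": 2003}  # server -> VIP port


def checkout_link(server, vip="www.example.com"):
    """Each real server embeds its own VIP port in the checkout button."""
    return f"https://{vip}:{SSL_PORT_MAP[server]}/checkout"


def route_ssl(vip_port):
    """Load balancer side: a VIP SSL port maps to exactly one real server."""
    for server, port in SSL_PORT_MAP.items():
        if port == vip_port:
            return (server, 443)  # forward to port 443 on that server
    return None


print(checkout_link("RS1"))  # https://www.example.com:2001/checkout
print(route_ssl(2001))       # ('RS1', 443): the same server that built the page
```

Since RS1 is the only server bound to VIP port 2001, the SSL connection cannot land anywhere else, which is exactly the persistence guarantee the text describes.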
Finally, there is yet another approach, but it requires an additional product called an SSL accelerator. SSL acceleration products terminate the SSL connections and convert the HTTPS requests to HTTP requests.
These products front-end the server farm just like the load balancer. Because the HTTPS traffic is now translated to HTTP, the load balancer can perform cookie switching to ensure session persistence for secure traffic as well. In addition to session persistence, SSL acceleration products provide much better performance for SSL processing. SSL connection processing consumes a lot of computing power on general-purpose servers and does not scale very well. SSL acceleration products deploy hardware that assists in encryption and decryption, thus increasing the number of SSL connections processed per second. Figure 3.23 shows how an SSL accelerator can be deployed along with the load balancer. The load balancer redirects all requests received on port 443 to the SSL accelerator, which terminates the SSL connection and opens a new TCP connection back to the VIP on port 80 to send the corresponding HTTP request. The load balancer distributes the requests from the SSL accelerator while maintaining session persistence based on cookies or any other specified method, because the traffic is no longer encrypted.
Figure 3.23: Using SSL accelerator to solve the HTTP to HTTPS transition problem.
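The traffic flow in Figure 3.23 can be modeled as a small simulation. Everything here is invented for illustration; no real SSL handshaking or networking is performed, only the routing decisions.

```python
# Simulation of the SSL-accelerator deployment: encrypted traffic on port
# 443 is handed to the accelerator, which re-issues it as plain HTTP back
# to the VIP on port 80, where cookie switching becomes possible.
def ssl_accelerator(request):
    """Terminate SSL and re-issue the request as plain HTTP."""
    return {"cookie": request["encrypted_payload"]["cookie"]}


def load_balancer(request, port):
    if port == 443:
        # Encrypted: the balancer cannot read cookies; hand off to the
        # accelerator, whose output comes back to the VIP on port 80.
        return load_balancer(ssl_accelerator(request), 80)
    # Plain HTTP from the accelerator: the persistence cookie is visible,
    # so the balancer can switch to the server named in it.
    return request["cookie"].split("=")[1]


print(load_balancer({"encrypted_payload": {"cookie": "server=RS2"}}, 443))  # RS2
```

The key point the simulation captures is that the load balancer only ever makes a cookie-based decision on decrypted traffic; the accelerator is what makes the cookie readable.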
URL Switching
Whenever we define a VIP on the load balancer and bind the VIP to certain real servers for a given port, any request to the VIP on port 80 can be sent to any one of those real servers. The load balancer treats all these real servers as equal in their ability to serve the content. The only considerations it uses in selecting a real server are whether the server is healthy, how much load is on it, and whether it should perform any session persistence. But what if all the servers are not equal? What if the content to be served is so large that no single server can possibly hold all of it? We have to divide the content among the servers. For example, if a Web site is serving news, we may put all the U.S. news on servers 1, 2, and 3; all the Chinese news on servers 3 and 4; and all the Indian news on servers 4 and 5. We can combine servers 1, 2, and 3 into group 1; servers 3 and 4 into group 2; and servers 4 and 5 into group 3 (see Figure 3.24). The load balancer must now look at the requested URL to select a server. Since the URL is part of the HTTP GET request, which arrives only after a TCP connection is established, the load balancer must perform delayed binding in order to look at the URL. Further, we need a way to specify to the load balancer how the content is partitioned across the server groups. This is accomplished by specifying URL rules or URL switching policies on the load balancer. For the earlier example, we need a rule that says /usa/* will be directed to server group 1, where * stands for anything that follows /usa/, so that the load balancer can distribute all requests for /usa/* among the servers within group 1.
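URL-rule matching can be sketched as a simple prefix lookup. The server groups follow the news-site example in the text; the rule syntax and function names here are invented, not those of any real product.

```python
# Minimal sketch of URL switching: after delayed binding, the load
# balancer inspects the URL in the HTTP GET and picks the server group
# whose prefix rule matches.
URL_RULES = [
    ("/usa/",   ["RS1", "RS2", "RS3"]),  # group 1: U.S. news
    ("/china/", ["RS3", "RS4"]),         # group 2: Chinese news
    ("/india/", ["RS4", "RS5"]),         # group 3: Indian news
]


def select_group(url):
    """Return the server group for the first matching prefix rule."""
    for prefix, group in URL_RULES:
        if url.startswith(prefix):
            return group
    return None  # no rule matched; a real device would apply a default


print(select_group("/usa/elections.html"))  # ['RS1', 'RS2', 'RS3']
print(select_group("/china/markets.html"))  # ['RS3', 'RS4']
```

Within the selected group, the load balancer still applies its normal health checks, load metrics, and persistence methods; the URL rule only narrows the candidate set.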