Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2

Port-Address Translation
For our discussion, port-address translation (PAT) refers to translating the port number in the TCP/UDP packets, although port numbers may be used in other protocols too. PAT is inherent in load balancers. When we bind port 80 on the VIP to port 1000 on a real server, the load balancer translates the port number and forwards the requests to port 1000 on the real server. PAT is interesting for three reasons: security, application scalability, and application manageability.
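As a rough sketch, the translation step amounts to rewriting the destination fields of each request according to a binding table. The VIP, server address, and port numbers below are made-up examples, and a real load balancer performs this rewrite in its data path rather than in application code:

```python
# Toy sketch of port-address translation (PAT) in a load balancer.
# All addresses and ports are hypothetical examples.
VIP = "192.0.2.10"

# Binding table: (VIP, VIP port) -> (real server IP, real server port)
bindings = {(VIP, 80): ("10.0.0.5", 1000)}

def translate(packet):
    """Rewrite the destination IP and port according to the binding table."""
    key = (packet["dst_ip"], packet["dst_port"])
    if key in bindings:
        real_ip, real_port = bindings[key]
        packet = dict(packet, dst_ip=real_ip, dst_port=real_port)
    return packet

request = {"src_ip": "198.51.100.7", "src_port": 33000,
           "dst_ip": VIP, "dst_port": 80}
forwarded = translate(request)
print(forwarded["dst_ip"], forwarded["dst_port"])  # 10.0.0.5 1000
```

Note that the source fields are untouched; only the destination is rewritten, which is why the reply traffic must come back through the load balancer to be un-NATed.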
By running the applications on private ports, one can get better security for real servers by closing down the well-known ports on them. For example, we can run the Web server on port 4000 and bind port 80 of the VIP on the load balancer to port 4000 on the real servers. Clients will not notice any difference, as the Web browser continues to send Web requests to port 80 of the VIP. The load balancer translates the port number in all incoming requests and forwards them to port 4000 on the real servers. Now, one can't attack the real servers directly by sending malicious traffic to port 80, because that port is closed. Hackers can still discover the open ports without too much difficulty, so this only makes an attack slightly harder. As most people would agree, there is no single magic bullet for security; several measures are usually needed together to enhance the security of a Web site or server farm.
Assigning private IP addresses to real servers, or enforcing access control lists to deny all traffic to real server IP addresses, will force all users to go through the load balancer in order to access the real servers. The load balancer can then enforce certain access policies and also protect the servers against certain types of attacks.
PAT helps improve scalability by enabling us to run the same application on multiple ports. Because of the way certain applications are designed, we can scale application performance by running multiple copies. Depending on the application, running multiple copies may utilize multiple CPUs much more effectively. For example, we can run Microsoft IIS (Internet Information Server, Microsoft's Web-server software) on ports 80, 81, 82, and 83 on each real server.
We then bind port 80 on the VIP to each port running IIS. The load balancer will distribute the traffic not only across the real servers, but also among the ports on each real server.
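The resulting distribution can be modeled as round-robin over every (real server, port) pair. This is only a sketch of the idea; the addresses and port numbers are hypothetical, and actual load balancers offer several distribution algorithms besides round-robin:

```python
from itertools import cycle

# Toy sketch: VIP port 80 bound to IIS instances on ports 80-83 of two
# hypothetical real servers; requests rotate over all (server, port) pairs.
real_servers = ["10.0.0.5", "10.0.0.6"]
ports = [80, 81, 82, 83]

targets = cycle([(ip, p) for ip in real_servers for p in ports])

def pick_target():
    """Return the next (server, port) pair for an incoming request."""
    return next(targets)

# Eight requests cover every server/port combination exactly once.
assignments = [pick_target() for _ in range(8)]
print(assignments[0], assignments[7])  # ('10.0.0.5', 80) ('10.0.0.6', 83)
```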
PAT may also improve manageability in certain situations. For example, when we host several Web sites on a common set of real servers, we can use just one VIP to represent all the Web-site domains. The load balancer receives all Web requests on port 80 for the same VIP, while the Web server application runs on a different port for each Web-site domain. So, the Web server for www.abc.com runs on port 80, and the one for www.xyz.com runs on port 81. The load balancer can be configured to send the traffic to the appropriate port, depending on the domain name in the URL of each HTTP request. In order to distribute the load based on the domain name in the URL, the load balancer must perform delayed binding and URL-based server selection, concepts covered in Chapter 3, sections Delayed Binding and URL Switching, respectively.
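The domain-to-port mapping itself is simple once the load balancer can see the request; a minimal sketch, assuming delayed binding has already completed so the HTTP Host header is available, and using the same hypothetical domains and ports as above:

```python
# Toy sketch: map the Host header of an HTTP request to the private
# port where that site's Web server runs. Domains and ports are the
# hypothetical examples from the text, not real configuration.
site_ports = {"www.abc.com": 80, "www.xyz.com": 81}

def select_port(headers):
    """Pick the real-server port for a request based on its Host header."""
    host = headers.get("Host", "").lower()
    return site_ports.get(host)

print(select_port({"Host": "www.xyz.com"}))  # 81
print(select_port({"Host": "www.abc.com"}))  # 80
```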
Direct Server Return
So far we have discussed load-balancing scenarios in which all the reply traffic from real servers goes back through the load balancer; where the topology did not guarantee that, we used source NAT to force the replies back through it. The load balancer therefore processes requests as well as replies. Direct server return (DSR) lets the server reply traffic bypass the load balancer. If the load balancer is the bottleneck, bypassing it for replies can improve performance considerably, because the load balancer now processes only request traffic, dramatically cutting down the number of packets it handles. To bypass the load balancer, reply traffic must not require un-NAT. Therefore, with direct server return, the load balancer does not translate the destination IP address in requests; since the replies then need no un-NAT, they can flow directly back to the client.
When configured to perform direct server return, the load balancer only translates the destination MAC address in the request packets; the destination IP address remains the VIP. Since requests reach the real server based solely on the MAC address, the real servers must be in the same Layer 2 domain as the load balancer. Once the real server receives the packet, it must accept it even though the destination IP address is the VIP, not the real server's own IP address. Therefore, the VIP must be configured as a loopback IP address on each real server. The loopback is a logical IP interface available on every TCP/IP host. It is usually assigned an address of the form 127.x.x.x, where x.x.x can be anything. One host can have multiple loopback IP addresses assigned, such as 127.0.0.1, 127.0.0.10, and 127.120.12.45. The number of loopback IP addresses supported depends on the operating system.
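The forwarding logic above can be sketched as a Layer 2-only rewrite: the destination MAC changes to that of the chosen real server, while the destination IP and port are left alone. The MAC and IP addresses below are made-up examples; in practice this rewrite happens in the load balancer's switching hardware:

```python
# Toy sketch of direct server return (DSR) forwarding: only the
# destination MAC address changes; the destination IP stays the VIP.
# All addresses are hypothetical examples.
VIP = "192.0.2.10"
real_server_mac = "00:11:22:33:44:55"  # MAC of the chosen real server

def dsr_forward(frame):
    """Forward a request in DSR mode: rewrite Layer 2 only."""
    return dict(frame, dst_mac=real_server_mac)

request = {"dst_mac": "aa:bb:cc:dd:ee:ff",  # load balancer's own MAC
           "dst_ip": VIP, "dst_port": 80}
forwarded = dsr_forward(request)
print(forwarded["dst_ip"], forwarded["dst_mac"])
```

Because the destination IP is still the VIP when the frame arrives, the real server only accepts it if the VIP is configured on its loopback interface, as described above.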