
Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2

Load-Balancing Firewalls
across multiple firewalls. Further, firewall load balancing lets us deploy multiple low-end or midrange firewalls that deliver the same throughput at a much better aggregate price/performance than a single high-end firewall.
From an availability perspective, the firewall is a single point of failure for the entire network. If we lose the firewall, we lose the connectivity between the internal network and the external network.
Some firewall products may support a feature known as a firewall cluster that consists of two firewalls, where one acts as a standby for the other unit. This improves the availability, but not the scalability. Firewall load balancing allows us to improve scalability as well as high availability by distributing load across multiple firewalls and tolerating a firewall failure.
Since the firewall product will need some amount of maintenance, firewall load balancing also helps improve the manageability. A network administrator may take a firewall out of service for maintenance work, such as software or hardware upgrades, without any disruption to the network users. The load balancer can allow a firewall to be gracefully shut down, where the load balancer stops sending any new connections to the firewall and allows the existing connections to terminate gradually.
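The graceful-shutdown behavior described above can be sketched as a small bookkeeping structure: a firewall taken out of the active set receives no new connections, and it becomes safe to remove once its existing connections have drained. This is an illustrative sketch, not code from any particular load-balancing product; all class and method names are hypothetical.

```python
class FirewallPool:
    """Illustrative sketch of graceful firewall shutdown in a load balancer."""

    def __init__(self, firewalls):
        self.active = set(firewalls)                     # eligible for new connections
        self.connections = {fw: 0 for fw in firewalls}   # live connections per firewall

    def open_connection(self, fw):
        """Record a new connection assigned to an active firewall."""
        assert fw in self.active, "new connections go only to active firewalls"
        self.connections[fw] += 1

    def close_connection(self, fw):
        """Record that an existing connection through fw has terminated."""
        self.connections[fw] -= 1

    def begin_drain(self, fw):
        """Start graceful shutdown: stop sending fw new connections,
        but let its existing connections finish naturally."""
        self.active.discard(fw)

    def safe_to_remove(self, fw):
        """fw can be taken offline for maintenance once it is draining
        and no connections remain on it."""
        return fw not in self.active and self.connections[fw] == 0
```

Once `safe_to_remove` returns true, the administrator can upgrade or replace the unit without disrupting any user sessions.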
Figure 6.2 shows a basic firewall load-balancing design, commonly referred to as a firewall sandwich. To perform firewall load balancing, we need to place a load balancer on both sides of the firewalls, because traffic may originate from either side. Whichever side the traffic originates from, it must first pass through a load balancer, which distributes it across the firewalls.
Figure 6.2: Basic firewall load-balancing design.
When dealing with firewalls that perform stateful inspection, we must meet two requirements when using multiple firewalls as a cluster for load balancing. First, all packets (requests and replies) for a given connection must be sent to the same firewall. Second, all related connections that share some context must be sent through the same firewall. For example, the data and control connections in protocols such as FTP or streaming media must be sent through the same firewall. When a firewall performs stateful inspection, it looks for the associated control connection before permitting the data connection. Both of these requirements, collectively, are known as sticky connections or session persistence.
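One common way to satisfy both persistence requirements is to choose the firewall by hashing the IP-address pair symmetrically: requests and replies for a connection carry the same two addresses (in opposite order), so a direction-independent hash sends both through the same firewall, and the load balancers on either side of the sandwich agree without sharing state. Because related connections such as FTP control and data run between the same pair of hosts, they land on the same firewall too. The sketch below is an illustrative example of this technique, not the specific algorithm of any product described in the book.

```python
import ipaddress

def pick_firewall(src_ip, dst_ip, firewalls):
    """Choose a firewall by a symmetric hash of the IP-address pair.

    The key is symmetric in (src, dst), so a request packet and its
    reply (with source and destination swapped) map to the same firewall.
    """
    a = int(ipaddress.ip_address(src_ip))
    b = int(ipaddress.ip_address(dst_ip))
    key = (a + b) ^ (a * b)          # unchanged if a and b are swapped
    return firewalls[key % len(firewalls)]
```

The price of a stateless hash is less even load distribution than a dynamic algorithm; stateful designs instead record each connection's firewall assignment in a session table, as sketched later in this section's traffic-flow discussion.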
Traffic-Flow Analysis
Figure 6.3 shows how traffic flows through the firewall load-balancing configuration. One key question we must first address is: how does a load balancer recognize the packets that should be load balanced across firewalls? In the case of server load balancing, all the servers are represented by a VIP configured on the load balancer. All the traffic with the VIP as the destination IP address is load balanced across the servers. Other traffic is simply switched at Layer 2 or Layer 3, depending on the specific load-balancing product and its configuration. In the case of firewall load balancing, the firewall is simply a transient device. If the destination IP address in the
packet is that of the firewall, this packet is intended to go to the firewall itself. For example, if we need to manage the firewall through a Telnet interface, the load balancer will receive the packets for the Telnet session with the destination IP address of a specific firewall. These packets must not be load balanced. Instead, they must be sent to the specific firewall, as indicated by the destination IP address. But all other packets, where the destination IP address is not that of a firewall, can be load balanced.
Figure 6.3: Traffic flows in firewall load balancing.
At a high level, the traffic going through the load balancer falls into two categories: traffic going toward the firewalls and traffic coming from the firewalls. Traffic coming from the firewalls is simply forwarded.
For a packet going toward the firewalls, the load balancer must first determine whether it is specifically destined to one of the firewalls. If so, the load balancer must forward it to the appropriate firewall based on the destination IP address. Otherwise, the load balancer takes one of two actions: apply load balancing or apply session persistence.
If the packet is the start of a new connection, such as TCP SYN packet, the load balancer must first check for any association with an existing connection, such as in the case of FTP. If there is an association, the load balancer must send the packet to the same firewall that has the related context. If there is no association to any existing connections, the load balancer chooses a firewall based on a load-distribution algorithm.
If the packet is a subsequent request or reply packet in an existing connection, the load balancer must send the packet to the same firewall that has the context for this connection.
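The decision steps above can be sketched as a single dispatch function: management traffic addressed to a firewall itself is forwarded directly, packets of known connections follow the session table, new connections with related context (such as an FTP data channel) follow their association, and everything else goes through the load-distribution algorithm. This is an illustrative sketch under assumed data structures; the names `sessions`, `associations`, and the least-loaded tiebreak are hypothetical, not taken from the book.

```python
def dispatch(packet, firewalls, sessions, associations):
    """Decide which firewall receives a packet heading toward the firewalls.

    firewalls:    list of dicts like {"ip": ..., "load": ...}
    sessions:     maps a connection 4-tuple -> firewall already handling it
    associations: maps a (src_ip, dst_ip) pair -> firewall holding related
                  context (e.g., the FTP control connection)
    """
    # 1. Packets addressed to a firewall itself (e.g., a Telnet management
    #    session) must never be load balanced.
    for fw in firewalls:
        if packet["dst_ip"] == fw["ip"]:
            return fw

    conn = (packet["src_ip"], packet["src_port"],
            packet["dst_ip"], packet["dst_port"])

    # 2. Subsequent packet of an existing connection: session persistence.
    if conn in sessions:
        return sessions[conn]

    # 3. New connection related to existing context goes to the firewall
    #    that already holds that context.
    assoc_key = (packet["src_ip"], packet["dst_ip"])
    if assoc_key in associations:
        sessions[conn] = associations[assoc_key]
        return sessions[conn]

    # 4. Genuinely new, unrelated connection: apply the load-distribution
    #    algorithm (least-loaded here, purely for illustration).
    fw = min(firewalls, key=lambda f: f["load"])
    sessions[conn] = fw
    return fw
```

In a real deployment the session and association tables are populated and aged out by the load balancer itself as it observes connection setup and teardown.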