Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2

If stateful load balancing is used, NAT on firewalls should not pose a problem. When the TCP SYN packet exits firewall 2 and reaches load balancer 1, the load balancer sees a session between firewall 2's NAT IP address and 208.100.20.100, creates a new entry in its session table, and maps it to firewall 2. Any reply packets coming back are therefore forwarded to firewall 2, maintaining session persistence.
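To make the session-table bookkeeping concrete, here is a minimal Python sketch (not from the book) of the kind of state a stateful load balancer keeps; the addresses, ports, and firewall names are hypothetical examples.

    # Illustrative sketch of a stateful session table for firewall load balancing.
    # The addresses and firewall identifiers are hypothetical examples.

    class SessionTable:
        def __init__(self):
            # Maps (src_ip, src_port, dst_ip, dst_port) -> firewall identifier
            self.sessions = {}

        def record_syn(self, src_ip, src_port, dst_ip, dst_port, firewall):
            # Called when a TCP SYN leaves a firewall toward the load balancer;
            # remember which firewall owns this connection.
            self.sessions[(src_ip, src_port, dst_ip, dst_port)] = firewall

        def forward_reply(self, src_ip, src_port, dst_ip, dst_port):
            # A reply packet travels in the opposite direction, so look up the
            # session with source and destination swapped.
            return self.sessions.get((dst_ip, dst_port, src_ip, src_port))

    table = SessionTable()
    # SYN seen from firewall 2's NAT address toward the server 208.100.20.100
    table.record_syn("192.168.1.2", 40000, "208.100.20.100", 80, firewall="firewall-2")
    # The reply from the server is forwarded back through the same firewall
    assert table.forward_reply("208.100.20.100", 80, "192.168.1.2", 40000) == "firewall-2"
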
Addressing High Availability
So far, we have only addressed tolerating the failure of a firewall. But with the introduction of load balancers, the load balancer itself can also become a point of failure. We can deploy two load balancers in active-standby or active-active mode on each side of the firewalls, for a total of four load balancers in a high-availability firewall load-balancing design. This drives up the total cost and also makes the design a bit more complicated. Before jumping to the high-availability firewall load-balancing design, one should first weigh the high-availability features of the load balancer itself against those of the firewall. For example, does the load balancer provide hot-swappable redundant power supplies, hot-swappable line cards, or redundant management modules? All of these features increase the reliability of the load balancer and reduce the outage time if a component fails. A load balancer typically provides a higher level of reliability than a server-based firewall, and this may be sufficient for some networks. But to tolerate a load-balancer failure, we can use two load balancers in place of one to improve availability.
Let's now look at how a high-availability (HA) design for firewall load balancing works. Figure 6.11 shows a design with two load balancers on each side of the firewalls. In many network deployments, there is an external router adjacent to the load balancer. While the load balancer may be capable of performing the routing function, it's not deployed as a router: load balancers generally cost more per port and don't provide the port density or the breadth of routing-protocol features that routers do. Further, most existing networks already have a router, so it makes sense to use the existing router and spend the incremental budget on load balancers. It also makes sense to use two routers for high availability; otherwise the router becomes a single point of failure. Many large enterprise networks built for high availability already have two routers that connect to two different service providers for simultaneous Internet connectivity.
Figure 6.11: High-availability design for firewall load balancing.
As shown in Figure 6.11, we deploy two routers adjacent to each pair of load balancers to eliminate the router as a single point of failure. The routers run a protocol such as the Virtual Router Redundancy Protocol (VRRP), in which each router can act as a backup for the other in case of a failure. VRRP, defined in RFC 2338, essentially provides functionality similar to Cisco Systems' proprietary Hot Standby Router Protocol (HSRP). When using VRRP, there is a VRRP IP address that's shared by both routers. One router acts as the master and the other acts as the backup for a given VRRP IP address. The devices around the routers point to the VRRP IP address as their default gateway or next-hop IP address. Because the VRRP IP address is shared across the two routers, any device that uses it as the next hop or the default gateway will fail over to the surviving router if one of the routers fails.
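As a rough illustration (not part of the book, and not the actual VRRP protocol machinery), the failover behavior VRRP provides can be sketched as a priority-based master election over a shared virtual IP address; the priorities and addresses below are hypothetical.

    # Illustrative sketch of VRRP-style failover: the live router with the
    # highest priority owns the shared virtual IP address.
    # Priorities and addresses are hypothetical examples.

    VIRTUAL_IP = "10.0.0.1"   # the VRRP IP address neighbors use as default gateway

    routers = [
        {"name": "router-1", "priority": 200, "alive": True},
        {"name": "router-2", "priority": 100, "alive": True},
    ]

    def elect_master(routers):
        # The master is the live router with the highest priority.
        live = [r for r in routers if r["alive"]]
        return max(live, key=lambda r: r["priority"]) if live else None

    print(elect_master(routers)["name"])   # router-1 is master for 10.0.0.1

    # If router 1 fails, router 2 takes over the virtual IP; hosts keep using
    # 10.0.0.1 as their default gateway and never notice the change.
    routers[0]["alive"] = False
    print(elect_master(routers)["name"])   # router-2
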
Active-Standby versus Active-Active
In high-availability designs, the load-balancer pair on each side of the firewalls may act in active-standby mode or active-active mode. In active-standby mode, the active load balancer executes all the logic for firewall load balancing, while the standby load balancer monitors the health of the active one and remains ready to take over in the event of a failure. The active and standby load balancers may also synchronize session information to provide stateful failover. The trick in active-standby designs is to ensure that the routers and firewalls send traffic to the active load balancer and switch over to the standby load balancer if the active unit fails. This can be accomplished by running VRRP or an equivalent protocol between the two load balancers, where the routers simply point to the shared VRRP IP address on the load balancers as the default gateway.
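As a hedged sketch (not from the book), stateful failover can be pictured as the active load balancer replicating every session-table update to its standby peer, so that the standby holds identical state when it takes over; the class and names below are hypothetical.

    # Illustrative sketch of stateful failover: the active load balancer mirrors
    # every session-table update to the standby unit, so an existing connection
    # survives a failover. Names are hypothetical.

    class LoadBalancer:
        def __init__(self, name):
            self.name = name
            self.sessions = {}      # connection tuple -> firewall
            self.peer = None        # the other load balancer in the pair

        def add_session(self, conn, firewall, replicate=True):
            self.sessions[conn] = firewall
            if replicate and self.peer is not None:
                # Mirror the update to the standby so it holds identical state.
                self.peer.add_session(conn, firewall, replicate=False)

    active = LoadBalancer("lb-active")
    standby = LoadBalancer("lb-standby")
    active.peer, standby.peer = standby, active

    conn = ("10.1.1.5", 33000, "208.100.20.100", 80)
    active.add_session(conn, "firewall-1")

    # If the active unit fails, the standby already knows which firewall
    # owns the connection and can keep forwarding its packets.
    assert standby.sessions[conn] == "firewall-1"
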
In active-active designs, both load balancers perform the firewall load-balancing function. In this scenario, both load balancers must perform load balancing while ensuring session persistence to the firewalls. Packets for the same connection may go through either load balancer, so it's essential for the two load balancers to stay synchronized with each other. With stateful load balancing, the load balancers can simply synchronize any updates to the session table with one another. With stateless load balancing, both load balancers must perform the hashing computations in the same manner to ensure consistent load distribution and session persistence.
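To illustrate the stateless case, here is a simplified sketch under assumptions (it is not a vendor's actual hashing algorithm): both load balancers apply the same deterministic hash of the packet's address pair to choose a firewall, so either unit independently picks the same firewall for a given connection, in both directions.

    # Illustrative sketch of stateless firewall selection: a deterministic hash
    # of the address pair picks a firewall. As long as both load balancers use
    # the same function and the same firewall list, they make identical choices,
    # preserving session persistence. Details are hypothetical.

    import zlib

    FIREWALLS = ["firewall-1", "firewall-2"]

    def pick_firewall(ip_a, ip_b):
        # Sort the address pair so packets of the same connection hash identically
        # in both directions (source and destination are swapped on the return path).
        key = "-".join(sorted([ip_a, ip_b])).encode()
        return FIREWALLS[zlib.crc32(key) % len(FIREWALLS)]

    # Either load balancer, given either direction of the same connection,
    # selects the same firewall.
    print(pick_firewall("10.1.1.5", "208.100.20.100"))
    print(pick_firewall("208.100.20.100", "10.1.1.5"))
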