Load Balancing Servers, Firewalls and Caches - Kopparapu C.

Kopparapu C. Load Balancing Servers, Firewalls and Caches - Wiley Computer Publishing, 2002. - 123 p.
ISBN 0-471-41550-2

Cache Definition
A cache stores frequently accessed Web content to improve response time and save network bandwidth. Figure 7.1 shows how a cache works. When the first user, user A, types the URL http://www.foundrynet.com/ into the browser, the cache receives the HTTP request. Since this is the first time the cache has seen a request for this page, it does not have the content. The cache retrieves the Web page from the origin Web server for foundrynet.com, keeps a copy in its local storage, such as memory or disk, and then replies to the user with the requested Web content. When user B tries to access the same Web page later on, the cache receives the request again, finds the content in its local storage, and replies to the user without having to go to the origin Web server. User B gets the response much more quickly than user A, and network bandwidth is saved because the cache does not have to cross the Internet to the origin server again.
Figure 7.1: How a cache works.
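To make the hit/miss flow in Figure 7.1 concrete, here is a minimal Python sketch. The handle_request() helper and the in-memory dictionary standing in for the cache’s local storage are illustrative assumptions, not code from any real cache product.

    import urllib.request

    local_storage = {}  # stands in for the cache's local storage (memory or disk)

    def fetch_from_origin(url):
        # Cache miss: retrieve the object from the origin Web server.
        with urllib.request.urlopen(url) as response:
            return response.read()

    def handle_request(url):
        if url in local_storage:            # cache hit: serve from local storage
            return local_storage[url]
        content = fetch_from_origin(url)    # cache miss: go to the origin server
        local_storage[url] = content        # keep a copy for subsequent users
        return content

    # User A's request misses and goes to the origin server; user B's
    # identical request later hits and is served from local storage.
    page_for_a = handle_request("http://www.foundrynet.com/")
    page_for_b = handle_request("http://www.foundrynet.com/")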
It’s important to keep in mind that each Web page actually consists of multiple objects. As part of the page contents, the Web server returns URLs to all embedded objects in the page. The browser then retrieves each object, and assembles and displays the complete page.
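Because each embedded object is a separate HTTP request, each one is a separate hit or miss at the cache. As a rough illustration (the tags handled and the sample markup are arbitrary, not from the book), this sketch extracts embedded-object URLs the way a browser discovers them before fetching each one:

    from html.parser import HTMLParser

    class EmbeddedObjects(HTMLParser):
        # Collect the URLs of objects embedded in a page.
        def __init__(self):
            super().__init__()
            self.urls = []

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag in ("img", "script") and "src" in attrs:
                self.urls.append(attrs["src"])
            elif tag == "link" and "href" in attrs:
                self.urls.append(attrs["href"])

    parser = EmbeddedObjects()
    parser.feed('<html><img src="/logo.gif"><script src="/app.js"></script></html>')
    print(parser.urls)  # ['/logo.gif', '/app.js'] -- each is fetched separately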
Since caches make requests to origin servers on behalf of end users, they are also called proxy caches or proxy servers. If a requested object is in the cache’s local storage, so that the cache can serve the object by itself, the request is called a cache hit. If the cache does not have the object, it is a cache miss, and the cache must go to the origin server to get the object. The cache-hit ratio is defined as the number of hits expressed as a percentage of the total requests received by the cache, and it indicates the efficiency of the cache: the higher the hit ratio, the more requests the cache serves by itself, improving user response time and saving network bandwidth.
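As a quick worked example (the request counts are invented for illustration), the hit ratio is simply hits divided by total requests:

    def cache_hit_ratio(hits, total_requests):
        # Hits as a percentage of all requests received by the cache.
        return 100.0 * hits / total_requests

    # A cache that served 750 of 1,000 requests from its local storage
    # has a cache-hit ratio of 75 percent.
    print(cache_hit_ratio(750, 1000))  # 75.0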
Cache Types
Using the same fundamental concept of storing frequently accessed content to serve subsequent requests, caches can be used either to accelerate users’ access to content or to improve the performance of the origin Web servers. The function of caches can therefore be broadly categorized into two types: client acceleration and server acceleration.
The value proposition of client acceleration is faster client response time and savings in network bandwidth. The value proposition of server acceleration is faster content delivery and a reduction in the number of Web servers needed. Server acceleration rests on the premise that a cache, as a purpose-built, dedicated device, is better suited to serve static content, whereas Web servers are general-purpose machines not specifically optimized for static-content delivery. Server acceleration offloads the job of serving static content from the Web servers, allowing them to focus on generating and serving dynamic content.
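As a hypothetical illustration of that split (the extension list is an assumption for the sketch, not a rule from the book), a server-accelerating cache might decide per request whether it can serve the object itself or must forward the request to the Web servers:

    STATIC_EXTENSIONS = (".html", ".gif", ".jpg", ".css", ".js")

    def served_by_cache(path):
        # Static objects are the cache's job; dynamic requests go
        # through to the general-purpose Web servers.
        return path.endswith(STATIC_EXTENSIONS)

    print(served_by_cache("/images/logo.gif"))   # True  -> served by the cache
    print(served_by_cache("/cgi-bin/checkout"))  # False -> forwarded to the server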
Cache Deployment
Using the same fundamental concepts of caching, caches can be deployed and utilized in four distinct ways:
• Forward proxy for client acceleration
• Transparent proxy for client acceleration
• Reverse proxy for server acceleration
• Transparent reverse proxy for server acceleration
As the names indicate, caches deployed as a forward proxy accelerate Internet access for clients, whereas caches deployed as a reverse proxy accelerate content delivery from origin servers. The term transparent proxy typically refers to caches deployed for client acceleration transparently, so that clients are not even aware a cache exists in the network. A transparent reverse proxy is a cache that works as a reverse proxy while remaining completely transparent to the servers.
Forward Proxy
Forward proxy involves deploying the cache explicitly as the proxy server for a group of end users. Each user’s browser must be configured to point to this proxy cache; the browser then directs all user requests to the proxy cache, which retrieves the content on behalf of the end user. Many enterprises use forward proxy deployments for client acceleration.

One drawback of deploying a cache as a forward proxy is that each browser must be configured to point to the proxy server. However, this can be automated by running a script when the user logs in to the enterprise network. Forward proxy deployment also enhances security: network administrators can permit only the proxy cache servers to access the Internet and deny direct Internet access to everyone else. Since all end users must go through the proxy server, each end user’s actual IP address is hidden, because the origin servers see the proxy cache as the end user.

Another challenge in deploying forward proxy caches is ensuring the scalability of the cache. You may purchase a cache that can handle 500 users, but have 4,000 users to serve in your network; you then need to deploy eight such caches and partition the load across them, as in the sketch below. Finally, because users are explicitly pointed to a cache, the cache itself becomes an availability bottleneck: if it goes down, its users lose Internet access.
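Here is a rough Python sketch of both ideas, explicit proxy configuration and partitioning users across several caches. The proxy host names and the hash-based assignment are assumptions made for illustration; the book does not prescribe a particular partitioning scheme.

    import hashlib
    import urllib.request

    # Eight hypothetical caches, each sized for roughly 500 users,
    # together covering 4,000 users.
    PROXY_CACHES = [f"http://proxy{i}.example.com:8080" for i in range(8)]

    def assign_cache(user_id):
        # Deterministically map each user to one of the proxy caches.
        digest = hashlib.md5(user_id.encode()).hexdigest()
        return PROXY_CACHES[int(digest, 16) % len(PROXY_CACHES)]

    def opener_for(user_id):
        # Build an opener that sends this user's requests through their
        # assigned cache -- the programmatic equivalent of configuring
        # the browser's proxy setting.
        proxy = assign_cache(user_id)
        handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
        return urllib.request.build_opener(handler)

    opener = opener_for("alice")
    # opener.open("http://www.foundrynet.com/") would now go through
    # alice's assigned cache; if that cache is down, the request fails,
    # which is exactly the availability bottleneck described above.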