Load balancers and application accelerators

Barracuda, Citrix, Coyote Point, F5, Kemp, and Zeus offerings stretch from no-frills appliances with basic load balancing to kitchen-sink solutions with rule-based traffic management, application security, and application performance optimizations; here's how to pick 'em.

Whether you call them application delivery controllers, application accelerators, application traffic managers, or just load balancers, the solutions for scaling out Web sites and improving the performance of Web applications have come a long way from their humble beginnings. The result is a great deal of choice in the marketplace, from bare-bones appliances with basic functionality to high-capacity switch-based systems that handle Web traffic in wildly sophisticated ways.

In its simplest form, a load balancer is a device that sends TCP/IP requests to more than one host, creating a cluster of servers that all present the same Web site. In fact, basic load balancing can be accomplished by adding multiple IP addresses to a host entry in the Domain Name System (DNS). However, doing so creates a blind round-robin scheme that sends each request to the next IP address in turn, whether or not a working server actually answers at that address, and regardless of whether that server is the one best suited to handle the request.
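The blind round-robin behavior described above can be sketched in a few lines of Python (the addresses are hypothetical examples from the RFC 5737 documentation range):

```python
from itertools import cycle

# Three A records behind one hostname; DNS round-robin simply
# rotates through them with no notion of server health or load.
addresses = cycle(["192.0.2.10", "192.0.2.11", "192.0.2.12"])

def next_address():
    """Return the next IP in turn -- even if nothing answers there."""
    return next(addresses)

print([next_address() for _ in range(4)])
# ['192.0.2.10', '192.0.2.11', '192.0.12', '192.0.2.10'] -- the rotation
# wraps around and will happily hand out a dead server's address.
```

Note that nothing in this loop knows whether a server is up, which is exactly the weakness that dedicated load balancers address.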

Read today's reviews of Kemp Technologies Load Master 1500 and Barracuda Load Balancer 340

Most advances in load-balancing technology have been aimed at ensuring that each request goes to the server in the group best able to handle it. Various algorithms decide which server gets the next request: the server loaded with the fewest connections, or the one responding fastest to a ping, for example. Vendors also offer proprietary algorithms and even agents that run on each server to provide more granular and accurate information on how heavily loaded each server really is.
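As a rough sketch of one such algorithm, here is least-connections selection in Python (the server list and its per-server connection counts are hypothetical; a real balancer tracks these in its data path):

```python
import random

def pick_server(servers):
    """Least-connections selection: choose the server currently
    handling the fewest active connections, breaking ties randomly."""
    fewest = min(s["active"] for s in servers)
    candidates = [s for s in servers if s["active"] == fewest]
    return random.choice(candidates)

servers = [
    {"name": "web1", "active": 12},
    {"name": "web2", "active": 3},
    {"name": "web3", "active": 7},
]
print(pick_server(servers)["name"])  # web2 -- it has the fewest connections
```

Swapping in a different metric, such as measured response time or an agent-reported load score, changes only the key used in the `min()` comparison.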

The process of determining whether servers are available has also grown more complex and precise: from a basic ping, to confirming that the server has a responsive network connection, to verifying that a particular service (be it an HTTP daemon or a back-end SQL server) is running and returning a proper response to a query.
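The difference between those check levels can be illustrated with two small Python probes (a minimal sketch; the `/healthz` path is an assumption, so substitute whatever endpoint your application actually exposes):

```python
import http.client
import socket

def tcp_check(host, port, timeout=2.0):
    """Layer-4 check: can we open a TCP connection at all?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def http_check(host, port, path="/healthz", timeout=2.0):
    """Layer-7 check: does the service answer this path with a
    2xx status? A server can pass the TCP check yet fail this one."""
    try:
        conn = http.client.HTTPConnection(host, port, timeout=timeout)
        conn.request("GET", path)
        status = conn.getresponse().status
        conn.close()
        return 200 <= status < 300
    except OSError:
        return False
```

Production health checks go further still, for example issuing a real query against a back-end database and validating the content of the reply, not just the status code.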

From the early systems that were built on PCs with two Ethernet cards, load balancers have evolved to include up to 24 switched Ethernet ports and custom ASICs running routing rules at gigabit wire speed. Other systems add protection for Web servers and other types of application servers, guarding against buffer overflows, denial of service, and other hacker attacks. Still others add the ability to route incoming traffic to specialized clusters of Web servers depending on the needs of the customer, so that e-commerce requests go to one cluster while video viewing is handled by another. Finally, a more recent trend is to add Web acceleration technologies, including HTTP compression, caching, and multiplexing hundreds of client TCP connections down to a handful of back-end connections.
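That kind of rule-based routing to specialized clusters boils down to matching requests against an ordered rule table. A minimal sketch in Python (the URL prefixes and cluster names are hypothetical):

```python
# Ordered rule table: first matching URL prefix wins.
RULES = [
    ("/checkout", "ecommerce-cluster"),
    ("/video",    "streaming-cluster"),
]

def route(path):
    """Return the cluster that should serve this request path,
    falling back to a general-purpose pool when no rule matches."""
    for prefix, cluster in RULES:
        if path.startswith(prefix):
            return cluster
    return "default-web-cluster"

print(route("/video/12345"))  # streaming-cluster
print(route("/about"))        # default-web-cluster
```

Commercial devices evaluate far richer conditions, such as headers, cookies, and client address, but the first-match-wins structure is the same.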

The bigger question is whether you want all of these capabilities in the same box as your load balancer, when you may very well already have another box that does the same thing. SSL termination is a common addition because it simplifies load balancing. WAN optimization is also a good fit, considering that functionality needs to be outside the firewall along with the load balancer, and the right kinds of optimization can really improve the user experience, which is one of the main goals of load balancers. Other features such as firewalls, anti-spam, and site-to-site acceleration are less obviously a good fit. These are things that might be better done with separate boxes.
