Services are a relatively new concept in WANs. Devices and configurations were traditionally what made up a WAN, with routers, switches, load balancers, firewalls, proxy servers and other components positioned at appropriate points in the network. Enterprises have long grown accustomed to using appliances, or "middleboxes," to perform a single function each, and the maintenance and management of these devices can be a real headache for IT teams.
Service chaining first emerged as a concept for carriers and other network operators. The basic premise was that services—firewalling, intrusion detection, carrier-grade NAT, deep packet inspection—could be deployed using generic compute and storage resources at strategic points within the network, and traffic would be programmatically directed to (and through) these services as required. The resulting path may not be the most efficient one from a network topology perspective, but that inefficiency is often outweighed by the gains from deploying these services at scale.
We are now starting to see service chaining emerge as a concept in enterprise networks. As with many newer terms, various vendors have adopted it to have a meaning that relates to their own product set and capabilities. This can lead to some confusion for the enterprise. However, there are some key principles—and benefits—that apply to service chaining in the enterprise WAN.
Resources inside the WAN can be used more dynamically
Many enterprises already backhaul internet egress to headquarters or data center sites due to the placement of large firewalls, IDS/IPS infrastructure and proxy servers. In traditional WANs it is challenging to make this a flexible policy. Typically the redirection is performed using complex PAC files or a default route in the network. But suppose the requirement is “send web traffic to the best egress point based on path performance, except Office 365, Salesforce and a local banking site, which should go directly to the Internet.”
This is horrendously complicated in a traditional WAN, but it can be extremely straightforward in many SD-WAN solutions. Resources such as regionalized Internet egress points can be defined as services, then policies can be created to “chain” these services into the traffic flow for specific matching application types. Real-time path performance can be used as a factor to determine which service should be used, and the enterprise can manually adjust the ordering if required. This can enable the use of policies that are more in line with what many enterprises now demand as a result of their application mix and traffic flows.
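To make the idea concrete, here is a minimal sketch of how a policy engine might express the requirement above. This is purely illustrative: the names (`Service`, `choose_egress`, the latency figures) are hypothetical and do not correspond to any vendor's actual API.

```python
# Illustrative sketch of SD-WAN policy-based egress selection.
# All names and values are hypothetical, not a real vendor API.
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    latency_ms: float  # real-time measured path performance

# Applications exempted from the chain: they go directly to the Internet
DIRECT_APPS = {"office365", "salesforce", "local-banking"}

def choose_egress(app: str, egress_services: list) -> str:
    """Return the egress service for a flow, or 'direct' for exempt apps."""
    if app in DIRECT_APPS:
        return "direct"
    # Otherwise "chain" in the best-performing regional egress point
    best = min(egress_services, key=lambda s: s.latency_ms)
    return best.name

services = [Service("egress-london", 18.0), Service("egress-frankfurt", 25.0)]
print(choose_egress("office365", services))    # direct
print(choose_egress("generic-web", services))  # egress-london
```

The point of the sketch is that the policy is data plus a selection rule, not a PAC file or a static default route, so it can be adjusted (or reordered) without re-engineering the network.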
Services outside the WAN can replace boxes inside the WAN
Using internal appliances as services is only half the story. For many enterprises, the real value comes from leveraging services outside the network to replace physical devices in data centers. A perfect example of this is the growth in recent years of public cloud-based services such as Zscaler and Cisco Cloud Web Security to provide internet content filtering and access control services that were previously possible only via on-premises solutions.
Why is this attractive to the enterprise? The reasons vary depending on the use case, but generally they include reduced cost, better performance and improved scalability.
Using cloud-based web security services like these is possible with traditional WANs, but it can be highly complex. Tunnels need to be configured and managed from each location, and various failover mechanisms need to be implemented and tested. Many enterprises give up on this due to complexity and revert to backhauling traffic to the data center before redirecting to the cloud-based service.
In a modern SD-WAN solution, the public cloud service can be defined just as easily as a private service, and policies established to determine which traffic should be directed to the service. Performance-based selection of the best service is usually possible, and failover is handled automatically. This is one of the real benefits of service chaining—matching traffic is automatically sent via the selected services before reaching their final destination.
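A hedged sketch of the selection-and-failover behavior described above might look like the following. The service names, health model and thresholds are assumptions for illustration only.

```python
# Illustrative sketch of performance-based selection of a cloud security
# service, with automatic failover. Names are hypothetical placeholders.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CloudService:
    name: str
    reachable: bool   # result of continuous tunnel/health probing
    latency_ms: float # measured path performance to the service

def select_service(candidates: list) -> Optional[str]:
    """Pick the lowest-latency reachable service; None means fall back
    to the backhaul path via the data center."""
    healthy = [s for s in candidates if s.reachable]
    if not healthy:
        return None  # automatic failover to the backup path
    return min(healthy, key=lambda s: s.latency_ms).name

chain = [CloudService("cloud-security-eu", False, 12.0),
         CloudService("cloud-security-us", True, 85.0)]
print(select_service(chain))  # cloud-security-us
```

In a traditional WAN, each branch would need hand-built tunnels and tested failover scripts to approximate this; in an SD-WAN, the controller evaluates something like this rule continuously for every matching flow.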
As a result, we are seeing a big increase in interest in such public cloud services as SD-WAN adoption grows. The automation and intelligence in the SD-WAN layer are acting as an enabler for more advanced solutions through service chaining.
Virtualizing locally provided services—less box chaining
Finally, there are services that are very difficult to take out of an individual site without significantly compromising performance. Complex application-level firewalls are a good example, as are WAN optimization services. These services have traditionally existed as appliances stacked next to routers and other network devices at remote sites.
Most of the first SD-WAN solutions were, counterintuitively, very dependent on hardware—most vendors used proprietary hardware to act as router replacements in the absence of generic infrastructure.
However, we are seeing an increase in enterprise deployments of SD-WAN in truly virtualized environments, and there are many benefits to be gained from virtualizing the entire stack of branch office network appliances. This is another variant of service chaining: Network Function Virtualization (NFV) allows virtual network edge topologies to be built that chain together services previously delivered as separate hardware. Enterprises operating in highly remote geographies are especially interested in this trend due to the challenges and cost of getting proprietary hardware certified and cleared through customs.
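The essence of this NFV-style chaining can be sketched as an ordered list of virtual functions applied to traffic in sequence. The function names below are hypothetical stand-ins for the appliances an enterprise might virtualize.

```python
# Illustrative sketch: a service chain as an ordered list of virtual
# network functions (VNFs) replacing a stack of physical appliances.
# Function names are hypothetical placeholders.

def firewall(packet: dict) -> dict:
    packet.setdefault("tags", []).append("firewall")
    return packet

def wan_optimizer(packet: dict) -> dict:
    packet.setdefault("tags", []).append("wan-opt")
    return packet

# The chain is just data: reordering this list re-wires the virtual
# branch topology without racking or shipping any hardware.
SERVICE_CHAIN = [firewall, wan_optimizer]

def apply_chain(packet: dict) -> dict:
    for vnf in SERVICE_CHAIN:
        packet = vnf(packet)
    return packet

print(apply_chain({"src": "branch-1"})["tags"])  # ['firewall', 'wan-opt']
```

Because the chain is configuration rather than cabling, adding or removing a branch service becomes a software change, which is exactly the appeal for sites where shipping hardware is slow or expensive.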
We expect to see this trend continue as enterprises become more comfortable with software-based network edge appliances, and to see the number of physical boxes “chained” together in the WAN continue to decrease. The next 12-18 months should see even more interesting developments in the enterprise WAN topology.