This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
Software-defined networking (SDN) and network functions virtualization (NFV) promise numerous benefits, but adding layers of network abstraction comes at a cost: lost visibility into the traffic traversing the links at the physical layer.
The migration to ever-faster networks is compounding this challenge because virtually no network monitoring, management or security tool today is capable of operating at 40Gbps or 100Gbps. Network packet brokers (NPBs), also known as network visibility controllers, address this challenge by capturing, filtering, aggregating and optimizing traffic. This enables 1Gbps and 10Gbps performance management and security systems to operate in 40/100Gbps networks.
But can these physical NPBs work effectively in virtual network infrastructures? And can NPB functionality itself become virtualized for operation with a "white box" switch in a software-defined network?
The answer to the first question depends on the requirements of the tools and protocol(s) used to create the virtual network traffic flows. Many network and application performance management and security tools require isolation of individual flows or sessions from the aggregate traffic on the physical links.
To accommodate this need, NPBs have long supported a variety of network segmentation, encapsulation and tunneling protocols, such as Virtual LANs (VLANs), Multi-Protocol Label Switching (MPLS) and the GPRS Tunneling Protocol (GTP).
But more important than any single protocol (especially with the ecosystem of protocols growing constantly) is the ability to inspect and identify packets from Layers 2 through 7, combined with the flexibility to configure and program the NPB to strip and slice packets to optimize monitoring applications, and to support emerging protocols, such as VXLAN.
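As a concrete illustration of the kind of inspection involved, the sketch below extracts the tenant identifier from a VXLAN header, which is how an NPB (or any packet-aware filter) could isolate one overlay network's traffic from the aggregate. This is a minimal Python sketch written for this article; the function name and error handling are ours, not any vendor's implementation.

```python
VXLAN_UDP_PORT = 4789  # IANA-assigned destination port for VXLAN

def parse_vxlan_header(payload: bytes) -> int:
    """Return the 24-bit VXLAN Network Identifier (VNI) from the 8-byte
    VXLAN header that follows the outer UDP header (RFC 7348).

    Header layout: flags (1 byte), reserved (3), VNI (3), reserved (1).
    """
    if len(payload) < 8:
        raise ValueError("truncated VXLAN header")
    if not payload[0] & 0x08:          # the 'I' bit: VNI field is valid
        raise ValueError("VNI-valid flag not set")
    return int.from_bytes(payload[4:7], "big")
```

In practice an NPB performs this kind of match in hardware at line rate; the point is that identifying the VNI, rather than supporting any one protocol per se, is what lets the visibility layer steer a single tenant's flows to the right monitoring tool.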
NPBs must also support other methods being used to provide visibility into traffic flows in virtualized infrastructures. For example, both Cisco and VMware virtual switches make available application programming interfaces (APIs) for the virtual mirror or SPAN ports. The vSwitch in the host server directs traffic from the virtual SPAN port to the physical monitoring infrastructure, thereby providing both physical and virtual network visibility in a single, high-performance, out-of-band visibility plane or monitoring fabric.
This configuration has the advantage of not requiring a virtual machine devoted to network monitoring, so it consumes no precious host resources, and the fully agentless approach places no additional load on the hypervisor. The disadvantage is that packet time-stamps can become inaccurate when the vSwitch is overloaded, although the virtual SPAN port imposes far less load than running the vSwitch in promiscuous mode. Despite this limitation, virtual SPAN capability does provide visibility into traffic among virtual applications that never leaves the vSwitch.
The use of a separate physical element, such as a management NIC, to direct traffic from the virtual SPAN port avoids two additional problems with software-only approaches: full visibility cannot be assured based on the use of best-effort delivery to forward copied traffic over the same ports used for network traffic; and the use of generic routing encapsulation (GRE) to isolate the copied traffic results in the fragmentation of jumbo packets based on the need to comply with the network's maximum transmission unit (MTU).
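To make the fragmentation cost concrete, the sketch below estimates how many IPv4 fragments a single GRE-encapsulated jumbo packet needs when the copied traffic crosses a standard 1500-byte-MTU path. The header sizes assume a basic GRE header (RFC 2784) and an option-free outer IPv4 header; the function name is ours.

```python
import math

IPV4_HEADER = 20   # outer IPv4 header, no options
GRE_HEADER = 4     # basic GRE header (RFC 2784), no optional fields

def gre_fragment_count(inner_packet_len: int, path_mtu: int = 1500) -> int:
    """Number of IPv4 fragments needed to carry one GRE-encapsulated
    packet across a path with the given MTU (assumes fragmentation is
    permitted, i.e. the DF bit is clear)."""
    gre_payload = inner_packet_len + GRE_HEADER       # inner packet + GRE
    per_fragment = (path_mtu - IPV4_HEADER) // 8 * 8  # offsets use 8-byte units
    return math.ceil(gre_payload / per_fragment)
```

For a 9000-byte inner packet this yields seven fragments on a 1500-byte-MTU path, versus a single frame on a dedicated monitoring link, which is why a separate physical element for mirrored traffic avoids both the reassembly burden on the tools and the best-effort delivery problem.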
Because virtualized networks exist as an adjunct to or overlay on a physical network, it is also important to provide physical Link Layer visibility, especially for performance management and security tools. NPBs handle these needs well today, and the same requirement will persist under both SDN and NFV.
NPB as a Virtual Network Function
As to the second question about somehow virtualizing NPB functionality itself, the Open Networking Foundation (ONF) announced a Sample Tap application in March 2014, and OpenFlow version 1.4 already includes a use case for configuring switches with an NPB-like functionality.
The ONF acknowledges that its Sample Tap is not meant to function in a production network, but is instead intended as a teaching tool to help programmers gain experience with OpenFlow and OpenDaylight. What is being virtualized in this exercise is the NPB's configuration and control, while the actual real-time, line-rate copying and forwarding of traffic could be handled either by a purpose-built NPB or by a stripped-down NPB running on "white box" hardware.
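The division of labor described above can be sketched as a toy flow table: the controller (the "virtualized" part) installs a rule whose action list both forwards a packet normally and copies it to a monitor port, which is precisely the NPB-like tap behavior. The class and rule names below are ours for illustration; a real deployment would express the same rule as an OpenFlow FlowMod carrying two output actions.

```python
from dataclasses import dataclass

@dataclass
class TapRule:
    """One OpenFlow-style rule: a match plus an action list. Emitting a
    packet to a monitor port in addition to its normal output port is
    what gives the switch its tap behavior."""
    match: dict     # e.g. {"tcp_dst": 80}
    actions: list   # e.g. ["output:1", "output:MONITOR"]

class TapSwitch:
    """Toy flow table: the controller installs rules; the data plane
    applies the first matching rule to each packet."""
    def __init__(self):
        self.flow_table = []

    def install(self, rule: TapRule):         # controller -> switch
        self.flow_table.append(rule)

    def forward(self, packet: dict) -> list:  # returns the action list
        for rule in self.flow_table:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.actions
        return ["output:NORMAL"]              # default forwarding
```

Matching and control fit naturally in software, but note what the sketch omits: the copying itself must still happen at line rate, which is the part left to purpose-built or white-box hardware.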
Vendors and some users, especially at carriers and large enterprises, need to consider whether building their own monitoring systems is really more cost-effective. Implementing a sample application is a far cry from developing a matrix switch or adopting a commodity switch platform for use in production networks--especially those operating at 40Gbps or 100Gbps.
For now, switch-based monitoring systems have limitations and require significant operational changes to deploy. They still rely on NPBs for precise time-stamping and advanced functionality, such as flow-aware load balancing and optimization -- features essential to enabling 1/10Gbps tools to function effectively in 40/100Gbps networks. Even as switch performance improves, matrix switches will continue to run on separate hardware to keep all monitoring traffic out-of-band from the production network.
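Flow-aware load balancing of the kind mentioned above can be approximated in a few lines: hash a direction-normalized 5-tuple so that every packet of a session, in both directions, reaches the same lower-speed tool. This is an illustrative sketch under our own naming; real NPBs perform the equivalent in hardware at line rate.

```python
import hashlib

def flow_aware_tool(src_ip: str, src_port: int,
                    dst_ip: str, dst_port: int,
                    proto: str, n_tools: int) -> int:
    """Pick a tool port for a packet such that all packets of a flow,
    in both directions, land on the same tool. Sorting the endpoint
    pairs makes the hash key direction-independent."""
    a, b = sorted([(src_ip, src_port), (dst_ip, dst_port)])
    key = f"{a}|{b}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % n_tools
```

Keeping both directions of a session on one tool is what lets a bank of 10Gbps analyzers collectively monitor a 40Gbps or 100Gbps link without splitting sessions across devices.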
Organizations might capture more traffic directly from virtual hosts by using a two-tier deployment, where a basic virtual visibility tier aggregates ports and forwards traffic to an advanced tier that provides more in-depth visibility, sophisticated traffic grooming and/or higher performance.
At the same time, both switch-based monitoring systems and purpose-built NPBs will become increasingly open: beyond working with SDN protocols and network virtualization labels, they will expose their own APIs. Despite the popular belief to the contrary, tiering of this kind favors purpose-built NPBs on cost, because they require neither additional software development staff nor the abandonment of existing network architectures in favor of something new, but not necessarily more efficient.
In conclusion, network solutions need to accommodate the migration to SDN and NFV as a means to lower capital and operational expenditures. During the migration, new approaches must drive performance and cost-effectiveness without shifting those expenditures to software development teams, without requiring costly new operating models, and without deploying new hardware and software that merely duplicates a portion of the visibility plane.
The visibility plane created by network packet brokers is now part of a new, more efficient operating model that must itself accommodate network virtualization just as much as the production network does. For this reason, NPBs should become an integral aspect of SDN and NFV planning, where the visibility plane leverages APIs in the virtualized infrastructure to deliver cost-effective solutions for base use cases, while simultaneously facilitating the use of hardware acceleration for advanced functionality, security and high-performance deployments.
Harding is vice president of products at VSS Monitoring, where he leads product management and marketing. He has been working in high technology for nearly two decades, having held leadership positions at Aruba Networks, Cisco Systems and Juniper Networks.