Enterprises are increasingly virtualizing IT infrastructure by migrating storage, application, and database servers into cloud and hosted datacenters. As they do so, they need to partner with ISPs and service providers to establish reliable, performance-assured, bandwidth-optimized connections between each enterprise and data center location.
In many cases, enterprises aren't getting what they paid for - often not even close. It's common to measure actual throughput at 25% of purchased link capacity. Bandwidth Performance Optimization (BPO) techniques originally developed for latency-sensitive applications like financial trading networks, 4G mobile backhaul assurance and network-to-network interconnect can recover the missing 75% for less than the price of a basic server. Too good to be true?
Bandwidth profiles used for large enterprise connections are normally based on the Metro Ethernet Forum (MEF) service definition. These services conform to a bandwidth profile with a Committed Information Rate (CIR, guaranteed bandwidth) and an Excess Information Rate (EIR, best-effort bandwidth).
These bandwidth 'envelopes' are policed at the service edge using regulators: any traffic exceeding the predetermined thresholds is dropped, resulting in random packet discard that makes no distinction between low- and high-priority traffic. This "crush the edge" technique is effective at preventing bursts of client traffic from entering the provider's network, and is easy to implement.
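To make the "crush the edge" behavior concrete, here is a minimal Python sketch of a single-rate token-bucket policer enforcing a CIR. The class and parameter names are illustrative, not a real implementation; real MEF policers also track an excess (EIR) bucket and color-mark traffic.

```python
# Illustrative sketch of CIR policing: a token bucket refills at the CIR,
# and packets that find insufficient tokens are dropped outright,
# regardless of priority. All names and numbers are hypothetical.

class CirPolicer:
    def __init__(self, cir_bps, burst_bytes):
        self.rate = cir_bps / 8.0      # refill rate in bytes per second
        self.capacity = burst_bytes    # committed burst size (CBS)
        self.tokens = burst_bytes
        self.last = 0.0

    def offer(self, now, pkt_bytes):
        """Return True if the packet conforms (forwarded), False if dropped."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if pkt_bytes <= self.tokens:
            self.tokens -= pkt_bytes
            return True
        return False

# A 10-packet micro-burst arriving at once against a small bucket:
# the first packets pass, the tail of the burst is discarded blindly.
policer = CirPolicer(cir_bps=10_000_000, burst_bytes=3_000)  # 10 Mbps CIR
results = [policer.offer(now=0.0, pkt_bytes=1_500) for _ in range(10)]
print(results.count(False))  # → 8 packets of the burst dropped
```

The point of the sketch is the failure mode: the bucket has no view of packet priority, so whatever happens to arrive after the tokens run out is lost.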
Open the Window
Traffic is predominantly transmitted using TCP, a protocol that requires every frame to be accounted for, with a receipt acknowledgement confirming transmission success. However, if the sender waited for each individual packet to be acknowledged before sending the next, throughput would be greatly impacted, especially over wide-area connections.
TCP handles this problem with transmission 'windows': a collection of frames sent together with the expectation that they will all arrive without loss. The window size adapts to the success of previous windows. If a packet is lost within a window, all packets after the lost one are retransmitted, and the window size is reduced by roughly half. When windows are received successfully, the window length increases slowly at first, then more rapidly with continued error-free transmission.
If packets are regularly lost, the window length will never increase to the size required to achieve full link utilization. The mismatch between port (media) speed and the CIR of a link ensures that this issue is ubiquitous, and means usable throughput is reduced by up to 75% in most cases.
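A widely cited rule of thumb, the Mathis et al. (1997) steady-state TCP throughput model, quantifies this ceiling: achievable throughput is roughly proportional to MSS / (RTT x sqrt(loss rate)). The numbers below are illustrative, not measurements from the tests described in this article.

```python
import math

# Mathis et al. steady-state TCP throughput bound:
#   throughput ≈ (MSS * sqrt(3/2)) / (RTT * sqrt(p))
# Plugging in illustrative numbers shows why even light policing loss
# caps a single flow far below the purchased CIR.

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    return (mss_bytes * 8 * math.sqrt(1.5)) / (rtt_s * math.sqrt(loss_rate))

# 1460-byte segments over a hypothetical 40 ms metro path with
# 0.5% loss from edge policing:
bps = mathis_throughput_bps(1460, 0.040, 0.005)
print(f"{bps / 1e6:.1f} Mbps")  # a few Mbps -- well under a 100 Mbps CIR
```

Under these assumptions a single flow tops out around 5 Mbps on a link sold at 100 Mbps, consistent with the ~25% utilization figures quoted earlier.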
Bring Your Own Optimization
WAN optimization (WAN-Op) methods are largely ineffective in these applications. Techniques that rely on compression and caching don't work without a far-end appliance to 'unwrap' the traffic - and in this case the far end is a data center owned by someone else.
Standard traffic shaping is unable to effectively smooth out most bursts, as many occur on a millisecond time-scale (micro-bursts), and the granularity of most shapers is not sufficient to process traffic at this speed. However, an emerging alternative called micro-shaping - optimizing bandwidth on a per-packet basis - is now able to cost-effectively groom micro-bursts in a lossless manner.
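The contrast between a policer and a lossless shaper can be sketched as follows. A micro-shaper applies the same pacing logic per packet at sub-millisecond granularity; this simplified Python model just shows why shaping absorbs a burst that policing would partially discard. All names and figures are illustrative.

```python
# Illustrative lossless shaper: instead of dropping excess packets,
# queue them and pace departures at the CIR. A micro-shaper performs
# this per packet at very fine time granularity; this sketch only
# models the pacing arithmetic.

def shape(burst_arrivals_s, pkt_bytes, cir_bps):
    """Return the paced departure time of each packet; nothing is dropped."""
    serialization = pkt_bytes * 8 / cir_bps  # seconds per packet at the CIR
    departures, next_free = [], 0.0
    for t in burst_arrivals_s:
        depart = max(t, next_free)           # wait until the link slot is free
        departures.append(depart)
        next_free = depart + serialization
    return departures

# Ten back-to-back 1500-byte packets arriving at t=0 on a 10 Mbps CIR:
out = shape([0.0] * 10, 1_500, 10_000_000)
print(len(out), f"{out[-1] * 1e3:.1f} ms")  # all 10 forwarded, last after 10.8 ms
```

The same ten-packet burst that lost eight packets to a policer is delivered in full, spread over about 11 ms - invisible to the application, but well within the provider's bandwidth envelope.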
Micro-shaping is a single-ended technique that combines hierarchical QoS (H-QoS) mapping with packet processing at one packet per microsecond (1 ppm) or faster, normally performed in a small appliance built around a programmable Field Programmable Gate Array (FPGA) instead of merchant silicon.
On a Gigabit Ethernet link, 1 ppm means the processor is running 5x faster than the rate at which packets are received. This processing speed is 1,000x faster than millisecond-length micro-bursts, allowing lower-priority packets to be precisely interleaved into flows where instantaneous capacity is not fully used by higher-priority streams. The result is best-possible bandwidth capacity utilization (fill) without the packet discard associated with more 'lumpy', coarse shaping techniques. H-QoS flow prioritization ensures no latency is added to priority traffic.
Standardized by the MEF 10.3 specification, H-QoS is a highly efficient approach to per-flow prioritization and queuing: a bandwidth 'envelope' is intelligently shared among all flow priorities. CIR is consumed hierarchically - any CIR left unused by higher-priority flows is passed to the next lower-priority flow, and so on, until all flows have maximized use of the total service CIR. Any CIR remaining in the envelope is added to the available EIR, and the same process repeats.
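The hierarchical sharing just described can be worked through with a short Python example. Flow demands, rates, and the function name are illustrative; a real H-QoS scheduler operates continuously per packet rather than in a single allocation pass.

```python
# Sketch of hierarchical CIR sharing: each priority draws from the
# service CIR in turn, whatever it leaves unused cascades downward,
# and any CIR left after all flows joins the EIR pool.
# Demands and rates are illustrative (Mbps).

def hqos_allocate(service_cir, service_eir, demands):
    """demands: per-flow demands in Mbps, ordered highest priority first."""
    grants, remaining = [], service_cir
    for demand in demands:
        grant = min(demand, remaining)   # this priority takes what it needs
        grants.append(grant)
        remaining -= grant               # unused CIR passes downward
    eir_pool = service_eir + remaining   # leftover CIR tops up the EIR
    return grants, eir_pool

# 100 Mbps CIR and 50 Mbps EIR shared by three priorities
# demanding 30, 40, and 50 Mbps respectively:
grants, eir_pool = hqos_allocate(100, 50, [30, 40, 50])
print(grants, eir_pool)  # → [30, 40, 30] 50
```

Here the two higher priorities are fully served from the CIR, the lowest priority receives the 30 Mbps of CIR they left behind, and its remaining 20 Mbps of demand contends for the EIR pool.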
Reap the Bandwidth
Micro-shaping's combination of H-QoS priority queuing and ultra-fast packet processing is potent.
In a simple test, Speedtest.net was used to measure uplink bandwidth performance over 15 Mbps and 30 Mbps ISP connections. Micro-shaped uplink traffic reaches full link capacity, while unconditioned traffic uses only a fraction of the available bandwidth. Normally, the customer is getting only 25% of what they are paying for. Micro-shaping increased typical throughput by nearly 600%.
In controlled tests with precise and accurate instruments, micro-shaping's effect on bandwidth performance optimization is even more dramatic. An improvement of up to 800% can be gained when applied to TCP traffic flows, which have accounted for over 98% of Internet traffic since 2002 (Source: DongJin Lee, Brian E. Carpenter, Nevil Brownlee, 2011).
Too Easy? Finally, Something Simple!
The best solutions to many problems are the simplest. Compared to WAN-optimization techniques that require expensive appliances or are subject to performance variation when virtualized, affordable micro-shaping-capable hardware can optimize bandwidth performance without variation or setup complexity.
Properly implemented, micro-shaping-based Bandwidth Performance Optimization can significantly improve throughput in a wide variety of applications over regional, national and international networks. This technique has the highest impact where bandwidth is expensive, cannot be easily increased, or where uplink performance is critical to application responsiveness or overall QoS.
This simple method benefits the provider as well as the enterprise, with smoother traffic entering the operator's network and full purchased capacity delivered to the customer. When implemented properly, it's a win-win situation with clear results everyone can easily see in the resulting service performance.
Accedian Networks is the Performance Assurance Solution Specialist for mobile backhaul and small cells, business services and service provider SDN. Open, multi-vendor interoperable and programmable solutions go beyond standards-based performance assurance to deliver Network State+, the most complete view of network health. Automated service activation testing and real-time performance monitoring feature unrivaled precision and granularity, while H-QoS traffic conditioning optimizes service performance. Since 2005, Accedian has delivered 200,000+ globally installed platforms, including 100,000+ assured cell sites.
Scott Sumner is VP of Solution Development and Marketing at Accedian Networks.