This vendor-written tech primer has been edited by Network World to eliminate product promotion, but readers should note it will likely favor the submitter's approach.
The majority of data center LAN links use 1000Base-T (Gigabit Ethernet) running on unshielded twisted pair structured cabling (Cat5e, Cat6, Cat6a). While increasing demand for bandwidth is driving a shift to 10G Ethernet, that migration is being slowed by the fact that the 10G links being deployed today are based on optical transceivers or SFP+ Direct Attach copper, neither of which is backward-compatible to Gigabit Ethernet.
Rapid technology advances are reducing the price and power consumption of twisted-pair-based 10GBase-T, and a rapid shift from Gigabit Ethernet to 10GBase-T now appears inevitable.
While 10GBase-T is similar to Gigabit Ethernet -- both use RJ45 connectors, a structured cabling deployment model with twisted-pair wiring and 100-meter maximum spans, and 802.3 Clause 28-based auto-negotiation for backward compatibility -- it also comes with some unique challenges due to the 10x higher data rate.
Providing the 10x rate required higher symbol rates, more bits per symbol and a higher-performance, higher-complexity forward error correction scheme in the PHY chips. The added complexity led to high power consumption in the first generation of 10GBase-T chips.
This has been a barrier to adoption, but a combination of design innovation and semiconductor process advances has resulted in 10GBase-T chips that now operate at lower power per bit than the most efficient Gigabit Ethernet chips.
However, the higher symbol rate and the increased number of signaling levels required by 10GBase-T also increase potential vulnerability to electromagnetic (EM) interference, or EMI, from external fields. This article explains why and also discusses techniques to handle the increased vulnerability to EMI.
10GBase-T uses four-pair balanced cabling, so the 10x rate increase comes from two factors: an increase in the symbol rate to 800MSymbols/sec (from 125MSymbols/sec in Gigabit Ethernet, a factor of 6.4) and an increase in the payload to 3.125 bits/symbol (from 2 bits/symbol in Gigabit Ethernet, a factor of 1.5625).
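As a quick sanity check, the two factors can be multiplied out (a sketch in Python; 3.125 payload bits per symbol is the figure that makes the factors multiply to exactly 10x):

```python
# Back-of-the-envelope check of the 10x rate increase.
# Both PHYs transmit on all four pairs simultaneously.
PAIRS = 4

gige_rate = 125e6 * 2.0 * PAIRS     # 125 MSym/s x 2 bits/symbol   = 1 Gb/s
teng_rate = 800e6 * 3.125 * PAIRS   # 800 MSym/s x 3.125 bits/sym = 10 Gb/s

print(teng_rate / gige_rate)  # -> 10.0
```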
The 6.4x increase in symbol rate causes the 10GBase-T transmit signal spectrum to extend beyond 400MHz, so the receiver must provide an input bandwidth of close to 500MHz, compared with about 75MHz for a Gigabit Ethernet receiver.
This wider receiver bandwidth widens the window of vulnerability to external interference. A Gigabit Ethernet receiver can easily filter out interfering signals at frequencies above 75MHz, but a 10GBase-T receiver cannot: a filter cutting off above 75MHz would remove a significant portion of the desired 10GBase-T signal.
In a Base-T link, external fields create a common-mode signal on the twisted-pair wires. Most of that signal is attenuated by the choke and transformer built into every Base-T Ethernet port. However, some of it is converted to a differential signal by imbalances between the two conductors of each differential pair in the cabling, as well as in the connectors and magnetic components in the signal path.
Common-to-differential conversion in components tends to increase with frequency so the higher frequency range of vulnerability of the 10GBase-T receiver imposes more stringent balance requirements on all components in the signal path, such as cables, connectors, magnetics and printed circuit boards.
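The scale of this conversion can be sketched numerically. Component balance is commonly specified as transverse conversion loss (TCL) in dB; the TCL figures below are hypothetical, chosen only to illustrate the effect:

```python
def differential_pickup_mv(common_mode_mv, tcl_db):
    """Differential voltage produced from a common-mode voltage by a
    link whose balance is characterized by a TCL figure in dB."""
    return common_mode_mv * 10 ** (-tcl_db / 20)

# 1 V of common-mode pickup through a well-balanced link (40 dB TCL,
# a hypothetical value) leaves roughly 10 mV of differential noise:
print(differential_pickup_mv(1000.0, 40.0))
# The same pickup through a poorly balanced link (20 dB TCL) leaves
# roughly 100 mV -- far above the error threshold discussed below:
print(differential_pickup_mv(1000.0, 20.0))
```

Since balance typically degrades at higher frequencies, the wider 10GBase-T band makes these conversion losses harder to maintain.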
The increase in bits per symbol requires an increase in the number of signaling levels. As the peak transmit voltages are roughly the same between Gigabit Ethernet and 10GBase-T and the link attenuation is higher at the higher frequencies used by 10GBase-T, the minimum signal level spacing is lower in the 10GBase-T receiver. The minimum spacing between received levels in a 100-meter link is about 5mV (the equivalent number for Gigabit Ethernet is about 80mV).
An external EM interfering signal that couples to the cable common mode and gets converted to a differential signal with amplitude greater than 3mV will cause errors on a 100-meter 10GBase-T link.
Are the EM fields that can generate this level of interference common? The CISPR-24 standard (defined by the Comité International Spécial des Perturbations Radioélectriques) calls for testing with field strengths of 3V/m and GR-1089 (Bellcore) calls for testing with field strengths of 8.5V/m. Measurements using Cat5 and Cat6 unshielded twisted pair cabling in 8.5V/m EM fields indicate that differential pickup can easily reach 60mV, thereby exceeding the voltage margin at the receiver of a 100-meter 10GBase-T link. So what are the solutions?
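Those measurements also imply how much interference suppression a UTP link would need; a rough calculation using the figures above:

```python
import math

pickup_mv = 60.0   # differential pickup measured on UTP in an 8.5 V/m field
margin_mv = 3.0    # interference amplitude that causes errors at 100 meters

excess = pickup_mv / margin_mv          # 20x over the error threshold
needed_db = 20 * math.log10(excess)     # suppression required, in dB
print(round(needed_db, 1))              # -> 26.0
```

So on the order of 26 dB of combined shielding, balance and cancellation is needed to bring worst-case pickup below the error threshold.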
Improving immunity to EMI
Any discussion on improving EMI immunity must consider shielding. Shielding the cable reduces the EMI signal that reaches the receiver, and we have seen that using screened Cat6 can lower the differential pickup by a factor of ten.
Dual shielding, as in Cat7 (with an individual shield around each twisted pair and another shield around the whole cable), can provide an even greater reduction in differential pickup. Screening/shielding provide a clear benefit to EMI immunity and several cabling manufacturers are recommending the use of screened or shielded cable for 10GBase-T.
For shielding to be effective, every component in the link (patch cord, connectors, patch panels and horizontal link segment) has to be shielded, and the shields of each component have to be tied together completely -- not just connected at a single point. Maintaining the integrity of the shield is not easy, and this difficulty, along with cost, has restricted screened/shielded cabling to a very small percentage of data center links in use.
How can EMI immunity be improved in the absence of adequate shielding or with the continued use of UTP cable? A basic step is to keep the differential signal lines (and any components in the path) well balanced so that common mode to differential conversion is minimized. Beyond that, we have to rely on signal processing techniques to adaptively identify and remove the interfering signal.
Adaptive interference cancellation for handling external EMI requires three key steps: detecting the interferer, identifying it and then removing it.
Detection can be based on analysis of the differential signal available at each receiver. This detection must operate in the presence of the desired signal being sent from the transmitter on the other side, which requires sophisticated signal processing. A Fourier transform applied to the received differential signal can reveal an interferer that would otherwise go undetected because its voltage levels are lower than those of the transmitter signal.
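A minimal illustration of spectrum-based detection (pure Python, with a naive DFT standing in for the dedicated hardware a real PHY would use; the signal model and interferer frequency are invented for the example):

```python
import cmath, math, random

def dft_magnitude(x):
    """Naive DFT magnitude spectrum (O(N^2); illustration only)."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N)))
            for k in range(N // 2)]

random.seed(0)
N = 256
# Stand-in for the broadband data signal: random multi-level symbols.
data = [random.choice([-1.0, -0.5, 0.5, 1.0]) for _ in range(N)]
# Narrowband interferer, weaker than the data levels (amplitude 0.5).
tone_bin = 40
rx = [d + 0.5 * math.sin(2 * math.pi * tone_bin * n / N)
      for n, d in enumerate(data)]

spectrum = dft_magnitude(rx)
peak_bin = max(range(1, len(spectrum)), key=lambda k: spectrum[k])
print(peak_bin)  # the interferer's bin dominates the spectrum
```

The tone is invisible in the time domain (its amplitude is below the data levels) but, because its energy concentrates in a single frequency bin while the data energy spreads across the whole band, it stands out clearly in the spectrum.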
The EMI signal can be detected with much higher confidence on the common mode signal but the transformers traditionally used for isolation in Base-T Ethernet links do not have a receiver for the common mode signal. The newer generation of 10GBase-T solutions offer a receiver dedicated to the common mode signal.
This receiver requires an extra transformer beyond what is in the traditional magnetics but can greatly improve the reliability of EMI detection. 10GBase-T PHYs from vendors such as PLX Technology are configurable for both modes of operation, allowing system makers to make their own tradeoff on robustness vs. cost.
Once the EMI signal is detected, its frequency and amplitude must be identified. With dedicated on-chip hardware for identification, this can be accomplished in 10msec or less. EMI detection and identification is an area where there will be a substantial spread in performance between different vendors of 10GBase-T products.
Once the EMI signal has been detected and identified, it can be filtered out using an adaptive filter. This filter can itself cause distortion of the desired signal and this distortion must be compensated. The compensation can be implemented using the "Fast Retrain" feature of the 10GBase-T standard, where a new set of DFE feedback coefficients is sent from the receiver to the transmitter. The compensation can also be implemented locally on the receiver using an additional adaptive filter.
The adoption of 10G Ethernet over twisted-pair cabling in the data center has had its challenges. After the initial concerns about high power consumption were addressed, new concerns arose about EMI susceptibility. Fortunately, the latest generation of 10GBase-T PHYs, combining advanced manufacturing processes with design innovation, delivers dramatically improved EMI immunity, making 10GBase-T a low-cost 10G connectivity solution without the severe reach constraints of SFP+ Direct Attach.
PLX Technology Inc. is a Sunnyvale, Calif.-based provider of semiconductor-based connectivity solutions for the enterprise and consumer markets.