Much ink has been spilled on the topic of what constitutes true “line rate,” and in the past we’ve advocated offering traffic at, and only at, 100.00 percent of theoretical line rate to determine if frame loss exists.
However, the distinction between 99.99 percent (which we used in these tests) and 100.00 percent load is not all that meaningful, especially at higher Ethernet speeds, for a couple of reasons. First, Ethernet is inherently an asynchronous technology, meaning each device (in this case, the device under test and the test instrument) uses one or more of its own free-running clocks, without synchronization. Thus, throughput measurements may just be artifacts of minor differences in the speeds of clock chips, not descriptions of a system’s fabric capacity.
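To make the clock-skew point concrete, here is an illustrative calculation (not from the article; the 10 Gigabit Ethernet speed is an assumed example). It shows how two devices that both sit within the 802.3 tolerance can still disagree on "line rate" by a measurable margin:

```python
# Illustration: how a +/-100 ppm clock tolerance can create apparent
# throughput differences between two devices that are both "in spec."

NOMINAL_BPS = 10_000_000_000  # assumed example: 10 Gigabit Ethernet
PPM = 1e-6

fast_clock = NOMINAL_BPS * (1 + 100 * PPM)  # device clocked at +100 ppm
slow_clock = NOMINAL_BPS * (1 - 100 * PPM)  # device clocked at -100 ppm

# A test instrument clocked faster than the device under test can offer
# slightly more bits per second than the device can forward, producing
# "loss" that reflects clock chips, not fabric capacity.
difference = fast_clock - slow_clock
print(f"fast clock:  {fast_clock:,.0f} bit/s")
print(f"slow clock:  {slow_clock:,.0f} bit/s")
print(f"difference:  {difference:,.0f} bit/s "
      f"({difference / NOMINAL_BPS * 100:.2f}% of nominal)")
```

At 10 Gbit/s, the worst-case spread between two in-tolerance clocks is about 2 Mbit/s, or 0.02 percent of nominal rate.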
Second, and we’re being hyper-literal here, an offered load of 99.99 percent of line rate technically is line rate. The IEEE’s 802.3 specification requires a clocking tolerance of +/- 100 parts per million (ppm) in all Ethernet interfaces – and 99.99 percent load happens to equate to exactly -100 ppm. It’s at the lowest end of the “line rate” spectrum, but it’s still line rate.
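The arithmetic behind that equivalence is simple enough to verify in a couple of lines (a quick illustrative check, not part of the original test methodology):

```python
# 100 parts per million is 100/1,000,000 = 0.0001, or 0.01 percent.
# An interface running 100 ppm slow is therefore at 99.99 percent of
# nominal rate -- the bottom edge of what 802.3 still calls line rate.
PPM = 1e-6
load_at_minus_100ppm = 1 - 100 * PPM
print(f"{load_at_minus_100ppm * 100:.2f}%")  # 99.99%
```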
To deal with clock-rate differences between test instruments and devices under test, the IETF is considering a new draft methodology for data center testing that specifies a maximum offered load of 99.98 percent. We don’t think the difference between 99.98 and 100.00 percent load is meaningful, and in any event our tests involved higher loads than the new recommended maximum.