In the ultra-geek world of machine-to-machine trading, the New York Stock Exchange is now measuring some of its processes in nanoseconds -- billionths of a second.
The move is part of an unrelenting push by traders to minimize transaction times, not only for the raw competitive advantage in trading but also to improve the efficiency of processes.
But this drive is not just about keeping traders happy -- it's also about reducing the need for more hardware and data center space by squeezing more use out of existing systems.
And it's that drive to squeeze every last bit out of hardware that is fueling interest in ever more precise time measurements.
For years, the millisecond (one thousandth of a second) was the standard. "Now we're getting down to single-digit microseconds (one millionth of a second) across the network, and on the same machine we're getting into the nanosecond range," said Conor Allen, head of research and development, and core engineering at New York Stock Exchange Technologies, the exchange's technology arm. "It's an arms race," he said.
To track time on a nanosecond scale, NYSE will measure the elapsed time between a CPU's clock ticks. A 3GHz processor, for instance, has a clock tick roughly every 0.33 nanoseconds, said Allen. The processes measured in nanoseconds are occurring on the same machine.
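As a rough illustration of the arithmetic, a core's tick period is simply the inverse of its clock frequency, and Python's monotonic nanosecond clock can time same-machine work at this granularity (the function names below are illustrative; the actual clock resolution depends on the OS and hardware):

```python
import time

def tick_period_ns(freq_hz: float) -> float:
    """Duration of one clock tick, in nanoseconds."""
    return 1e9 / freq_hz

# A 3 GHz core ticks about every 0.33 ns.
print(f"{tick_period_ns(3e9):.2f} ns per tick")

# Timing a same-machine operation with a nanosecond-resolution
# monotonic clock.
start = time.perf_counter_ns()
_ = sum(range(1000))
elapsed = time.perf_counter_ns() - start
print(f"elapsed: {elapsed} ns")
```

In practice, exchanges use hardware timestamping or the CPU's own cycle counter rather than an OS clock, since the call overhead itself runs to tens of nanoseconds.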
The interest in measuring in nanoseconds isn't academic for the exchange's clients, said Allen. There are hedge funds that want to know how far they are from the trading system's matching engine, because the longer the cable, the longer the delay. Every meter of cable adds about three nanoseconds -- a function of the speed of a signal through copper, he said.
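A minimal sketch of that back-of-the-envelope calculation, using the roughly-three-nanoseconds-per-meter figure quoted in the article (real copper and fiber typically propagate signals somewhat slower than light in a vacuum, closer to 4-5 ns per meter):

```python
# Per-meter delay figure quoted by Allen; an approximation.
NS_PER_METER = 3.0

def cable_delay_ns(length_m: float) -> float:
    """One-way propagation delay for a cable of the given length."""
    return length_m * NS_PER_METER

# A client 50 meters from the matching engine sees roughly
# 150 ns of one-way delay from the cable run alone.
print(cable_delay_ns(50))
```

This is why co-location clients care about equalized cable lengths: a few extra meters of cabling is a measurable disadvantage at these timescales.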
But the value of measuring in nanoseconds may spur debate.
Michael Salsburg, a distinguished engineer at Unisys Corp. who has long been involved with the Computer Measurement Group, said there are a number of processes inside a CPU that could negate any nanosecond gains.
For instance, where multiple processor cores share the same cache, "each one could be stealing cache from the other, which then causes more CPU cycles." These CPU processes could easily drive times up to milliseconds, he said.
What may not be up for debate is the interest in driving latency out of transactions. NYSE Technologies and Voltaire Ltd. have worked to develop a technology that brings the latency of 10 Gigabit Ethernet (10GbE) to almost the same level of performance as an InfiniBand environment.
That has been accomplished in part by using RDMA (Remote Direct Memory Access), which bypasses the need to make multiple copies of data in memory. This and other steps help the system process messages faster and at higher volumes. One benefit is increased efficiency of the hardware, which reduces the need for more servers and, ultimately, data center space.
Where latency variability might once have been 100 to 200 microseconds, it is now 20 to 25 microseconds. "The fact that I am able to consume millions of messages per second into a single application is a critical enabler to allow people to do much more with the hardware that they have," said Allen. This is especially important because message volumes are more than doubling year over year.
Jeff Boles, an analyst at The Taneja Group, said that 10GbE is getting close to InfiniBand in capability. The work to reduce latency is "about making 10GbE more useful."
He can see broad adoption of 10GbE coming, but it may take a while. "You are not going to forklift your entire 1GbE infrastructure because 10GbE came along, but eventually you will be using it in more and more places," Boles said.
With the push seemingly endless, the question becomes: what follows nanoseconds? Picoseconds -- trillionths of a second. To measure at that scale, you might need a 10GHz chip, said Allen.