Millionths of a second can cost millions of dollars: A new way to track network delays

Aug 20, 2009

(PhysOrg.com) -- Computer scientists have developed an inexpensive solution for diagnosing networking delays in data center networks as short as tens of millionths of a second—delays that can lead to multi-million-dollar losses for investment banks running automatic stock trading systems. Similar delays can also slow parallel processing in the high-performance cluster computing applications run by Fortune 500 companies and universities.

University of California, San Diego and Purdue University computer scientists presented this work on August 20, 2009, at SIGCOMM, the premier networking conference.

The new approach offers the possibility of diagnosing fine-grained delays—down to tens of microseconds—and packet loss as infrequent as one in a million packets at every router within a data center network. (One microsecond is one millionth of a second.) The solution could be implemented in today's router designs with almost zero cost in terms of router hardware and with no performance penalty. The University of California, San Diego and Purdue University computer scientists call their invention the Lossy Difference Aggregator.

"This is stuff the big traders will be interested in," said George Varghese, a professor at the UC San Diego Jacobs School of Engineering and an author on the SIGCOMM paper, "but more importantly, the router vendors for whom such trading markets are an important vertical."

If an investment bank's algorithmic stock trading program reacts to information on cheap stocks from an incoming market data feed just 100 microseconds earlier than the competition, it can buy millions of shares and bid up the price of the stock before its competitors' programs can react, the computer scientists say.

While the network links between Wall Street and investment banks' data centers are short, optimized and well monitored, the performance of the routers within the data centers that run automated stock trading systems is difficult and expensive to monitor. Delays in these routers, also known as latencies, can add hundreds of microseconds, potentially leading to millions of dollars in lost opportunities.

"Every investment banking firm knows the importance of microsecond network delays. Because routers today aren't capable of tracking delays through them at microsecond time scales, exchanges such as the London Stock Exchange use specially crafted external boxes to track delays at various key points in the data center network," said Alex Snoeren, a computer science professor at the UC San Diego Jacobs School of Engineering and an author on the SIGCOMM paper.

But these external systems are generally too large and expensive to be added to every router in a data center network running an automated stock trading system. This makes it difficult for the network managers to identify and locate problematic routers before they cost the company large amounts of money, the computer scientists say.

"Our hope is that this approach will allow router vendors to add fine scale delay and loss tracking, at almost zero cost to router performance, perhaps obviating the desire for expensive external network monitoring boxes at every router," said Ramana Kompella, the first author on the SIGCOMM paper and a computer science professor at Purdue University. Kompella earned his Ph.D. in computer science at UC San Diego in 2007.

The SIGCOMM 2009 paper presents simulations and proof-of-concept code for measuring latencies down to tens of microseconds and losses that occur once every million packets.

"When it comes to fault isolation, networks are a big black box. You put packets in on one side and you get them out the other side," explained SIGCOMM 2009 paper author Kirill Levchenko, a UC San Diego post-doctoral researcher who recently earned his Ph.D. in computer science at UC San Diego. "A lightweight network monitoring approach such as ours allows you to pinpoint the source of the performance degradation and identify the problem routers." Credit: UC San Diego / Daniel Kane

"The next step would be to build the hardware implementation, we are looking into that," said Kompella, who plans to continue pioneering research in fault diagnosis at Purdue.

This work highlights a fundamental shift happening across the Internet. As computer programs—rather than humans—increasingly respond to streams of information moving across computer networks in real time, millionths of a second matter. Algorithmic stock trading systems are just one example. Extra microseconds of delay can also mean slower response times across clustered-computing platforms, which can slow down computation-intensive research, such as drug discovery projects.

"When it comes to fault isolation, networks are a big black box. You put packets in on one side and you get them out the other side," explained SIGCOMM paper author Kirill Levchenko, a UC San Diego post-doctoral researcher who recently earned his Ph.D. in computer science at UC San Diego. "A lightweight network monitoring approach such as ours allows you to pinpoint the source of the performance degradation and identify the problem routers."

Lossy Difference Aggregator

Simple counters and clever thinking are at the heart of the Lossy Difference Aggregator.

The classical way to measure latency is to track when a packet arrives at and leaves a router, take the difference of these times, and average over all packets that arrive over a fixed time period, such as one second. However, a typical router may process 50 million packets in a second, and keeping track of each packet's arrival and departure is a daunting piece of bookkeeping. It may seem that a simple approach is to sum all the arrival times in one counter, sum all the departure times in another counter, subtract the two counters and divide by the number of packets. Unfortunately, this simple "aggregation" idea fails when a packet is lost within a router (which commonly happens). In that case, the lost packet's arrival time is included but its departure time is not, throwing the whole estimate wildly out of whack.
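
As a rough sketch (the numbers here are hypothetical), the naive aggregation approach might look like this; note how a single dropped packet, whose arrival time is counted but whose departure time is missing, pushes the estimate far from the true 50-microsecond delay:

```python
# Naive aggregation (illustrative numbers): 1,000 packets, each delayed
# exactly 50 microseconds, with one packet dropped inside the router.
arrival_sum = 0.0      # running sum of arrival timestamps (seconds)
departure_sum = 0.0    # running sum of departure timestamps (seconds)
departed = 0

for i in range(1000):
    t_in = i * 1e-3                  # packet i arrives at time t_in
    arrival_sum += t_in
    if i == 123:                     # this packet is lost: no departure time
        continue
    departure_sum += t_in + 50e-6    # true per-packet delay is 50 microseconds
    departed += 1

# Subtracting sums taken over unequal sets of packets ruins the estimate:
print((departure_sum - arrival_sum) / departed)  # about -7.3e-05 s, not 5e-05 s
```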

Instead of summing the arrival and departure times of all packets traveling through a router, the computer scientists' system randomly splits incoming packets into groups and then adds up arrival and departure times of each of the groups separately. As long as the number of losses is smaller than the number of groups, at least one group will give a good estimate.

Subtracting these two sums (from the groups that suffered no loss) and dividing by the number of packets in those groups provides an estimate of the average delay with very little overhead—just a series of lightweight counters.
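
For illustration, here is a minimal sketch of that bookkeeping in Python. It is not the authors' hardware design: the bucket count, the hash function, the class and variable names, and the assumption of synchronized clocks at the two measurement points are all choices made for this example.

```python
import hashlib

class LDABank:
    """One side (ingress or egress) of a Lossy Difference Aggregator sketch:
    a handful of per-group timestamp sums and packet counts, no per-packet state."""

    def __init__(self, num_buckets: int = 16):
        self.sums = [0.0] * num_buckets   # sum of timestamps seen per group
        self.counts = [0] * num_buckets   # packets seen per group

    def bucket_of(self, packet_id: bytes) -> int:
        # Both sides must map the same packet to the same group, so the group
        # is derived from a hash of the packet's identifier.
        return hashlib.sha256(packet_id).digest()[0] % len(self.sums)

    def record(self, packet_id: bytes, timestamp: float) -> None:
        b = self.bucket_of(packet_id)
        self.sums[b] += timestamp
        self.counts[b] += 1


def average_delay(ingress: LDABank, egress: LDABank) -> float:
    """Estimate mean delay using only the groups whose packet counts match on
    both sides, i.e. the groups that lost no packets."""
    delay_sum, packets = 0.0, 0
    for b in range(len(ingress.sums)):
        if ingress.counts[b] == egress.counts[b] and ingress.counts[b] > 0:
            delay_sum += egress.sums[b] - ingress.sums[b]
            packets += ingress.counts[b]
    if packets == 0:
        raise ValueError("every group suffered loss; no usable samples")
    return delay_sum / packets


# Example: 1,000 packets delayed 50 microseconds each; packet 123 is lost.
ingress, egress = LDABank(), LDABank()
for i in range(1000):
    pid = i.to_bytes(4, "big")
    ingress.record(pid, i * 1e-3)
    if i != 123:
        egress.record(pid, i * 1e-3 + 50e-6)

print(average_delay(ingress, egress))  # ~5e-05: the loss spoils only one group
```

Because the dropped packet corrupts only the single group it hashed into, the remaining groups still yield the true 50-microsecond average, using only a handful of counters rather than per-packet records.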

With this invention built into every router, a data center manager should be able to quickly pinpoint the offending router and interface that is adding extra microseconds of delay or losing even a few packets in a million, explained Levchenko.

"This is diagnostic tool, a potentially extremely important one. You don't want to just know that you have a network problem, you want to know which router and which application is causing the problem," said Snoeren.

The network manager can then upgrade the router or link, or reassign an offending application that is sending message bursts to another processing path.

By contrast, today's routers can be made to log messages; but looking through logs of millions of messages to pinpoint delay problems is like looking for a needle in a haystack.

"If implemented, this kind of approach should enable investment bankers to turn their attention to tuning their algorithmic trading programs to make more intelligent investments, instead of worrying about delays through obscure routers," said Varghese.

More information: "Every Microsecond Counts: Tracking Fine-Grain Latencies with a Lossy Difference Aggregator," by Ramana Kompella from Purdue University, and Kirill Levchenko, Alex C. Snoeren, and George Varghese from the University of California, San Diego.
Download a copy of the paper at: www-cse.ucsd.edu/~snoeren/papers/lda-sigcomm09.pdf

Source: University of California - San Diego

User comments : 5

Soylent
3 / 5 (2) Aug 21, 2009
No, that's the exact opposite of what we need. We need a 1-2 minute randomized delay on all trades, no exceptions.

There's no lack of liquidity; there's no need to facilitate this. At best it is gambling, at worst it enables front running and other tricks that allow these crooks to steal a handful of cents on every trade. These bastards already looted enough money from the taxpayer (e.g. the stealth bail-out of Goldman via AIG), they don't need any further privileges, they need to be prosecuted.
eurekalogic
not rated yet Aug 21, 2009
One more way the big kings of wall street can milk the middle class like cows. Way to go crew digging the poor deeper in the hole.
Mesafina
not rated yet Aug 21, 2009
IMHO the stock market is an inherently flawed system. Anyone who makes all their income from trading stock is essentially a paper-pusher, a parasite on our society who produces nothing. Sure you can say they fulfill the important role of capitalization, but there are other equally effective systems that could provide the availability and liquidity of capital needed that don't require the role of parasites who profit inordinately for a mostly negligible contribution to society.

People talk about the social welfare parasites, but these guys make them look tame by comparison.
El_Nose
not rated yet Aug 21, 2009
LOL -- look everyone who trades a stock uses algorithmic trading. So the guy making decisions on your 401k is using it... the guy who holds the money for your pension, he's using it... the girl that is managing money for the not-for-profit, whatever, is using it.

Most people have no idea who is using algorithmic trading-- the simple answer is every brokerage house uses it.

Now some individuals are using it too, to try to make trades when they see a chance for profit... but that is not taking money away from anyone; if anything it is making it slightly cheaper for anyone else to trade.
Soylent
not rated yet Aug 21, 2009
"LOL -- look everyone who trades a stock uses algorithmic trading."

Irrelevant. What we're talking about here is the high-frequency trading scam. A handful of investment banksters, like Goldman, get to see orders before everyone else in the entire market, and they get to place orders far faster than everyone else in the entire market.

This has no benefits whatsoever and allows various kinds of cheating, like issuing and cancelling tiny orders within milliseconds to try and figure out how much slower traders are willing to pay.

The only safeguard against fraud is the honesty and moral decency of the investment banksters. After the rampant mortgage fraud, control fraud and fleecing of the taxpayers, their protestations to the contrary are worth less than zero.