Internet research to level the playing field

Aug 03, 2012 by Norunn K. Torheim/Else Lie
A stock exchange’s trading system requires rapid response (Photo: Shutterstock)

Short delays on the Internet can have serious consequences for share-traders or players of online computer games. Norwegian ICT researchers intend to do something about it.

The Internet as we know it today has been optimised to transmit large amounts of data, or “greedy streams” - the type of transmission involved in downloading large files or watching online TV.

“Up to now, Internet research has primarily focused on speeding up transmission by increasing bandwidth so that more data can be transferred at a given time,” explains Andreas Petlund of Simula Research Laboratory in Oslo.

The most common Internet protocol for transmitting data, TCP, works by apportioning the available bandwidth among the users present at any given time. The downside is that this can cause latency, or delay, in data transmissions.
For time-dependent applications such as Internet telephony and online gaming, time lags as short as a few hundred milliseconds can create big problems.

Aiming to reduce latency

“In real-time gaming against other players online, data is transmitted only when an action such as moving around or shooting at someone is performed. The same principle applies for stock market programs when placing orders or requesting share prices, for example, via the trading systems in use by Oslo Børs, the Norwegian Stock Exchange. In such cases it is essential to avoid any delay,” says Dr Petlund.

Applications like these often generate what are called thin data streams. With thin streams only small amounts of data are transmitted at a time, and there can be extended periods between packets. (See Facts about data packets and network latency below)
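To make the distinction concrete, the sketch below classifies a stream as thin or greedy from its packet sizes and inter-packet gaps. The function and its thresholds are illustrative assumptions for this article, not the project's actual criteria.

```python
# Illustrative sketch: telling a "thin" stream apart from a "greedy" one.
# The thresholds below are assumptions for illustration only.

def is_thin_stream(packet_sizes, inter_arrival_ms,
                   max_payload=255, min_gap_ms=100):
    """Treat a stream as thin if its packets are small and there
    are long gaps between them."""
    avg_size = sum(packet_sizes) / len(packet_sizes)
    avg_gap = sum(inter_arrival_ms) / len(inter_arrival_ms)
    return avg_size <= max_payload and avg_gap >= min_gap_ms

# A game sends tiny position updates a few times per second...
game = is_thin_stream([40, 60, 52], [120, 250, 180])        # True
# ...while a file download saturates the link with full-size packets.
download = is_thin_stream([1448, 1448, 1448], [1, 1, 1])    # False
```

By either measure, the game traffic above counts as thin and the download as greedy.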

According to Andreas Petlund, thin streams cannot compete with greedy traffic for bandwidth. Thin streams almost invariably come up short against greedy traffic and users are left to cope with the resulting lag.

As part of a new research project funded under the Research Council of Norway’s large-scale programme on Core Competence and Value Creation in ICT (VERDIKT), researchers are working to reduce latency as much as possible.

“We want a more balanced Internet where thin streams don’t always lose out. This can be achieved by adding speed to the mix, instead of only thinking about maximising throughput,” says Dr Petlund.

New approaches

Network researchers are now planning to use simulation and modelling to learn more about the network behaviour of thin data streams. According to Dr Petlund, neither this nor the behaviour of thin streams in competition with other traffic has ever been studied in depth.

The primary obstacle lies in the vast complexity of the systems making up the Internet. “We may thoroughly understand each individual mechanism or sub-protocol under controlled conditions, but in the Internet jungle it is rather like putting something into a black box without knowing what’s going to come out the other end,” he explains.

“This happens because the Internet is a shared resource and we have no control over what everyone else is using it for.”

International cooperation

One of the partners the Norwegian researchers will be working with is Dr Jens Schmitt of the University of Kaiserslautern. Dr Schmitt is working on the development of mathematical models of network behaviour and testing the extent to which the models provide a good picture of reality.

“We also have some researchers from the US on the team,” Dr Petlund adds. “In collaboration with the Cooperative Association for Internet Data Analysis (CAIDA) in San Diego, a leader in the field of Internet analysis, we are going to perform measurements and analyses to find out what percentage of all data streams are thin streams. No such data exists anywhere today.”

Pushing for standardisation

Researchers are also employing more traditional research methods in order to study how thin streams behave both in test networks in the laboratory and when they are transmitted via the Internet.

One desired outcome is a standardised mechanism for handling thin data streams through the Internet Engineering Task Force (IETF).

“We won’t be able to establish a standard unless we can prove that one is really needed. That is why we first need to measure the prevalence of thin streams,” says Dr Petlund.

It is also essential to find out whether prioritising thin data streams on the Internet has any negative consequences for other traffic. If it does, the wide range of transmission technologies currently in use will pose a formidable challenge.

“At one time everyone connected to the Internet by means of a cable. Now we have a wide array of alternatives such as WiFi, 3G, 4G, WiMax, ADSL and fibre-optic connections – all of which behave differently. We must come up with solutions that are optimal for everyone,” Andreas Petlund affirms.

Better online computer games

It was an interest in computer games that originally inspired researchers at Simula to study systems supporting time-dependent applications, well before most others in the field.

Andreas Petlund has previously worked on operating-system-level improvements that decrease the latency arising from packet loss. Users of Linux are benefiting from the resulting technology.

The large Norwegian games company Funcom has integrated these improvements into a number of its game servers. The technology has been tested on the company's highest-profile game, Age of Conan, and will be used for The Secret World, soon to be released.
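One mechanism from this line of work that was merged into the Linux kernel is the TCP_THIN_LINEAR_TIMEOUTS socket option (kernel 2.6.34 and later), which makes a thin stream retransmit lost packets without exponential back-off between attempts. The sketch below shows how an application might enable it; the option number is taken from linux/tcp.h, since Python's socket module does not define it on all platforms.

```python
import socket
import sys

# Socket option number from linux/tcp.h (Linux-specific, kernel 2.6.34+).
# With linear timeouts enabled, a thin stream retransmits lost packets
# without exponential back-off, reducing worst-case lag.
TCP_THIN_LINEAR_TIMEOUTS = 16

enabled = None
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
if sys.platform.startswith("linux"):
    sock.setsockopt(socket.IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS, 1)
    enabled = sock.getsockopt(socket.IPPROTO_TCP, TCP_THIN_LINEAR_TIMEOUTS)
sock.close()
```

A companion option, TCP_THIN_DUPACK, existed in older kernels to trigger fast retransmission after a single duplicate acknowledgement.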

Facts about data packets and network latency

In order to transmit large amounts of data over the Internet as efficiently as possible, a sender transmits a steadily increasing amount of data until the maximum bandwidth capacity is reached. The sending rate then stabilises so that bandwidth usage is optimised.
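This ramp-up can be sketched as a toy model: the amount of unacknowledged data in flight roughly doubles each round trip until it reaches the link's capacity, then levels off. The numbers are illustrative, not taken from any real connection.

```python
# Toy model of how a greedy TCP stream ramps up its sending window.
# capacity_segments is the link's illustrative capacity in segments.

def ramp_up(capacity_segments, rounds):
    cwnd = 1          # start by sending one segment per round trip
    history = []
    for _ in range(rounds):
        history.append(cwnd)
        cwnd = min(cwnd * 2, capacity_segments)  # double, capped at capacity
    return history

print(ramp_up(32, 8))  # → [1, 2, 4, 8, 16, 32, 32, 32]
```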

The most widely used Internet transmission protocol today, TCP, works by dividing data into packets. Routers transmit these packets through queuing systems; all the data streams travelling between given nodes on the Internet share these queues.

If a queue fills up, entire packets are removed from it. These packets are then lost.
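A minimal drop-tail queue model shows how these losses occur: packets that arrive while the queue is full are simply discarded. The capacity is an illustrative number.

```python
from collections import deque

# Drop-tail queue sketch: once the queue is full, new arrivals are lost.

def enqueue_all(packets, capacity=3):
    queue = deque()
    dropped = []
    for pkt in packets:
        if len(queue) < capacity:
            queue.append(pkt)   # room left: packet joins the queue
        else:
            dropped.append(pkt) # queue full: packet is discarded
    return list(queue), dropped

print(enqueue_all(["a", "b", "c", "d", "e"]))
# → (['a', 'b', 'c'], ['d', 'e'])
```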

In order to determine which packets have actually arrived at the destination, the recipient confirms delivery of each packet. If too much time elapses before a confirmation is received, the packet is transmitted anew, resulting in network lag.
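The retransmission logic described above can be sketched as follows: the sender keeps each packet until it is confirmed, and a lost packet costs a full timeout before it is sent again. The timeout and round-trip values are illustrative assumptions.

```python
# Sketch of timeout-based retransmission: a lost packet adds a full
# timeout's wait on top of the normal round trip. Values are illustrative.

def total_delivery_ms(packets, confirmed_first_try, timeout_ms=200, rtt_ms=50):
    """Total delivery time, assuming one retransmission is enough
    for any packet lost on the first attempt."""
    total = 0
    for pkt in packets:
        if pkt in confirmed_first_try:
            total += rtt_ms                 # delivered and confirmed normally
        else:
            total += timeout_ms + rtt_ms    # wait for timeout, then resend
    return total

# Losing even one packet out of three dominates the total delay:
print(total_delivery_ms([1, 2, 3], confirmed_first_try={1, 3}))
# → 350  (50 + 250 + 50)
```

For a thin stream there is often no later traffic whose confirmations could reveal the loss sooner, which is why such timeouts hit these applications especially hard.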

