How the Internet architecture got its hourglass shape and what that means for the future

Aug 15, 2011
This illustration of the hourglass Internet architecture shows the six layers, from top to bottom: specific applications, application protocols, transport protocols, network protocols, data-link protocols and physical layer protocols. Credit: Constantine Dovrolis

In the natural world, species that share the same ecosystem often compete for resources, resulting in the extinction of weaker competitors. A new computer model that describes the evolution of the Internet's architecture suggests something similar has happened among the layers of protocols that have survived -- and become extinct -- on the worldwide network.

Understanding this evolutionary process may help researchers as they develop protocols that accommodate new uses of the Internet and protect it from a wide range of threats. But the model suggests that unless a future Internet avoids such competition, it will evolve an hourglass shape much like today's Internet.

"To avoid the ossification effects we experience today in the network and transport layers of the Internet, architects of the future Internet need to increase the number of protocols in these middle layers, rather than just push these one- or two-protocol layers to a higher level in the architecture," said Constantine Dovrolis, an associate professor in the School of Computer Science at the Georgia Institute of Technology.

The research will be presented on August 17, 2011 at SIGCOMM, the annual conference of the Association for Computing Machinery's Special Interest Group on Data Communication. The work was supported by the National Science Foundation.

From top to bottom, the Internet architecture consists of six layers:

• Specific applications, such as Firefox;
• Application protocols, such as Hypertext Transfer Protocol (HTTP);
• Transport protocols, such as Transmission Control Protocol (TCP);
• Network protocols, such as Internet Protocol (IP);
• Data-link protocols, such as Ethernet; and
• Physical layer protocols, such as DSL.

Layers near the top and bottom contain many items, called protocols, while the middle layers do not. The central transport layer contains two protocols and the network layer contains only one, creating an hourglass architecture.

Dovrolis and graduate student Saamer Akhshabi created an evolutionary model called EvoArch to study the emergence of the Internet's hourglass structure. In the model, the architecture of the network changed with time as new protocols were created at different layers and existing protocols were removed as a result of competition with other protocols in the same layer.

Illustration showing the number and age of protocols in each layer of the Internet architecture. In the middle layers, there are only a few protocols that are old and conserved. Credit: Constantine Dovrolis

EvoArch showed that even if future Internet architectures are not built in the shape of an hourglass initially, they will probably acquire that shape as they evolve. Through their simulations, Dovrolis and Akhshabi found that no matter what shape the architecture started in, the basic hourglass shape always emerged over time.

"Even though EvoArch does not capture many practical aspects and protocol-specific or layer-specific details of the Internet architecture, the few parameters it is based on -- the generality of protocols at different layers, the competition between protocols at the same layer, and how new protocols are created -- reproduced the observed hourglass structure and provided for a robust model," said Dovrolis.
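The few mechanisms Dovrolis lists -- generality that shrinks toward the top, same-layer competition, and ongoing protocol creation -- can be caricatured in a short simulation. The sketch below is illustrative only: the parameter values, birth rate and competition rule are guesses in the spirit of the description above, not EvoArch's published formulation.

```python
import random

random.seed(1)

LAYERS = 6            # bottom (0, physical) to top (5, applications)
# Illustrative generality: the probability that a new protocol at layer l
# uses any given protocol at layer l-1 as a substrate. High near the
# bottom (general building blocks), low near the top (specialized).
GENERALITY = [1.0, 0.9, 0.7, 0.5, 0.3, 0.1]
THRESHOLD = 0.7       # product-overlap fraction that triggers competition
BIRTHS = 1            # new protocols per layer per round
ROUNDS = 30

net = [set() for _ in range(LAYERS)]   # net[l] = ids of protocols alive at layer l
substrates = {}                        # id -> set of lower-layer ids it uses
layer_of = {}
counter = 0

def spawn(layer):
    """Create a protocol at `layer`, wiring it to random substrates below."""
    global counter
    nid, counter = counter, counter + 1
    layer_of[nid] = layer
    substrates[nid] = ({s for s in net[layer - 1]
                        if random.random() < GENERALITY[layer]}
                       if layer > 0 else set())
    net[layer].add(nid)

def products(nid):
    """Alive protocols one layer up that use nid as a substrate."""
    l = layer_of[nid]
    return set() if l == LAYERS - 1 else {p for p in net[l + 1]
                                          if nid in substrates[p]}

def step():
    for l in range(LAYERS):            # births at every layer
        for _ in range(BIRTHS):
            spawn(l)
    prods = {n: products(n) for l in range(LAYERS) for n in net[l]}
    doomed = set()
    for l in range(LAYERS):            # same-layer competition
        for a in net[l]:
            if not prods[a]:
                continue
            for b in net[l]:
                if (b != a
                        and len(prods[a] & prods[b]) / len(prods[a]) > THRESHOLD
                        and len(prods[b]) > len(prods[a])):
                    doomed.add(a)      # a loses to a stronger overlapping rival
                    break
    for d in doomed:
        net[layer_of[d]].discard(d)

for _ in range(ROUNDS):
    step()

# Per-layer protocol counts after 30 rounds; in the paper's full model the
# middle layers end up narrowest (the waist of the hourglass).
print("protocols per layer:", [len(l) for l in net])
```

Even this stripped-down version captures the key asymmetry: top-layer protocols have no products and so never lose a competition, while middle-layer protocols with heavily overlapping product sets can be displaced.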

The model revealed a plausible explanation for the Internet's hourglass shape. At the top, protocols are so specialized and selective in what underlying building blocks they use that they rarely compete with each other. When there is very little competition, the probability of extinction for a protocol is close to zero.

"In the top layers of the Internet, many new applications and application-specific protocols are created over time, but few things die, causing the top of the hourglass to get wider over time," said Dovrolis.

In the higher layers, a new protocol can compete with and replace an incumbent only if the two provide very similar services. For example, the services provided by the File Transfer Protocol (FTP) and HTTP overlapped in the application protocol layer. When HTTP became more valuable because of its own higher-layer products -- applications such as web browsers -- FTP was pushed toward extinction.

At the bottom, each protocol serves as a general building block that supports many products in the layer above. For example, the Ethernet protocol in the data-link layer runs over coaxial cable, twisted pair and optical fiber technologies in the physical layer. But because the bottom-layer protocols are so general, each is used by many products and no single one dominates, leading to a low probability of extinction at layers close to the bottom.

The EvoArch model predicts the emergence of few powerful and old protocols in the middle layers, referred to as evolutionary kernels. The evolutionary kernels of the Internet architecture include IPv4 in the network layer, and TCP and the User Datagram Protocol (UDP) in the transport layer. These protocols provide a stable framework through which an always-expanding set of physical and data-link layer protocols, as well as new applications and services at the higher layers, can interoperate and grow. At the same time, however, those three kernel protocols have been difficult to replace, or even modify significantly.

To ensure more diversity in the middle layers, EvoArch suggests designing protocols that largely do not overlap in the services and functionality they provide, so that they do not compete with each other. The model suggests that protocols whose functions overlap by more than 70 percent begin competing with each other.
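The 70 percent figure can be read as a simple set-overlap test. Here is a hedged sketch; the service sets below are made-up labels chosen for illustration, not inputs used by the model:

```python
def service_overlap(a: set, b: set) -> float:
    """Fraction of a's services that b also offers."""
    return len(a & b) / len(a) if a else 0.0

COMPETE_THRESHOLD = 0.7   # the overlap figure reported in the article

tcp = {"reliable", "ordered", "congestion-control", "byte-stream"}
udp = {"datagram", "low-latency"}
# A hypothetical newcomer that replicates three of TCP's four services:
newcomer = {"reliable", "ordered", "congestion-control", "message-framing"}

print(service_overlap(newcomer, tcp) > COMPETE_THRESHOLD)  # True: they compete
print(service_overlap(udp, tcp) > COMPETE_THRESHOLD)       # False: they coexist
```

On this reading, TCP and UDP coexist at the waist precisely because their service sets barely overlap, while a near-clone of TCP would have to fight it for survival.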

When the researchers extended the EvoArch model to include a protocol quality factor -- which can capture protocol performance, extent of deployment, reliability or security -- the network grew at a slower pace, but continued to exhibit an hourglass shape. In contrast to the basic model, the quality factor affected the competition in the bottom layers and only high-quality protocols survived there. The model also showed that the kernel protocols in the waist of the hourglass were not necessarily the highest-quality protocols.

"It is not true that the best protocols always win the competition," noted Dovrolis. "Often, the kernels of the architecture are lower-quality protocols that were created early and with just the right set of connections."
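One way to caricature that finding: if staying power depends on both a protocol's quality and how many products already depend on it, an entrenched mediocre incumbent can out-survive a better-designed newcomer. The fitness rule and the numbers below are illustrative guesses, not the paper's formula.

```python
def displaces(chal_value, chal_quality, inc_value, inc_quality):
    """Hypothetical rule (not EvoArch's actual one): a challenger displaces
    an incumbent only when value * quality is strictly higher."""
    return chal_value * chal_quality > inc_value * inc_quality

# An IPv4-like incumbent: mediocre quality, but ten products depend on it.
# An IPv6-like challenger: higher quality, only two dependents so far.
print(displaces(2, 0.9, 10, 0.5))   # False: 1.8 < 5.0, the incumbent survives
```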

Researchers are also using the EvoArch model to explore the emergence of hourglass architectures in other areas, such as metabolic and gene regulatory networks, the organization of the innate immune system, and gene expression during development.

"I believe there are similarities between the evolution of Internet protocol stacks and the evolution of some biological, technological and social systems, and we are currently using EvoArch to explore these other hourglass structures," said Dovrolis.


User comments : 14


shockr
3.8 / 5 (4) Aug 15, 2011
What a load of crap. For starters, doesn't the OSI model have 7 layers? And why have so many important protocols been missed out? IPv6 would make the middle section less 'hourglassy'.

Are taxpayers really giving Constantine Dovrolis money to do pointless research?
Royale
1 / 5 (1) Aug 15, 2011
Shockr, you do realize that IPv6 is currently about as useful as FireWire, right? Just because they create a thoughtful, new, protocol, doesn't mean the thing is going to take off. Look around, everyone is still using IPv4. Sure there are some mixes out there, but if you're talking about over the internet, it's almost all IPv4...
Like they say in the article: ["It is not true that the best protocols always win the competition," noted Dovrolis. "Often, the kernels of the architecture are lower-quality protocols that were created early and with just the right set of connections."]
In this case IPv4 is the lower-quality protocol. It's just so used that it's hard to get people to move away from it...
Eikka
not rated yet Aug 15, 2011
The reason why there can't be many different protocols in the middle layers is because everybody wants to talk to everybody in a common language.

What they're proposing is the equivalent of everyone in the world simultaneously using the metric system AND the US customary units, but in a non-competing way so that some things are measured in meters and other things are measured in feet, while either one would be suitable for all of those tasks.

What's the point?
Temple
not rated yet Aug 15, 2011
What they're proposing is the equivalent of everyone in the world simultaneously using the metric system AND the US customary units, but in a non-competing way so that some things are measured in meters and other things are measured in feet, while either one would be suitable for all of those tasks.


I like your analogy, but not your conclusion. I think your analogy makes the opposite point to the one you were trying to make.

We are not talking about humans needing to be fluent in both systems, we are talking about computers.

Computers and algorithms can very easily be programmed to detect which 'system' is being used, be it metric or imperial, IPv4 or IPv6, and adapt accordingly.

The point is that with only one tech in play, it has a tendency to stagnate. If multiple technologies compete with one another on a level playing field, we see which 'characteristics' work and which do not. That has a tendency to spur innovation.
hush1
not rated yet Aug 15, 2011
WAP is synonymous with the top brim of the hourglass?
Eikka
not rated yet Aug 16, 2011

Computers and algorithms can very easily be programmed to detect which 'system' is being used, be it metric or imperial, IPv4 or IPv6, and adapt accordingly.


The issue is that both are in use simultaneously. IPv6 gives you access to IPv6 address space, and IPv4 gets you to the parts of the internet that work with IPv4.

It's virtually two different internets that need address translation and special routing to get from one to the other - otherwise you're stranded - and that creates unnecessary extra work and is open to security problems, bugs and routing issues.

Like failing to convert from yards to meters and having your space probe slam to the ground, or refueling your airplane in pounds and entering the amount of fuel to the computer in kilograms.

And somebody's got to maintain the whole monster and set up the routing, and then troubleshoot it when something goes wrong.
frajo
not rated yet Aug 16, 2011
When HTTP became more valuable because of its own higher layer products -- applications such as web browsers -- FTP became extinct.

Which planet are they reporting from?
antialias_physorg
5 / 5 (1) Aug 16, 2011
Which planet are they reporting from?

Probably a planet where the internet is just for fun. Those of us who actually do some work that requires fast connections and tons of (sensitive) data know that FTP is alive and well.

Computers and algorithms can very easily be programmed to detect which 'system' is being used, be it metric or imperial, IPv4 or IPv6, and adapt accordingly.

Problem is that there is a lot of net-connected hardware out there where the embedded chips/software don't know anything about IPv6 - and where it's not easy (or sometimes even impossible) to upgrade.
frajo
not rated yet Aug 16, 2011
In this case IPv4 is the lower-quality protocol. It's just so used that it's hard to get people to move away from it...

People don't go for IPv4 or IPv6 because they don't know what it is. They won't notice when IPv6 is being used on their box.
But the OEMs and system builders will have no choice since the IPv4 address space is sold already.
CHollman82
3 / 5 (2) Aug 16, 2011
KISS... Keep it simple, stupid.

The "hourglass" shape that they are referring to is obvious... I can't believe they spent time making a predictive model about this... What a waste.
YouAreRight
5 / 5 (1) Aug 16, 2011
To me the hourglass shape makes perfect sense and should stay that way for many years to come.

There will always be a multitude of physical network technologies depending on your requirements like wireless, fibre or copper. They all require different protocols matching their physical properties.

On the other end of the OSI model there need to be a multitude of application layer protocols to match a specific applications needs.

It makes sense that application layer protocols outnumber physical protocols, because application layer protocols can be created without any limitations, unlike physical layer protocols that are restricted by the physical medium they are associated with.

A 1000ft view of the layers might be.

Hardware(G.992.5,802.3ab...) <-> Abstraction(IPV4/6,TCP...) <-> Software(HTTP,SMTP...)

The 'Abstraction' layer needs to be the smallest if it is to provide a common framework to communicate any 'Software' protocol over any type of physical medium.

....
YouAreRight
not rated yet Aug 16, 2011
... An interesting approach equating biological processes to computer networking, but I fail to see any useful comparisons.
Maybe I'm too dumb to understand such an abstract concept.
Royale
not rated yet Aug 17, 2011
YouAreRight, you are right. I don't think anyone is too dumb to grasp the concept, just not in the straightforward way you would think. They basically talk about the comparison with computer networking; really meaning self-building self-maintaining computer networks, an area of huge study now.
shockr
not rated yet Aug 19, 2011
@CHollman82

My point exactly. I've been using computers since 1985, I've seen protocols come and go. I know how this all works. My point was, are people paying for someone to get bored and start comparing protocol structures to biological systems?

Looks more like she's cherry-picked her results to conform to this 'hourglass'. I use FTP all the time (hat-tip to Antialias).
