An Internet 100 times as fast: A new network design could boost capacity

Jun 28, 2010 by Larry Hardesty
In today’s Internet, data traveling through optical fibers as beams of light have to be converted to electrical signals for processing. By dispensing with that conversion, a new network design could increase Internet speeds 100-fold.

(PhysOrg.com) -- The heart of the Internet is a network of high-capacity optical fibers that spans continents. But while optical signals transmit information much more efficiently than electrical signals, they're harder to control. The routers that direct traffic on the Internet typically convert optical signals to electrical ones for processing, then convert them back for transmission, a process that consumes time and energy.

In recent years, however, a group of MIT researchers led by Vincent Chan, the Joan and Irwin Jacobs Professor of Electrical Engineering and Computer Science, has demonstrated a new way of organizing optical networks that, in most cases, would eliminate this inefficient conversion process. As a result, it could make the Internet 100 or even 1,000 times faster while actually reducing the amount of energy it consumes.

One of the reasons that optical data transmission is so efficient is that different wavelengths of light loaded with different information can travel over the same fiber. But problems arise when optical signals coming from different directions reach a router at the same time. Converting them to electrical signals allows the router to store them in memory until it can get to them. The wait may be a matter of milliseconds, but there's no cost-effective way to hold an optical signal still for even that short a time.

Chan’s approach, called “flow switching,” solves this problem in a different way. Between locations that exchange large volumes of data — say, Los Angeles and New York City — flow switching would establish a dedicated path across the network. For certain wavelengths of light, routers along that path would accept signals coming in from only one direction and send them off in only one direction. Since there’s no possibility of signals arriving from multiple directions, there’s never a need to store them in memory.
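
The contrast with a conventional, buffering router can be made concrete with a toy model. The sketch below is not from Chan's papers; the port names, wavelengths and payloads are invented purely to illustrate why a wavelength pinned to a single path never needs electronic buffering.

```python
# Toy contrast between a conventional router, which must buffer (and therefore
# convert to electrical form) whenever two inputs contend for one output, and a
# flow-switched node, where each wavelength is pinned to a single input->output
# pair for the life of the flow, so contention never arises.

class ConventionalRouter:
    def __init__(self):
        self.buffer = []  # electronic memory holding packets that lost contention

    def forward(self, arrivals):
        """arrivals: list of (input_port, output_port) packets in one time slot."""
        sent, busy_outputs = [], set()
        for packet in arrivals:
            _, output = packet
            if output in busy_outputs:
                self.buffer.append(packet)   # optical-to-electrical conversion + queueing
            else:
                busy_outputs.add(output)
                sent.append(packet)
        return sent


class FlowSwitchedNode:
    def __init__(self, wavelength_map):
        # wavelength -> (input_port, output_port), fixed while the flow exists
        self.wavelength_map = wavelength_map

    def forward(self, arrivals):
        """arrivals: list of (wavelength, payload); signals stay optical throughout."""
        # Each wavelength has exactly one source and one destination at this node,
        # so every arrival passes straight through with nothing held in memory.
        return [(self.wavelength_map[wavelength][1], payload)
                for wavelength, payload in arrivals]


node = FlowSwitchedNode({1550.12: ("NYC", "LA"), 1550.92: ("NYC", "Chicago")})
print(node.forward([(1550.12, "bulk transfer"), (1550.92, "nightly backup")]))
```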

Reaction time

To some extent, something like this already happens in today’s Internet. A large Web company like Facebook or Google, for instance, might maintain huge banks of Web servers at a few different locations in the United States. The servers might exchange so much data that the company will simply lease a particular wavelength of light from one of the telecommunications companies that maintains the country’s fiber-optic networks. Across a designated pathway, no other Internet traffic can use that wavelength.

In this case, however, the allotment of bandwidth between the two endpoints is fixed. If for some reason the company's servers aren't exchanging much data, the bandwidth of the dedicated link is being wasted. If the servers are exchanging a lot of data, they might exceed the capacity of the link.

In a flow-switching network, the allotment of bandwidth would change constantly. As traffic between New York and Los Angeles increased, new, dedicated wavelengths would be recruited to handle it; as the traffic tailed off, the wavelengths would be relinquished. Chan and his colleagues have developed network management protocols that can perform these reallocations in a matter of seconds.
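
A minimal sketch of what such a reallocation loop might look like, assuming a per-wavelength capacity and add/drop thresholds that are purely illustrative (the article does not give the actual parameters of the MIT protocols):

```python
# Hypothetical wavelength-reallocation loop for one city pair. The 10 Gb/s
# per-wavelength capacity, the 80%/30% thresholds and the traffic trace are
# illustrative assumptions, not figures from Chan's protocols.

WAVELENGTH_CAPACITY_GBPS = 10.0

def rebalance(wavelengths, traffic_gbps, add_threshold=0.8, drop_threshold=0.3):
    """Return the new number of dedicated wavelengths for this city pair."""
    capacity = wavelengths * WAVELENGTH_CAPACITY_GBPS
    if traffic_gbps > add_threshold * capacity:
        return wavelengths + 1                     # recruit another wavelength
    if wavelengths > 1 and traffic_gbps < drop_threshold * capacity:
        return wavelengths - 1                     # hand one back to the shared pool
    return wavelengths

# Traffic between New York and Los Angeles ramps up, then tails off
# (Gb/s, sampled every few seconds).
wavelengths = 1
for traffic in [5, 9, 17, 26, 24, 12, 4]:
    wavelengths = rebalance(wavelengths, traffic)
    print(f"traffic = {traffic:>2} Gb/s -> {wavelengths} wavelength(s)")
```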

In a series of papers published over a span of 20 years — the latest of which will be presented at the OptoElectronics and Communications Conference in Japan next month — they’ve also performed mathematical analyses of flow-switched networks’ capacity and reported the results of extensive computer simulations. They’ve even tried out their ideas on a small experimental optical network that runs along the Eastern Seaboard.

Their conclusion is that flow switching can easily increase the data rates of optical networks 100-fold, and possibly 1,000-fold with further improvements to the network management scheme. Their recent work has focused on the power savings that flow switching offers: in most applications of information technology, power can be traded for speed and vice versa, and the researchers are trying to quantify that relationship. Among other things, they've shown that even with a 100-fold increase in data rates, flow switching could still reduce the Internet's power consumption.

Growing appetite

Ori Gerstel, a principal engineer at Cisco Systems, the largest manufacturer of network routing equipment, says that several other techniques for increasing the data rate of optical networks, with names like burst switching and optical packet switching, have been proposed, but that flow switching is “much more practical.” The chief obstacle to its adoption, he says, isn’t technical but economic. Implementing Chan’s scheme would mean replacing existing Internet routers with new ones that don’t have to convert optical signals to electrical signals. But, Gerstel says, it’s not clear that there’s currently enough demand for a faster Internet to warrant that expense. “Flow switching works fairly well for fairly large demand — if you have users who need a lot of bandwidth and want low delay through the network,” Gerstel says. “But most customers are not in that niche today.”

But Chan points to the explosion of the popularity of both Internet video and high-definition television in recent years. If those two trends converge — if people begin hungering for high-definition video feeds directly to their computers — flow switching may make financial sense. Chan points at the 30-inch computer monitor atop his desk in MIT’s Research Lab of Electronics. “High resolution at 120 frames per second,” he says: “That’s a lot of data.”
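
For a rough sense of scale, assume the 30-inch display is a common 2560x1600 panel at 24-bit color (the article only specifies "high resolution at 120 frames per second"):

```python
# Back-of-the-envelope bandwidth for the display Chan points to. The 2560x1600
# resolution and 24 bits per pixel are assumptions for a typical 30-inch
# monitor; only the 120 frames per second comes from the article.
width, height = 2560, 1600
bits_per_pixel = 24
frames_per_second = 120

bits_per_second = width * height * bits_per_pixel * frames_per_second
print(f"Uncompressed video: {bits_per_second / 1e9:.1f} Gb/s")  # roughly 11.8 Gb/s
```

Even heavily compressed, a stream like that dwarfs a typical home connection of the time, which is the point of Chan's example.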

User comments: 16

Topperfalkon
not rated yet Jun 28, 2010
Well, whilst it may not currently make a huge amount of sense on a national scale, I'd reckon it would on the international scale. Might be worth updating the equipment in those IXPs...
DaveGee
3.7 / 5 (3) Jun 28, 2010
"As a result, it could make the Internet 100 or even 1,000 times faster while actually reducing the amount of energy it consumes."

Golly, that's certainly an informative statement... 100x or 1000x and hey since we aren't actually responsible for showing the maths... Lets just go all out and say a BAJILLION times or even 10 BAJAZILLION times faster.
El_Nose
4 / 5 (2) Jun 28, 2010
@DaveGee

It's understood that this is a latency issue they are trying to resolve. And small examples were given in the article for the mathophiles to put it in perspective... average latency currently is in the millisecond range -- they are trying to reduce this to the microsecond range by not converting optical signals, so going from, say, 20 milliseconds (0.020 seconds) to, say, the 10-200 microsecond range (0.00001-0.0002 seconds) is easily a 100x increase in speed and quite possibly a 1000x increase in speed.

Here we are saying faster = lag reduction - which, while technically true, is misleading compared with what most people mean by a speed increase.
Parsec
2 / 5 (1) Jun 28, 2010
@DaveGee

It's understood that this is a latency issue they are trying to resolve. And small examples were given in the article for the mathophiles to put it in perspective... average latency currently is in the millisecond range -- they are trying to reduce this to the microsecond range by not converting optical signals, so going from, say, 20 milliseconds (0.020 seconds) to, say, the 10-200 microsecond range (0.00001-0.0002 seconds) is easily a 100x increase in speed and quite possibly a 1000x increase in speed.

Here we are saying faster = lag reduction - which, while technically true, is misleading compared with what most people mean by a speed increase.

There are two ways to measure network speed: lag and bandwidth. Having very low lag won't increase the throughput except for very small amounts of data.
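
The round numbers in the two comments above are easy to verify. The 20 ms and 10-200 microsecond figures are the commenters' assumptions, not numbers from the article:

```python
# Checking the ratio used in the comments above. The 20 ms conventional delay
# and the 10-200 microsecond all-optical range are the commenters' assumed
# figures; neither number appears in the article itself.
conventional_delay = 20e-3            # seconds
optical_delays = (200e-6, 10e-6)      # seconds

for optical in optical_delays:
    speedup = conventional_delay / optical
    print(f"{optical * 1e6:>5.0f} us -> {speedup:,.0f}x lower latency")
# 200 us -> 100x, 10 us -> 2,000x: consistent with "easily 100x and quite possibly 1000x".
```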
trekgeek1
5 / 5 (2) Jun 28, 2010
"...it's not clear that there's currently enough demand for a faster Internet to warrant that expense."

?????????????????

Really? Everyone I've ever spoken to about any internet-related topic has complained of internet speeds being too slow, or wished that they had at least a 100Mb/s connection. And if people do feel that way, they need to realize that they will need faster internet connections for future online content. We'll probably do what we always do in this country; we'll wait until it's absolutely necessary and nearly too late instead of planning ahead and making it easy for ourselves. It's always "I don't need it now" instead of "I might need it later".
gunslingor1
4.3 / 5 (4) Jun 28, 2010
"As a result, it could make the Internet 100 or even 1,000 times faster while actually reducing the amount of energy it consumes."

Golly, that's certainly an informative statement... 100x or 1000x and hey since we aren't actually responsible for showing the maths... Lets just go all out and say a BAJILLION times or even 10 BAJAZILLION times faster.


I've studied this subject. 100-1000 times efficiency increase by eliminating conversion and switching elements is feasible, and may even be a low ball estimate.

If they figure out how to switch optical signals directly, you can expect to be able to watch 80 HD movies streaming over the internet at once, with little required buffering.

It'll happen eventually.

But I agree, physorg needs more math for god's sake.
ODesign
3.6 / 5 (5) Jun 28, 2010
I want faster internet. I live in San Francisco, and for myself and most of the people I talk to lag time has slowly gone up over the last year or two. It used to be that when I entered google or physorg into the URL bar something came up right away. Nowadays, it lags a half second to 5 seconds and maybe I have to try twice because it times out the first time. I'm paying full price for a fast connection and videos and stuff go ok once they connect. It's just getting the connection that's not happening quickly enough.
a_n_k_u_r
2 / 5 (2) Jun 29, 2010
Gunslingor:
you can expect to be able to watch 80 HD movies streaming over the internet at once

Who wants to watch 80 movies at once? But I can imagine a couple of simultaneous video conferences in 3D HD going on in a household.
DaffyDuck
2.5 / 5 (2) Jun 29, 2010
My internet provider, Time Warner Cable, doesn't want it to be any faster (7 Mbps down / 256 Kbps up) because that might hurt their video business. Not sure how this will make any difference.
Robert_Dejournett
not rated yet Jun 29, 2010
I'm not sure what they are talking about increasing. If the bandwidth could be increased between two routers, that's very significant. If latency can be decreased, that's also very significant. One issue is that corporations like Cisco become wedded to the existing technology because it works and has a huge installed base. But if we want to increase the speed/latency without laying more fiber, we'll have to do something. So, this is potentially huge but I wonder what we're missing.

Also, in the internet backbone, latency/speed always need improvement. No such thing as 'fast enough'. It's more in terms of the number of subscribers, not 'I can watch movies just fine right now'. It's more about: if we add 1 million more people, will the internet melt down?
gap
3 / 5 (1) Jun 29, 2010
So how is this improvement any different from noticing that circuit switched networks are more efficient for highly utilized paths than packet switched networks? The main improvement here seems to be recognizing a circuit switch opportunity (though it isn't clear that the opportunities aren't planned) and then having a clever way to do circuit switching on optical networks.
YashYogi
2 / 5 (1) Jun 30, 2010
South Korea has 100 Mbps for homes.
Quantum_Conundrum
4.5 / 5 (2) Jul 03, 2010
The gaming industry is especially concerned with latency, and sadly the bottleneck is almost always in the ISP, not the gaming company or the gamer's computer.

When you are playing an RTS online and doing hundreds of actions per minute, you need that click or hotkey action to happen as close to instantaneously as possible.
===

As for the website thing, I believe that has more to do with the ever increasing complexity of the websites themselves, which are becoming more and more media oriented and more scripted and virtualized. Every time you visit a website now you are basically downloading an application and/or the output of another application.
rfc2616
not rated yet Jul 03, 2010
ODesign: I travel around the San Francisco area a lot, and have found that the initial handshake of a TCP connection is not often multiple seconds there, whereas I do often see it that slow overseas.

Many ISPs' nameservers are under-provisioned for the amount of load they now handle. If you haven't tried this already, experiment with changing your computer's DNS servers directly to Google Public DNS at 8.8.8.8 and 8.8.4.4. If this yields a really significant improvement in the time it takes to connect to a host, complain to your ISP, invest in a 3rd-party commercial DNS service, or switch to Google Public DNS. It certainly makes a big difference with my Verizon connection in Virginia.
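
One way to run the comparison rfc2616 suggests, assuming the third-party dnspython package is installed (the resolvers are the Google Public DNS addresses named above; the test hostname is just an example):

```python
# Rough timing of DNS lookups against different resolvers, in the spirit of
# rfc2616's suggestion. Requires dnspython 2.x (pip install dnspython); the
# test hostname and the list of resolvers are examples, not recommendations.
import time
import dns.resolver

def average_lookup_time(nameserver, hostname="physorg.com", tries=5):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    start = time.perf_counter()
    for _ in range(tries):
        resolver.resolve(hostname, "A")
    return (time.perf_counter() - start) / tries

for ns in ["8.8.8.8", "8.8.4.4"]:  # add your ISP's resolver here to compare
    print(f"{ns}: {average_lookup_time(ns) * 1000:.1f} ms average")
```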
vonrock
not rated yet Jul 05, 2010
This is all only good news: give me power and save me power too. Let me slow down and enjoy my speed. Quantum Con., if you're on a Mac, clicktoflash.com takes the extra output out, speeds things up, saves power too. FREE, but it's nice to donate.
nevdka
not rated yet Jul 05, 2010
Gunslingor:
you can expect to be able to watch 80 HD movies streaming over the internet at once

Who wants to watch 80 movies at once? But I can imagine a couple of simultaneous video conferences in 3D HD going on in a household.


I'd love to be able to watch 80 HD movies at once... Increased bandwidth will allow for higher bitrates, resolutions and framerates. Quad HD video with a 4:4:4 colorspace at 60 fps will chew maybe 20 times the data of BD video. Then there's 3D, which could double that to 40x. Do we need it? No. But then, I thought DVD was awesomely clear when it came out.
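
nevdka's rough 20x figure checks out if you compare raw sample rates, assuming a Blu-ray-style baseline of 1920x1080 at 24 fps with 4:2:0 chroma subsampling (an assumption; the comment doesn't spell out the baseline):

```python
# Ratio of raw sample rates: "quad HD" (3840x2160) 4:4:4 at 60 fps versus a
# Blu-ray-style 1920x1080 4:2:0 stream at 24 fps. The Blu-ray baseline
# parameters are assumptions for the comparison.
def samples_per_second(width, height, fps, samples_per_pixel):
    return width * height * fps * samples_per_pixel

quad_hd = samples_per_second(3840, 2160, 60, 3.0)   # 4:4:4 -> 3 samples per pixel
blu_ray = samples_per_second(1920, 1080, 24, 1.5)   # 4:2:0 -> 1.5 samples per pixel
print(f"{quad_hd / blu_ray:.0f}x the raw data")     # prints 20x
```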