The 160-mile download diet: Local file-sharing drastically cuts network load

Aug 19, 2008

(PhysOrg.com) -- Ever since Bram Cohen invented BitTorrent, Web traffic has never been the same. Whether that's a good thing or a bad thing, however, is a matter of debate.

Peer-to-peer networking, or P2P, has become the method of choice for sharing music and videos. While initially used to share pirated material, the system is now used by NBC, BBC and others to deliver legal video content and by Hollywood studios to distribute movies online. Experts estimate that peer-to-peer systems generate 50 to 80 percent of all Internet traffic. Most predict that number will keep going up.

Tensions remain, however, between bandwidth-hungry peer-to-peer users and struggling Internet service providers.

To ease this tension, researchers at the University of Washington and Yale University propose a neighborly approach to file swapping, sharing preferentially with nearby computers. This would allow peer-to-peer traffic to continue growing without clogging up the Internet's major arteries, and could provide a basis for the future of peer-to-peer systems. A paper on the new system, known as P4P, will be presented this week at the Association for Computing Machinery's Special Interest Group on Data Communications meeting in Seattle.

"Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW research assistant professor of computer science and engineering. "At the same time, speeds are increased by about 20 percent."

"We think we have one of the most extensible, rigorous architectures for making these applications run more efficiently," said co-author Richard Yang, an associate professor of computer science at Yale.

The project has attracted interest from industry. A working group formed last year to explore P4P now includes more than 80 members, among them representatives from all the major U.S. Internet service providers and many companies that supply content.

"The project seems to have a momentum of its own," Krishnamurthy said. The name P4P was chosen, he said, to convey the idea that this is a next-generation P2P system.

In typical Web traffic, the end points are fixed. For example, information travels from a server at Amazon.com to a computer screen in a Seattle home and the Internet service provider chooses how to route traffic between those two fixed end points. But with peer-to-peer file-sharing, many choices exist for the data source because thousands of users are simultaneously swapping pieces of a larger file. Right now the choice of P2P source is random: A college student in a dorm room would be as likely to download a piece of a file from someone in Japan as from a classmate down the hall.
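
To make the contrast concrete, here is a minimal sketch, in Python, of the two selection policies the article describes. The peer list, ISP labels, and function names are invented for illustration and do not come from the paper or from any real BitTorrent client.

    import random

    # Each candidate peer is tagged with the network it sits in.
    # The addresses and ISP labels here are made up for the example.
    peers = [
        {"addr": "10.0.0.5",     "isp": "campus-net"},   # classmate down the hall
        {"addr": "203.0.113.9",  "isp": "tokyo-isp"},     # peer in Japan
        {"addr": "198.51.100.2", "isp": "seattle-isp"},
    ]

    def pick_random(peers):
        """Status quo: every peer is equally likely, regardless of distance."""
        return random.choice(peers)

    def pick_local_first(peers, my_isp):
        """Locality-aware: prefer peers on the same network when any exist."""
        local = [p for p in peers if p["isp"] == my_isp]
        return random.choice(local) if local else random.choice(peers)

    print(pick_random(peers))                             # could be anyone, anywhere
    print(pick_local_first(peers, my_isp="campus-net"))   # favors the nearby peer

In the status-quo policy every peer in the swarm is equally likely; the locality-aware variant simply prefers peers that share the downloader's network whenever any are available.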

"We realized that P2P networks were not taking advantage of the flexibility that exists," Yang said.

For the networks considered in the field tests, researchers calculated that the average peer-to-peer data packet currently travels 1,000 miles and takes 5.5 metro-hops, which are connections through major hubs. With the new system, data traveled 160 miles on average and, more importantly, made just 0.89 metro-hops, dramatically reducing Web traffic on arteries between cities where bottlenecks are most likely to occur.

Tests also showed that right now only 6 percent of file-sharing is done locally. With the tweaking provided by P4P algorithms, local file sharing increased almost tenfold, to 58 percent.

The P4P system requires each Internet service provider to supply a number that acts as a weighting factor for routing choices, so cooperation between the provider and the file-sharing host is necessary. Key to the system, however, is that it does not force companies to disclose how they route Internet traffic.
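
As a rough sketch of how such a weighting factor might be used on the client side (the zone names, weight values, and scoring rule below are assumptions for illustration, not the interface defined in the P4P paper), a peer could rank download candidates by an opaque cost the provider publishes, without ever seeing the provider's routing tables.

    # Hypothetical ISP-supplied weights: lower means "cheaper" for the provider,
    # e.g. traffic that stays inside its own network. The values are invented.
    isp_weight = {
        "same-pop": 1,    # same metro point of presence
        "same-isp": 5,    # elsewhere on the provider's network
        "external": 50,   # crosses an inter-city backbone link
    }

    candidates = [
        ("peer-a", "same-pop"),
        ("peer-b", "external"),
        ("peer-c", "same-isp"),
    ]

    def rank_peers(candidates, weights):
        """Sort candidate peers by the opaque cost the ISP publishes.
        The client never learns why a link is expensive, only that it is."""
        return sorted(candidates, key=lambda c: weights[c[1]])

    for name, zone in rank_peers(candidates, isp_weight):
        print(name, zone, isp_weight[zone])

Lower weights steer traffic toward peers that are cheap for the provider to reach, which is how local sharing could rise without the provider ever revealing its internal topology.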

Other authors of the paper are Haiyong Xie, a Yale graduate now working at Akamai Technologies Inc., Yanbin Liu, at IBM's Thomas J. Watson Research Center, and Avi Silberschatz, professor and chair of computer science at Yale. The UW research was supported by the National Science Foundation.

Provided by University of Washington


User comments

Modernmystic
Aug 19, 2008
I don't even have a P2P program on my computer, so this impacts me very little directly. Indirectly, though, it could really help out the non-P2P user through huge bandwidth savings, without degrading service for those who do use P2P.

Sounds pretty win-win.
ancible
Aug 20, 2008
Is this likely to allow general (or perhaps even more specific) geographic tracking by authorities and others of those who use such systems?
pup
Sep 08, 2008
hmm"But with peer-to-peer file-sharing, many choices exist for the data source because thousands of users are simultaneously swapping pieces of a larger file. Right now the choice of P2P source is random: A college student in a dorm room would be as likely to download a piece of a file from someone in Japan as from a classmate down the hall. ..."

it is NOT random and all these guys and companys are esently doing is adding in finer grained DHT routing ,they are spending all this money on taking the existing free java codebase of the likes of Azurius/Voze and addingin their extentions to try and take control of the flow
of users data, rather that do the right thing and just pay a few coders to improve the free codebase and add finer grained DHT routing.

we have been saying add this LAN, then WAN,then ISP-UBr, ISP router etc ,etc for a long time now but The ISPs didnt want to know as they want to exclusively control the flow and restrict your content.

if they really wanted to do the right thing they could be for werse than turn ON Multicasting all the way to the end users for free as its already existing in all thoer ISP routers and related kit, and retrofit Multicasting into the current AZ p2p DHT alongside that finer grained routing i post above.

if you already run 2 or more AZ/Vuse apps on your local (wireless)LAN collecting the same torrent, AZ already knows to use that LAn connection before it also uses the WAN to your ISP to transfer the file so this p4p is NOTHING NEW, and mearly a side line to the existing free code.

dont let this promise od something new distract you from asking for real Multicast P2p or at least taking the time to write a Multicast tunnel and retrofitting MC DHT to the existing codebase to bypass the ISPs refusal to active real Multicasting to your end user desk kit.
