339 Gbps: High-energy physicists smash records for network data transfer

Nov 23, 2012 by Allison Benter

(Phys.org)—Physicists led by the California Institute of Technology (Caltech) have smashed yet another series of records for data-transfer speed. The international team of high-energy physicists, computer scientists, and network engineers reached a transfer rate of 339 gigabits per second (Gbps)—equivalent to moving four million gigabytes (or one million full-length movies) per day—nearly doubling last year's record. The team also set a new record for a two-way transfer on a single link by sending data at 187 Gbps between Victoria, Canada, and Salt Lake City.
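The "four million gigabytes per day" figure follows directly from the 339 Gbps rate. A quick sanity check of that conversion, assuming decimal (SI) units throughout:

```python
GBPS = 339  # demonstrated aggregate rate, in gigabits per second

# Convert gigabits/s to bytes/s, then scale to a full day (86,400 s).
bytes_per_sec = GBPS * 1e9 / 8            # 42.375e9 bytes per second
gb_per_day = bytes_per_sec * 86400 / 1e9  # decimal gigabytes per day

print(f"{gb_per_day:,.0f} GB/day")  # ≈ 3,661,200 GB/day, roughly four million
```

The exact result, about 3.66 million GB/day, is what the article rounds up to "four million gigabytes per day."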

The achievements, the researchers say, pave the way for the next level of data-intensive science—in fields such as astrophysics, genomics, meteorology, and global climate tracking. For example, last summer's discovery at the Large Hadron Collider (LHC) in Geneva of a new particle that may be the long-sought Higgs boson was made possible by a global grid of computational and data-storage facilities that transferred more than 100 petabytes (100 million gigabytes) of data in the past year alone. As the LHC continues to slam protons together at higher rates and with more energy, the experiments will produce an even larger flood of data—reaching the exabyte range (a billion gigabytes).

The researchers, led by Caltech, the University of Victoria, and the University of Michigan, together with Brookhaven National Lab, Vanderbilt University, and other partners, demonstrated their achievement at the SuperComputing 2012 (SC12) conference, November 12–16 in Salt Lake City, Utah. They used wide-area network circuits connecting Caltech, the University of Victoria Computing Center in British Columbia, the University of Michigan, and the Salt Palace Convention Center in Utah. While setting the records, they also demonstrated other state-of-the-art methods, such as software-defined intercontinental networks and direct interconnections between computer memories over the network between Pasadena and Salt Lake City.

"By sharing our methods and tools with scientists in many fields, we aim to further enable the next round of scientific discoveries, taking full advantage of 100-Gbps networks now, and higher-speed networks in the near future," says Harvey Newman, professor of physics at Caltech and the leader of the team. "In particular, we hope that these developments will afford physicists and students throughout the world the opportunity to participate directly in the LHC's next round of discoveries as they emerge."

As the demand for "Big Data" continues to grow exponentially—both in major science projects and in the world at large—the team says they look forward to next year's round of tests using network and data-storage technologies that are just beginning to emerge. Armed with these new technologies and methods, the Caltech team estimates that they may reach 1-terabit-per-second (a thousand Gbps) data transfers over long-range networks by next fall.
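To put the 1-terabit-per-second goal in perspective, one can estimate how long it would take to move the LHC grid's roughly 100 petabytes of annual transfers over a single link at the demonstrated rate versus the projected one. A rough sketch, again assuming decimal units:

```python
PETABYTE_GB = 1e6              # decimal gigabytes per petabyte
data_gb = 100 * PETABYTE_GB    # ~100 PB moved by the LHC grid in the past year

days = {}
for gbps in (339, 1000):
    seconds = data_gb * 8 / gbps   # GB -> gigabits, then divide by the rate
    days[gbps] = seconds / 86400
    print(f"{gbps:>4} Gbps: {days[gbps]:.1f} days to move 100 PB")
```

At 339 Gbps the year's worth of LHC data would take about 27 days of sustained transfer; at 1 Tbps, under 10 days.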

More information about the demonstration can be found at supercomputing.caltech.edu/.


Related Stories

New Internet2 Land-Speed Record: 6.63 Gigabits per Second

Sep 02, 2004

Scientists at the California Institute of Technology (Caltech) and the European Organization for Nuclear Research (CERN), along with colleagues at AMD, Cisco, Microsoft Research, Newisys, and S2io have set a new Internet2 land-speed reco ...

A Speed Record for Data Flow: 6.25 Gigabits per Second

Apr 28, 2004

A land speed record for data flow, 6.25 gigabits per second (average rate) moving over an 11,000-km course from Los Angeles to Geneva, Switzerland, has been set by a consortium of scientists from the CERN lab in Geneva and Caltech ...

Physicists Set New Record for Network Data Transfer

Dec 13, 2006

An international team of physicists, computer scientists, and network engineers led by the California Institute of Technology, CERN, and the University of Michigan and partners at the University of Florida and Vanderbilt, ...

Three DOE labs now connected with ultra-high speed network

Nov 14, 2011

The U.S. Department of Energy (DOE) is now supporting scientific research at unprecedented bandwidth speeds – at least ten times faster than commercial Internet providers – with a new network that connects thousands ...


User comments: 8


alfie_null
not rated yet Nov 24, 2012
"data-transfer speed"

Rate.

Whenever I read about a record-breaking data communication, I wonder what steps they would propose to handle the data at the endpoints. The rates far exceed the capacity of ordinary disks, backplanes/busses, and LANs.
charlesmiller000
1 / 5 (1) Nov 24, 2012
Bytes = bits divided by 8.
Therefore Gigabyte throughput should be 42.375GB/s, not 4GB/s.
Who does the math on these articles?
zz6549
not rated yet Nov 24, 2012
Interesting observation made possible by Physorg's "Related articles" section.

In 2004, the "Internet Land-Speed Record" was 6.63 Gb/s (http://phys.org/n...ml#nRlv)
cdt
5 / 5 (1) Nov 25, 2012
charlesmiller000, 42.375 GB/s x 60 x 60 x 24 = 3,661,200 GB/day, which while not exactly 4 million is at least in the ballpark. I'm guessing you missed the "million".
charlesmiller000
not rated yet Nov 25, 2012
To cdt: You are correct; I missed the million GB/DAY. Thanks.
gmurphy
not rated yet Nov 26, 2012
@alfie_null, such speeds are only necessary along the tier-one provider trunk lines; once they branch out into geographically localised exit points, the data rate will have been significantly reduced.
TheSexyTaco
1 / 5 (2) Nov 27, 2012
Wow, imagine what could happen if every TeraBitPerSecond service package came with a mandatory "Open Mosix" software applet that you were required to install on your PC? Could you imagine the potential of a planet scale super-cluster transferring data at speeds that far exceed that of a normal computer motherboard bus? We could have the entire human genome mapped within a year at least!
chrisfc
not rated yet Nov 27, 2012
Now if only the file transfer protocols out there could keep up. Try transferring data using any TCP based protocol on that link with a few milliseconds of latency.
