Wireless data centers could be faster, cheaper, greener

September 27, 2012 by Bill Steele
A packet of data contains the geographic address of the server to which it should be delivered. Each server that receives a packet sends it on to the server most likely to be closer to the destination, whether that's on the other side of its own rack or on the next rack over.

(Phys.org)—Cornell computer scientists have proposed an innovative wireless design that could greatly reduce the cost and power consumption of massive cloud computing data centers, while improving performance.

In the "cloud," data is stored and processed in remote data centers. Economies of scale let cloud providers offer these services at far lower cost than buying and maintaining one's own equipment.

But data centers with tens of thousands of computers draw tens of thousands of kilowatts.

"Reducing power consumption would not only cut costs but would be a benefit to the environment," said Hakim Weatherspoon, assistant professor of computer science. Weatherspoon; Emin Gün Sirer, associate professor of computer science; graduate student Ji-Yong Shin; and Darko Kirovski of Microsoft Research have prepared a feasibility study for what they call a "Cayley Data Center," based on wireless networking and named for mathematician Arthur Cayley, who laid out in 1854 the mathematics they used in their design.

Their proposal is available in the Cornell eCommons information repository and will be presented at the Eighth ACM/IEEE Symposium on Architectures for Networking and Communications Systems Oct. 29-30 at the University of Texas at Austin.

The design was inspired by the availability of a new 60 gigahertz (GHz) wireless transceiver developed at Georgia Tech based on inexpensive CMOS chip technology. The transceiver transmits in a narrow cone, and 60 GHz radiation is quickly attenuated by the air and reaches only about 10 meters from the source, so the device can be used for short-range communication that will not interfere with other activity nearby.
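As a rough, illustrative link-budget sketch (not from the paper), the short range follows from simple physics: spreading loss grows with frequency and distance, and 60 GHz sits on an oxygen-absorption peak. The ~15 dB/km absorption figure below is an assumed round number for that peak.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 3e8  # speed of light, m/s
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

f = 60e9           # 60 GHz carrier
o2_db_per_km = 15  # assumed oxygen absorption near the 60 GHz peak

for d in (1, 10, 100):
    loss = fspl_db(d, f) + o2_db_per_km * d / 1000
    print(f"{d:4d} m: {loss:6.1f} dB")
```

Each tenfold increase in distance adds 20 dB of spreading loss on top of the absorption, which is why a 60 GHz link fades out within roughly 10 meters at low transmit power.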

Servers in a Cayley Data Center would be mounted in cylindrical racks. Each server could send 60 GHz wireless signals to nearby racks or to other servers in the same rack.

In a conventional data center, servers are stacked in square racks like pizza boxes in a delivery truck. On top of every stack is a "switch"—a fairly expensive and power-hungry box that routes signals in and out of the servers and sends them off on wires to other servers, based on their electronic addresses.

In the proposed design, servers are mounted vertically in cylindrical racks several tiers high. Think of a wedding cake in which every tier is the same diameter, and one wedge-shaped slice of any tier represents a server. A 60 GHz transceiver is located at the outside and inside end of each server. With racks arranged in rows, each rack has line-of-sight wireless connectivity to eight other racks (except at the edges), and transceivers at the inner ends connect servers within the rack.

Instead of depending on switches, servers do their own routing, based on the physical location of the destination. Signals pass rack to rack, each time moving in the direction that looks like the shortest route across the floor. A Cayley Data Center would be more resistant to failure, the researchers said, because even if an entire rack died, signals could go around it. A simulation showed that 59 percent of servers in a center would have to fail before communication broke down.
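The greedy, location-based forwarding described above can be sketched as a toy model (my own simplification on an unbounded grid of rack coordinates; the actual Cayley routing protocol is more elaborate and handles detours around local minima):

```python
def dist2(a, b):
    """Squared Euclidean distance between two rack coordinates."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

def greedy_route(src, dst, failed=frozenset()):
    """Greedy geographic routing on a grid of racks: each hop goes to the
    live line-of-sight neighbor closest to the destination. Returns the
    path, or None if routing gets stuck."""
    path, cur = [src], src
    while cur != dst:
        x, y = cur
        # the eight line-of-sight neighboring racks, minus failed ones
        neighbors = [(x + dx, y + dy)
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0) and (x + dx, y + dy) not in failed]
        if not neighbors:
            return None
        nxt = min(neighbors, key=lambda n: dist2(n, dst))
        if dist2(nxt, dst) >= dist2(cur, dst):
            return None  # local minimum; a real protocol would route around it
        path.append(nxt)
        cur = nxt
    return path

# A dead rack at (1, 1) is simply bypassed:
print(greedy_route((0, 0), (3, 3), failed={(1, 1)}))
# → [(0, 0), (0, 1), (1, 2), (2, 3), (3, 3)]
```

Because every rack only needs to know its own position, its neighbors, and the destination's coordinates, there is no central switch to fail—which is the intuition behind the 59 percent failure-tolerance figure from the simulation.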

Cost comparisons are difficult because the 60 GHz transceivers are not yet commercially available for data centers, but by making some guesses, the researchers suggest that the cost of wireless connectivity could be as low as 1/12 that of conventional switches and wires for a hypothetical data center with 10,000 servers. Power consumption would go down by a similar amount, they said, and with no wires, maintenance would be easier.

"We argue that 60 GHz could revolutionize the simplicity of integrating and maintaining data centers," the researchers concluded in their paper.

Related Stories

Bouncing signals off ceiling can rev up data centers

December 21, 2011

(PhysOrg.com) -- Researchers have a startlingly upbeat idea for data center managers coping with packed rooms, Internet traffic bursts, and high costs looming in having to reconfigure data center designs. The researchers ...

Sun Introduces New Metric for Server Efficiency

December 7, 2005

Evaluating a new server for your data center is no longer simply a matter of measuring raw performance. With today's increasing demands, you also need to consider how much power, air conditioning and space a server consumes. ...

Intel does math on oil-dunk test for cooler servers

September 3, 2012

(Phys.org)—Intel just finished a yearlong test of Green Revolution Cooling's mineral-oil server-immersion technology. Intel has tried immersing servers in the company's oil formulation to keep the servers cool and they ...

New Wireless 60 GHz Standard Promises Ultra-Fast Applications

January 15, 2009

(PhysOrg.com) -- Ultra-high-speed wireless connectivity - capable of transferring 15 gigabits of data per second over short distances - has taken a significant step toward reality. A recent decision by an international standards ...

A way to reduce the Internet's energy drain

May 28, 2012

(Phys.org) -- Swiss researchers at EPFL have developed a device intended for monitoring and saving the energy consumed by large data centers. It was developed in collaboration with Credit Suisse, which has used it to equip ...



1 / 5 (2) Sep 27, 2012
Wouldn't it be awesome to prank such a datacenter by simply switching around the position of two stacks?
1 / 5 (3) Sep 27, 2012
60 GHz translates to roughly 6 GByte/sec transfer speed - not bad, but what's the latency of the network? Also, how do you diagnose routing issues in this network topology? Compare this to 100 Gbit Ethernet or InfiniBand and it's a bit less revolutionary.
5 / 5 (1) Sep 27, 2012
"But data centers with tens of thousands of computers draw tens of thousands of kilowatts."
"Each server could send 60 GHz wireless signals to nearby racks or to other servers in the same rack."

Sounds like there are tens of thousands of radio transmitters here -

How long can a system admin be here assuming there is a hardware failure on one server and you can't/don't want to shutdown all the servers in the rack plus those in ten meters radius?
3.5 / 5 (4) Sep 27, 2012
How long can a system admin be here assuming there is a hardware failure on one server and you can't/don't want to shutdown all the servers in the rack plus those in ten meters radius?

They don't need to be high power. At big events there are many people around you using transmitters WAY more powerful than these (i.e. cell phones; even while they are not in active use they communicate with the towers, which can be kilometers away). And you don't see people keeling over.
3.7 / 5 (3) Sep 27, 2012
60 GHz translates to roughly 6 GByte/sec transfer speed

How do you reckon?

A single 60 GHz carrier wave can transmit 30 billion symbols per second, but one symbol doesn't have to be one byte in size. The amount of data you can carry depends on your signal-to-noise ratio and the resolution at which you can distinguish differences in the received signal.

For example, a quadrature modulated signal can use two carrier waves very close to 60 GHz, and the receiver compares the phase difference between the two waves. If they are 1 MHz apart, the receiver sees this constant phase difference, and when one of the carriers lags behind, this difference changes. If you can detect the difference down to a single hertz, you can transmit a symbol with about 20 bits of information.
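As a quick sanity check of that back-of-envelope estimate (illustrative only): a 1 MHz span resolved down to 1 Hz gives about a million distinguishable states per symbol, and the bit count is the base-2 logarithm of the state count.

```python
import math

span_hz = 1_000_000   # separation between the two carriers
resolution_hz = 1     # finest detectable phase-drift difference
states = span_hz // resolution_hz

bits_per_symbol = math.log2(states)
print(f"{bits_per_symbol:.1f} bits per symbol")  # → 19.9 bits per symbol
```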
1 / 5 (1) Sep 27, 2012
Or to be more precise, a quadrature receiver sees a constant phase drift between two different frequencies, unless they're exactly even multiples of one another. The speed of the drift depends on the difference between the two frequencies, and can be altered by advancing or retarding one or both carrier waves.

The same case can be made for traditional AM radio just as well. All you have to do is decide how loud the signal can be and what the quietest signal you can hear is, then subdivide the space between those two into an arbitrary number of discrete steps. If your receiver sees a 1-volt signal peak and can detect it with 1-millivolt resolution, then you can send 9 bits of information with every symbol.
not rated yet Sep 28, 2012
Could this be the next Bluetooth? Low power, high bandwidth, low range PAN connecting all my accessories... Glasses that do both AR and VR, rolled up OLED display, and cpu/mem/storage module hanging around my neck like dog tags..
1 / 5 (1) Sep 28, 2012
nano-scale, bio-compatible, biodegradable electronics?! Hello Johnny Mnemonic!


Humanity's knowledge of the living world around us is exploding. Research is good. Mass-manufacturing and application are not so good. You can not put the cat back in the bag. Find out more first.


Preexisting lifeforms that eat our manufactured toxins? Perhaps the tiger can be domesticated before it eats our children?
not rated yet Sep 28, 2012
As a data center guy, here is some non-theoretical stuff to consider:

1. People put IT equipment in data centers to mitigate risk. Put another way, people buy risk. Presenting how this technology mitigates current risk is paramount. People still build and overbuild data centers to mitigate risk (a lot of it perceived) and pay a lot for lower risk.

2. New technologies are always viewed as higher risk even if the factual basis for how they operate is undeniable.

3. The new technology cost savings will need to be spelled out in a way that would help current operators understand when the savings cover their costs of implementation/switch as well as the costs of abandoning a data center whose cost model was built over a 10 or 20 year term.

4. Most data center investors are real estate investors not technologists. If the technology can reduce cost, increase margins, compress cap rates, or provide some other significant (to them) financial benefit, then this will be adopted faster.
