Computing at the speed of light: Team takes big step toward much faster computers

May 18, 2015
An overhead view of a new beamsplitter for silicon photonics chips; the device is about one-fiftieth the width of a human hair. Credit: Dan Hixson/University of Utah College of Engineering

University of Utah engineers have taken a step forward in creating the next generation of computers and mobile devices capable of speeds millions of times faster than current machines.

The Utah engineers have developed an ultracompact beamsplitter—the smallest on record—for dividing light waves into two separate channels of information. The device brings researchers closer to producing silicon photonic chips that compute and shuttle data with light instead of electrons. Electrical and computer engineering associate professor Rajesh Menon and colleagues describe their invention today in the journal Nature Photonics.

Silicon photonics could significantly increase the power and speed of machines such as supercomputers, data center servers and the specialized computers that direct autonomous cars and drones with collision detection. Eventually, the technology could reach home computers and mobile devices and improve applications from gaming to video streaming.

"Light is the fastest thing you can use to transmit information," says Menon. "But that information has to be converted to electrons when it comes into your laptop. In that conversion, you're slowing things down. The vision is to do everything in light."

Photons of light carry information over the Internet through fiber-optic networks. But once a data stream reaches a home or office destination, the photons of light must be converted to electrons before a router or computer can handle the information. That bottleneck could be eliminated if the data stream remained as light within computer processors.

"With all light, computing can eventually be millions of times faster," says Menon.

Credit: Dan Hixson/University of Utah College of Engineering

To help do that, the U engineers created a much smaller form of a polarization beamsplitter (which looks somewhat like a barcode) on top of a silicon chip that can split guided incoming light into its two polarization components. Previously, such a beamsplitter measured more than 100 by 100 microns. Thanks to a new algorithm for designing the splitter, Menon's team has shrunk it to 2.4 by 2.4 microns, or one-fiftieth the width of a human hair and close to the limit of what is physically possible.
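For a rough sense of scale, here is a minimal back-of-envelope sketch using the figures quoted above; the hair width of about 120 micrometres is an illustrative assumption, not a number from the paper.

```python
# Back-of-envelope size comparison for the beamsplitter described above.
# Assumption (not from the paper): a human hair is taken to be ~120 micrometres wide.
old_side_um = 100.0    # earlier polarization beamsplitters: roughly 100 x 100 microns
new_side_um = 2.4      # the Utah device: 2.4 x 2.4 microns
hair_width_um = 120.0  # assumed typical hair width

area_reduction = (old_side_um / new_side_um) ** 2
hair_fraction = hair_width_um / new_side_um

print(f"Footprint shrinks by a factor of roughly {area_reduction:,.0f} in area")
print(f"Device side is about 1/{hair_fraction:.0f} of a {hair_width_um:.0f}-micron hair")
```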

The beamsplitter would be just one of a multitude of passive devices placed on a silicon chip to direct light in different ways. By shrinking them down in size, researchers will be able to cram millions of these devices on a single chip.

Potential advantages go beyond processing speed. The Utah team's design would be cheap to produce because it uses existing fabrication techniques for creating silicon chips. And because photonic chips shuttle photons instead of electrons, mobile devices such as smartphones or tablets built with this technology would consume less power, have longer battery life and generate less heat than existing devices.

The first supercomputers using silicon photonics—already under development at companies such as Intel and IBM—will use hybrid processors that remain partly electronic. Menon believes his beamsplitter could be used in those computers in about three years. Data centers that require faster connections between computers also could implement the technology soon, he says.


More information: "An integrated-nanophotonics polarization beamsplitter with 2.4 × 2.4 µm² footprint," Nature Photonics, DOI: 10.1038/nphoton.2015.80


17 comments


topkill
3 / 5 (2) May 18, 2015
This is all good, but not "millions of times faster" than today's computers. When an electron goes "into a circuit", another one pops out of the other end essentially near the speed of light. So not some giant net gain in the performance of the circuit itself.
El_Nose
3 / 5 (2) May 18, 2015
@topkill

I totally agree. I think they were referring to exa-scale computing: if you start at 1 GHz, a factor of 10^6 gets you to 1 PHz. I think that is the only plausible excuse for that sentence.

You are right, computations essentially happen at the speed of light now... well, a little slower. The promise of optical computing is a drastic reduction in the power needed to do processing and an almost total elimination of heat. Remember, we could make 10 GHz chips if we wanted, but it would take too much power, and you start to get into energy densities that rival a nuclear reactor sitting in your living room by your feet or on your lap.
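A minimal sketch of the exa-scale arithmetic El_Nose describes; the 1 GHz baseline clock is an assumed figure for illustration, not one from the article.

```python
# Sketch of the "millions of times faster" arithmetic, assuming a ~1 GHz baseline clock.
baseline_hz = 1e9   # assumed clock rate of a current electronic processor
speedup = 1e6       # the "millions of times" figure from the article
target_hz = baseline_hz * speedup

print(f"{baseline_hz:.0e} Hz x {speedup:.0e} = {target_hz:.0e} Hz, i.e. 1 PHz")
```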
Protoplasmix
3 / 5 (2) May 18, 2015
What type of memory can compete with holography in terms of amount of data stored per physical volume, data integrity (each piece of a hologram is encoded with the entire hologram), and access time?

Useful qualities of a photon include intensity, frequency, phase, polarization, it's a boson, and it's massless.

The qualities of an electron that best those of a photon would be...?
lengould100
1 / 5 (1) May 19, 2015
I can see some processes that might gain a significant performance (speed) advantage from this technology, such as conversion of data moving to or from the processor onto a communication fiber, and the capability to implement much higher-rate serial fiber links for many more purposes in a system integration (controller to data head in a disk drive, connection of addressable arrays of disk drives to processor units, inter-processor communication in massively parallel implementations). But I agree with the above: for average consumer/business users, the result of this technology will certainly not be perceived overall system speed improvements in the 10^6+ range, as claimed in the promos.
marko
3 / 5 (1) May 19, 2015
Surely a more compact and potentially speedy design can be made with plasmonic waveguide logic.
antialias_physorg
5 / 5 (3) May 19, 2015
A major advantage is that photonic circuits don't produce nearly as much heat as electric ones, which in turn means you can fully leverage the third dimension when designing your components.
nathj72
3.7 / 5 (3) May 19, 2015
Silicon photonics is being pursued as an interconnect technology, not for processing data.

Photonic-based circuits do not make sense for processing. The logic blocks have to be too large because you're dealing with light that has a wavelength of 500 nm or more. Intel chose 1310 nm for the wavelength of light, probably because fiber has a low-loss window there. Electronic logic blocks, meanwhile, are on the order of tens of nanometers and shrinking. It is extremely unlikely that we would ever use an optical processor for anything but routing applications.
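For a rough sense of the scale mismatch nathj72 describes, the sketch below treats lambda/(2n) as a crude lower bound on a guided-wave feature; the silicon refractive index of about 3.5 at 1310 nm and the 20 nm transistor feature are illustrative assumptions, not figures from the article.

```python
# Crude comparison of a diffraction-limited photonic feature with a transistor feature.
# Assumptions (illustrative only): n_silicon ~ 3.5 at 1310 nm, transistor feature ~ 20 nm.
wavelength_nm = 1310.0   # wavelength nathj72 cites for Intel's silicon photonics
n_silicon = 3.5          # assumed refractive index of silicon at 1310 nm
transistor_nm = 20.0     # assumed order-of-magnitude transistor feature size

photonic_floor_nm = wavelength_nm / (2.0 * n_silicon)  # rough guided-wave feature floor

print(f"Rough photonic feature floor: ~{photonic_floor_nm:.0f} nm")
print(f"That is ~{photonic_floor_nm / transistor_nm:.0f}x a {transistor_nm:.0f} nm transistor feature")
```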
syndicate_51
1 / 5 (1) May 19, 2015
This is all good, but not "millions of times faster" than today's computers. When an electron goes "into a circuit", another one pops out of the other end essentially near the speed of light. So not some giant net gain in the performance of the circuit itself.


Now add different wavelengths of light in all those splitters and you are looking at a mathematical formula that involves geometric permutations.

So ya millions of times faster!
Timelord
1 / 5 (1) May 19, 2015


This is all good, but not "millions of times faster" than today's computers. When an electron goes "into a circuit", another one pops out of the other end essentially near the speed of light. So not some giant net gain in the performance of the circuit itself.


Now add different wavelengths of light in all those splitters and you are looking at a mathematical formula that involves geometric permutations.

So ya millions of times faster!


You're fully correct, syndicate_51.

Not just different wavelengths, but simply direct COLOR transmission.
Transmission via nanotubes between the components.
Transmission via 4-core fiber interconnecting different global systems.
Security is 100%, speed is around lightspeed!
Take a look at MY setup.

https://communiti...eas/1017

It's computing with Nibbles and Quads within a Quantum System
pepe2907
5 / 5 (1) May 19, 2015
2400 x 2400 nanometers is quite huge compared with electronics, and if that's close to the physical limit, then photonic circuits, although fast, would have to be relatively large or relatively simple. It looks quite early to put them in cellphones. :)
pepe2907
not rated yet May 19, 2015
And by the way, for different wavelengths of light you would need different circuits (because those types of elements work only at a specific wavelength). And unfortunately nobody has yet imagined any type of compact interface to holographic memory. :)
Protoplasmix
not rated yet May 19, 2015
And unfortunately nobody has yet imagined any type of compact interface to holographic memory.
Um, technically you interface with it using a photon -- reference photon in, data-encoded photon out, just that fast. Perhaps you meant interface as in with human physiology? Imagine a display the size of two contact lenses, having perfect optics (because optical elements are virtual in the software's maths*), giving the wearer better vision than an eagle and a display of data every bit as 3D-realistic as human physiology can experience.

* Need to do sophisticated maths with the data encoded photon? Mix it with other photons (also known as interference) that have been encoded to provide a proper conformal mapping between the desired calculation and the mixing, to perform operations from simple addition to calculus to geometric algebra...
El_Nose
5 / 5 (1) May 20, 2015
@syndicate_51

you voted a 1 to everyone you didn't agree with

The reason your answer is incorrect is that you would need a new core for each split wavelength. This is not the speed-up you think it is; it is an example of the very high parallelism that optical computers will bring with multiplexed light. So in this fashion you can only increase processing power, not speed. That is the biggest advantage, coupled with very little heat. So optical computers will have huge bandwidth but will process the information at the same speed.

The drift velocity of electrons is very small compared to the speed of light; fortunately for us, an electrical wave travels at around 50-99% of c, and that upper bound is close to the speed of light in air.

The next issue is timing: info has to be timed to come out in the order it was received; fast calculations mean nothing if they come out of order relative to the slow ones.

This is why "millions of times faster" is technically incorrect.
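A toy model of the distinction El_Nose draws between throughput and per-signal speed; the channel count, per-channel data rate, and link length below are hypothetical numbers chosen only for illustration.

```python
# Toy model: wavelength multiplexing multiplies throughput, not per-signal speed.
# All numbers below are hypothetical, chosen only to illustrate the distinction.
channels = 64                    # hypothetical number of wavelength channels
rate_per_channel_gbps = 25.0     # hypothetical data rate per channel
link_length_m = 0.05             # hypothetical 5 cm on-chip link
group_velocity = 3.0e8 / 3.5     # light in silicon is slowed by the refractive index (~3.5)

aggregate_gbps = channels * rate_per_channel_gbps
latency_ns = link_length_m / group_velocity * 1e9

print(f"Aggregate throughput: {aggregate_gbps:.0f} Gbit/s over {channels} channels")
print(f"One-way latency: ~{latency_ns:.2f} ns, regardless of how many channels you add")
```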

topkill
not rated yet May 20, 2015
@Syndicate and timelord,
You guys are both assuming things about what is possible. pepe and el_nose have both pointed out flaws in your assumptions, and one of you has clearly voted a 1 to everyone who disagrees with you... without waiting to understand the debate and whether you might be wrong.
Having a single component (a beamsplitter in this case) does not mean that 1) it handles every wavelength imaginable, or even more than one specific wavelength; 2) the other components needed to make actual computational devices are available; 3) as nathj points out, the wavelength of the light itself limits the size of the "circuits," so you can't jam as many into the same space. And 4) you're confusing the speed of electron drift with the practical speed of a signal passing through a wire.

You seem to be conflating quantum computers and qbits with optical computing. Very different things with very different strengths, weaknesses and hence practical applications.
Timelord
not rated yet May 21, 2015
@topkill... first of all, I don't downvote in technical issues. You have my ideas and views wrongly interpreted. Nothing to do with beamsplitters. I'm using a connection between elements with 4 (four) fibers or nanotubes in parallel transmission; this gives me ONE quad per single timeframe transmission. 1 quad = 4 nibbles. One nibble (4 bits of memory, old school) can hold only 1 out of 4 colors at a single specific timeframe, or it is ZERO '0', which is NOT the same as a quantum '0'. When I use this system with the present fiber transmissions (in serial), the speed goes up by a mere 9.75 times.
Just take a good look in the Intel Communities. https://communiti...eas/1017
Protoplasmix
not rated yet May 21, 2015
So I heard, "There are 10 kinds of people in the world, those who understand binary and those who don't."

The full versatility of the photon is realized with an analog, quantum mechanical platform. See, for example, the APS.org article "Focus: Measuring the Shape of a Photon," where it's mentioned, "There is so much flexibility that a single photon could represent any letter in the alphabet, for example, or even a quantum combination (superposition) of several letters."

In today's binary machines, both memory and processing are discrete. With light, memory is a continuous 2d plane (and can act as its own registers and arithmetic logic units) and calculations from simple to complex are processed in a single pass instead of many machine cycles...
Moebius
not rated yet May 23, 2015
This is all good, but not "millions of times faster" than today's computers. When an electron goes "into a circuit", another one pops out of the other end essentially near the speed of light. So not some giant net gain in the performance of the circuit itself.


I'm not sure what's wrong with your reasoning beyond being an oversimplification, but I'm thinking there are reasons not mentioned in the article why it could be millions of times faster. For one thing, speed depends on how the medium is handled, electrons vs. photons, and the handling of photons could easily be vastly more efficient. Virtually every article I've read in the past also predicted ridiculously fast speeds. I'm guessing the guy wasn't exaggerating; associate professors generally speak precisely when referring to their subject of expertise.
