Computers Faster Only for 75 More Years? Physicists determine nature's limit to making faster processors

October 14, 2009 By Lauren Schenkman, Inside Science

With computer processing speeds increasing so dramatically and so regularly, it seems that it shouldn't be too long before the machines become infinitely fast -- except they can't.

A pair of physicists has shown that computers have a speed limit as unbreakable as the speed of light. If processors continue to accelerate as they have in the past, we'll hit the wall of faster processing in less than a century.

Intel co-founder Gordon Moore predicted 40 years ago that manufacturers could double computing speed every two years or so by cramming ever-tinier transistors onto a chip. His prediction became known as Moore's Law, and it has held true throughout the evolution of computers -- the fastest machine today beats out a ten-year-old competitor by a factor of about 30 (five doublings in a decade: 2^5 = 32).

If components are to continue shrinking, physicists must eventually code bits of information onto ever smaller particles. Smaller means faster in the microelectronic world, but physicists Lev Levitin and Tommaso Toffoli of Boston University in Massachusetts have slapped a speed limit on computing, no matter how small the components get.

"If we believe in Moore's laW ... then it would take about 75 to 80 years to achieve this quantum limit," Levitin said.

"No system can overcome that limit. It doesn't depend on the physical nature of the system or how it's implemented, what algorithm you use for computation … any choice of hardware and software," Levitin said. "This bound poses an absolute law of nature, just like the speed of light."

Scott Aaronson, an assistant professor of electrical engineering and computer science at the Massachusetts Institute of Technology in Cambridge, called Levitin's estimate of 75 years extremely optimistic.

Moore's Law, he said, probably won't hold for more than 20 years.

In the early 1980s, Levitin singled out a quantum elementary operation, the most basic task a quantum computer could carry out. In a paper published today in the journal Physical Review Letters, Levitin and Toffoli present an equation for the minimum sliver of time it takes for this elementary operation to occur. This establishes the speed limit for all possible computers.
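The article never prints the equation itself -- a gap several commenters flag below. For context (this is the standard published form of the Margolus-Levitin bound that the new paper builds on, not a quotation from it): a system whose average energy above its ground state is E cannot complete an elementary operation in less time than

\[
t_{\min} \;=\; \frac{h}{4E},
\]

where h is Planck's constant. Equivalently, such a system can perform at most 4E/h -- about 6 × 10^33 -- elementary operations per second for each joule of energy.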

Using their equation, Levitin and Toffoli calculated that, for every unit of energy, a perfect quantum computer could carry out ten quadrillion times more operations each second than today's fastest processors.
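For readers who want to see the arithmetic, here is a back-of-envelope sketch in Python. It is an illustration of the bound above, not the authors' own calculation; the one-joule energy budget and the 3 GHz stand-in for "today's fastest processors" are assumptions.

    # A minimal sketch, not the authors' calculation: the Margolus-Levitin
    # bound caps any physical system at 4*E/h elementary operations per
    # second, where E is its average energy above the ground state.
    # The one-joule budget and the 3 GHz "today's processor" are assumptions.
    H = 6.62607015e-34  # Planck's constant, joule-seconds

    def max_ops_per_second(energy_joules):
        """Margolus-Levitin ceiling on elementary operations per second."""
        return 4.0 * energy_joules / H

    quantum_ceiling = max_ops_per_second(1.0)  # ~6.0e33 ops/s for one joule
    classical_rate = 3.0e9                     # assumed ~3 GHz chip, circa 2009

    print(f"ceiling per joule: {quantum_ceiling:.2e} ops/s")
    print(f"headroom over a 3 GHz chip: {quantum_ceiling / classical_rate:.1e}x")

With these assumed numbers the headroom comes out near 10^24; the story's ten-quadrillion (10^16) figure depends on how the authors normalized processor energy, which the article doesn't spell out. The 10^16 figure does square with Levitin's timeline, though: ten quadrillion is roughly 2^53, and 53 doublings at the 18-month pace often quoted for Moore's Law is about 80 years.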

"It's very important to try to establish a fundamental limit -- how far we can go using these resources," Levitin explained.

The physicists pointed out that technological barriers might slow down Moore's Law as we approach this limit. Quantum computers, unlike electrical ones, can't handle "noise" -- a kink in a wire or a change in temperature can cause havoc. Overcoming this weakness to make quantum computing a reality will take time and more research.

As computer components are packed tighter and tighter together, companies are finding that the newer processors are getting hotter sooner than they are getting faster. Hence the recent trend toward dual- and quad-core processing: rather than build faster processors, manufacturers place them in tandem to keep heat levels tolerable while computing speeds shoot up. Scientists who need to churn through vast numbers of calculations might one day turn to superconducting computers cooled to drastically frigid temperatures. But even with these clever tactics, Levitin and Toffoli said, there's no getting past the fundamental speed limit.

Aaronson called it beautiful that such a limit exists.

"From a theorist's perspective, it's good to know that fundamental limits are there, sort of an absolute ceiling," he said. "You may say it's disappointing that we can't build infinitely fast computers, but as a picture of the world, if you have a theory of physics allows for
infinitely fast computation, there could be a problem with that theory."

© Inside Science News Service. Used with permission.


32 comments


SmartK8
2.3 / 5 (3) Oct 14, 2009
Well, there's always a physical size, or parallel number of the CPUs (or whatever it will be called later on), or even just more computers alone. But I don't personally believe that this limit is it. Anyway, 40 times doubling the current performance, it's unimaginable! I really wonder what kind of software it will be running; if any. Somehow, just based on this sheer number, I take Kurzweil more seriously from now on.
El_Nose
5 / 5 (3) Oct 14, 2009
I am really upset that the proposed limit is not stated.
fuzz54
1 / 5 (1) Oct 14, 2009
Speed is great. Looks like computers will be somewhere between a billion and 100 billion times faster before we hit the wall. But we should be designing smarter, not necessarily faster. Look at what the human brain does with much less wattage. If computers really do get a billion times faster then we are looking at a seriously huge power bill.
denfire
not rated yet Oct 14, 2009
ummm.... this would be an awesome website if it were a tad bit more technical.... i agree with the NOSE... can i get a speed limit here... or a formula!
nkalanaga
5 / 5 (2) Oct 14, 2009
"a perfect quantum computer spits out ten quadrillion more operations each second than today's fastest processors."

Now, all one needs is the speed of today's fastest processor. Parallel processors don't count, only individual, one instruction at a time, processors.
gwrede
4.4 / 5 (14) Oct 14, 2009
"Anyway, 40 times doubling the current performance, it's unimaginable!" Yes. 25 years ago I was using a Kaypro-II, 0.002GHz 8-bit CPU, 0.000064 Gigs of RAM, and no hard drive. Today my portable HD takes a Terabyte of data. That's enough to store the name, address and phone number of every human being on earth.

The funny thing is, that old Kaypro did office tasks just as fast as today's Excel and Word. I also have a 10-year-old laptop with Windows-98 and Office-97. I actually prefer writing and calculating with it, since it is more responsive and feels faster than today's Win+Office on the latest laptops.

40 times doubling the performance would be unimaginable, yes. But I bet that the Windows and Office of the day will quite effortlessly waste enough horse power to make work even slower than today.
jselin
4.7 / 5 (6) Oct 14, 2009
40 times doubling the performance would be unimaginable, yes. But I bet that the Windows and Office of the day will quite effortlessly waste enough horse power to make work even slower than today.


That stupid paperclip will probably be churning away rewriting your paper for you "Did you mean to say (40 pages rewritten)? Your intended audience will find my version 47% more understandable..."
vladik
4.8 / 5 (4) Oct 14, 2009

The funny thing is, that old Kaypro did office tasks just as fast as today's Excel and Word.


Totally agree. Developers were constrained by 2-3 MHz 8-bit CPUs + a tiny amount of RAM, and the algorithms, written in pure assembler, had absolutely magic optimizations. Apply that kind of effort to current software, and you'll get the desired 40-times gain in speed with the same hardware.
dirk_bruere
not rated yet Oct 14, 2009
Don't any of these researchers do any reading? There is a large bulk of literature on the limits to computing. Try this for starters:
http://ftp.nada.k...Brains2/
USPorcupine
2 / 5 (9) Oct 14, 2009
There is no such thing as a speed limit in the computer world. It just goes to show how shortsighted some of our scientists are. We're using binary right now, which means the code has to go in order 100110100110010110 etc. All we have to do is use the color spectrum instead of electricity (which involves positive/negative charges ..0 or 1 ). If we use the color spectrum instead of electricity, instead of sending 001111111011001011011100110000111010101001100001111001010101100 to the processor for example, we would just send the shade of green in one blip. There is a lot of room for speed. And there are a lot of other tricks we can use in the computer world.
So once again, THERE IS NO SUCH THING AS A SPEED LIMIT.
dirk_bruere
5 / 5 (1) Oct 14, 2009
Limits to computing in terms of energy for non-reversible computation is about 10^21 bit change operations per Watt at room temp. For 1kg of matter compressed into a Black Hole the lifetime would be around 10^-19 s, during which 10^31 operations can be performed on 10^16 bits. Residual energy dissipation would be about 40 megatonne equivalent.
hooloovoo
3.3 / 5 (7) Oct 14, 2009
The funny thing is, that old Kaypro did office tasks just as fast as today's Excel and Word.


Can your old machine convert a movie between HD video formats in a matter of minutes (or at all)? Can it sort through and analyze data coming in at the rate of GB/s? Can it perform any of the functions that modern computers do OTHER than office tasks?

I thought not. You're seriously oversimplifying things, and I think you know it.
chg9389
5 / 5 (8) Oct 14, 2009
Fortunately, Microsoft is working on this problem and is frantically researching ways to keep Windows bloated enough to absorb whatever new computing power becomes available!
Yogaman
5 / 5 (2) Oct 14, 2009
Since the cycle-frugal days of 8080s, we've found plenty of ways to spend Moore's bounty.

The old office tasks were much more constrained in terms of touchy-feely things like styles, colors, graphics and layout, productivity things like accessing data from other documents over (inter-) networks, etc. And dot-matrix printing was expensive, slow and ugly.

Now we have significant aesthetic and cost improvements: [Ooh, Aero! Pretty, pretty! Me want! Me want check my Facebook friends in separate tab during this boring YouTube video.]

But I think there are several orders of magnitude left before human demand for compute cycles is satiated.

[Now that we've passed the Uncanny Valley, me want faster WiMAX to experience 3D-HD roboporn with fully interactive brain implants & bodysuit, in back seat while robot drives.]

Maybe by that time, other, more autonomous robots will be smart enough to put us out of our misery.
trantor
5 / 5 (1) Oct 15, 2009
Alternatively, you can just build huge quantum computers and place them in hyperspace, leaving them connected to small input/output in our universe. That way, there is no speed limit, since you can build BIGGER.

At least that's what Isaac Asimov did with the Multivac in his short story THE LAST QUESTION.
Bob_Kob
not rated yet Oct 15, 2009
Hyperspace solves every problem. I don't understand why we don't drop what we're doing and try to make hyperspace a reality asap :)
Paradox
5 / 5 (1) Oct 15, 2009
Fortunately, Microsoft is working on this problem and is frantically researching ways to keep Windows bloated enough to absorb whatever new computing power becomes available!


HaHaHaha! That's Funny!

Seriously though, this article is kind of like the "Man will never fly" statement. Someone will figure out a way around that limit. No question about it in my mind.
Bob_Kob
1 / 5 (1) Oct 15, 2009
Well in my mind there is no way around the speed of light, so the time taken for signals to reach other parts of the chip or chips is the limiting factor.
Mister_Sinister
1.3 / 5 (3) Oct 15, 2009
I'm not sure if you people are aware, but NOTHING goes faster than light - had that happened, all our physics would be wrong, which it is not, since we have managed so much with its correct predictions. Porcupine suggested a photonics method - but that's not faster than light, because that is light itself. No useful information can travel faster than light - not even time can travel faster or slower than light, since light is its metronome. Remember, the length of 1 second is 10^8 meters - the speed of light in one second.
CreepyD
4 / 5 (1) Oct 15, 2009
But we can transfer the same data more efficiently.
Two computers working at the speed of light: one using binary and one using a more efficient form of communication. The binary one will be much, much slower. It's not just about pure speed.
Bob_Kob
3 / 5 (2) Oct 15, 2009
The article clearly stated that even using the most efficient method possible, in the end you are still sending packets of data, which take time to travel.
Royale
not rated yet Oct 15, 2009
@trantor
I really like the Asimov reference. Hopefully we can get there in my lifetime! (Although I doubt it. I think we'll end up in a 'state of the art & stone age' world at the same time, due to war, ultimately driven by religious bickering). Getting to the level you suggest would be amazing though. Asimov could have been right. Perhaps the big bang came about through science becoming ALMOST infinite in its processing capabilities.
3432682
1 / 5 (2) Oct 15, 2009
The future will be the FPGA - field-programmable gate array - instead of computers that reset their CPUs on every cycle. FPGAs do cycles, but instead of changing the core on every cycle, the computation flows through a series of circuits, finding answers in single cycles instead of thousands. The limits will be the size of the FPGA circuits and the handling of I/O.
Damon_Hastings
not rated yet Oct 16, 2009
The limit in this article only applies to a single processor. As this limit is approached, the competition is also on to use fewer and fewer particles per processor, connect all those processors in more and more seamless ways, and dissipate heat efficiently. We might get these processors down to where they're smaller than a virus and can be packed together tighter than a drum. Then we just have to make the whole system the size of a galaxy (gravitational collapse notwithstanding). And then ask it to find a way to ring up God. :-)
zilog4082
not rated yet Oct 16, 2009
@gwrede
""Anyway, 40 times doubling the current performance, it's unimaginable!" Yes. 25 years ago I was using a Kaypro-II, 0.002GHz 8-bit CPU, 0.000064 Gigs of RAM, and no hard drive."

You faced just 12 times doubling that performance; it is around us now.

40 times doubling that (25-year-old) performance would be 2.2 EHz (exahertz, or 2199023255 GHz). EM radiation at that (2.2 EHz) frequency has a wavelength of less than 0.14 nm. To compensate for those 25 years today, multiply this figure by 4096.

2^35 or 2^40 is such an improvement that we have never faced anything like it since the beginning of computing.
nxtr
5 / 5 (1) Oct 16, 2009
What the "geniuses" who always love to predict the future fail to take into consideration is that the process of making a processor faster and smaller is not limited to their tunnel-vision concepts of processors.

By using trinary or quaternary bits in 3D DNA style arrangements, you would increase the processing power by a cubic factor, and reduce its size the same amount.

Its like someone in the 20's saying that waterships will never exceed 50-60 KPH because of cavitation around the propellor. Jet propulsion? Oh, gee I guess they didn't think of that when they confidently made their decree. Why? Because propellors were all they knew. Jet propulsion wasn't INVENTED yet.

Why must these people opine so convincingly when their thought processes are trapped in 2009. Tragic.
gwrede
1 / 5 (2) Oct 17, 2009
@zilog4082
2^35 or 2^40 is such an improvement that we have never faced anything like it since the beginning of computing

True. But consider this: when I was in elementary school (early sixties), we didn't have calculators, so doing a floating-point multiplication on pencil and paper (with, say, a dozen digits of precision) took some three minutes, for non-engineers. Today my desktop does 7 Gflops, which is a 2^40 improvement in speed!

At that time we could have said that "we never faced it since the beginning of computing", too. After all, never before had so many children had the ability to do that kind of arithmetic.

Nobody thought we'd see a 2^40 improvement in 40 years.

Imagine what seti@home & co could do another 40 years from now! All the world's "unimaginable" hardware combined to solve some amazing task! (But, please, not "does God exist?")
MorituriMax
1 / 5 (2) Oct 17, 2009
Not to be simplistic, but don't all these limits described above presuppose that we are using and making the future computers in the same way we do now?

Sure, sailing ships can't and never will be able to reach orbit no matter how many masts we build into them. But rocket ships can.

Today's computers = sailing ships
Tomorrow's computers = ?
Bob_Kob
1 / 5 (1) Oct 18, 2009
But you guys don't understand, there is a limit! Perhaps not the same as in this article, but the speed of light is something that cannot be broken.

Sure we can have unlimited powerful computers that are parallel, but they won't be fast. Their speed is limited.
woodland_spirit
not rated yet Oct 19, 2009
I know what software it will be using in 75 years' time: Gnome Version 50. Guess what, in 80 years' time it will be as sluggish as hell, then what will we do?
Paradox
not rated yet Oct 21, 2009
Well in my mind there is no way around the speed of light, so the time taken for signals to reach other parts of the chip or chips is the limiting factor.

I'm not sure if you people are aware, but NOTHING goes faster than light - had that happened, all our physics would be wrong, which it is not, since we have managed so much with its correct predictions. Porcupine suggested a photonics method - but that's not faster than light, because that is light itself. No useful information can travel faster than light - not even time can travel faster or slower than light, since light is its metronome. Remember, the length of 1 second is 10^8 meters - the speed of light in one second.


If you actually read the article, you will see that it is about Moore's Law, and not necessarily about the speed of light.
Mr_Man
not rated yet Oct 22, 2009
I am really upset that the proposed limit is not stated.


I was thinking the same thing after reading the article. It is like having a movie where the conflict in the story has been edited out...
