Researchers create a new type of computer that can solve problems that are a challenge for traditional computers

Stanford researchers create new special-purpose computer that may someday save us billions
Post-doctoral scholar Peter McMahon, left, and visiting researcher Alireza Marandi examine a prototype of a new type of light-based computer. Credit: L.A. Cicero

The processing power of standard computers is likely to reach its maximum in the next 10 to 25 years. Even at this maximum power, traditional computers won't be able to handle a particular class of problem that involves combining variables to come up with many possible answers and searching for the best one.

Now, an entirely new type of computer that blends optical and electrical processing, reported Oct. 20 in the journal Science, could get around this impending processing constraint and solve those problems. If it can be scaled up, this non-traditional computer could save costs by finding better solutions to problems that have an incredibly high number of possible solutions.

"This is a machine that's in a sense the first in its class, and the idea is that it opens up a sub-field of research in the area of non-traditional computing machines," said Peter McMahon, postdoctoral scholar in applied physics and co-author of the paper. "There are many, many questions that this development raises and we expect that over the next few years, several groups are going to be investigating this class of machine and looking into how this approach will pan out."

The traveling salesman problem

There is a special type of problem - called a combinatorial optimization problem - that traditional computers find difficult to solve, even approximately. An example is what's known as the "traveling salesman" problem, wherein a salesman has to visit a specific set of cities, each only once, and return to the first city, and the salesman wants to take the most efficient route possible. This problem may seem simple but the number of possible routes increases extremely rapidly as cities are added, and this underlies why the problem is difficult to solve.
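To see just how rapidly the route count grows: if we fix the starting city and ignore travel direction, n cities admit (n - 1)!/2 distinct tours. A quick sketch (illustrative, not from the paper):

```python
import math

def num_tours(n_cities: int) -> int:
    """Distinct round-trip tours: fix the start city and ignore
    direction, leaving (n - 1)! / 2 possibilities."""
    return math.factorial(n_cities - 1) // 2

for n in (5, 10, 15, 20):
    print(f"{n} cities: {num_tours(n):,} tours")
```

Already at 20 cities the count exceeds 10^16 tours, which is why exhaustive search becomes hopeless so quickly.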

"Those problems are challenging for standard computers, even supercomputers, because as the size grows, at some point, it takes the age of the universe to search through all the possible solutions," said Alireza Marandi, a former postdoctoral scholar at Stanford and co-author of the study. "This is true even with a supercomputer because the growth in possibilities is so fast."

It may be tempting to simply give up on the traveling salesman, but solving such hard optimization problems could have enormous impact in a wide range of areas. Examples include finding the optimal path for delivery trucks, minimizing interference in wireless networks, and determining how proteins fold. Even small improvements in some of these areas could result in massive monetary savings, which is why some scientists have spent their careers creating algorithms that produce very good approximate solutions to this type of problem.

An Ising machine

The Stanford team has built what's called an Ising machine, named for a mathematical model of magnetism. The machine acts like a reprogrammable network of artificial magnets where each magnet only points up or down and, like a real magnetic system, it is expected to tend toward operating at low energy.

The theory is that, if the connections among a network of magnets can be programmed to represent the problem at hand, once they settle on the optimal, low-energy directions they should face, the solution can be derived from their final state. In the case of the traveling salesman, each artificial magnet in the Ising machine represents the position of a city in a particular path.
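In symbols, the Ising machine seeks spin values s_i = ±1 that minimize an energy of the form E = -Σ J_ij s_i s_j, where the couplings J_ij encode the problem. For a handful of spins, the ground state can be found by brute force; the sketch below illustrates the model being minimized, not the machine's physical mechanism:

```python
from itertools import product

def ising_energy(spins, J):
    """E = -sum over i<j of J[i][j] * s_i * s_j (no external field)."""
    n = len(spins)
    return -sum(J[i][j] * spins[i] * spins[j]
                for i in range(n) for j in range(i + 1, n))

def ground_state(J):
    """Exhaustively search all 2^n spin configurations."""
    configs = product((-1, 1), repeat=len(J))
    best = min(configs, key=lambda s: ising_energy(s, J))
    return best, ising_energy(best, J)

# Three spins: ferromagnetic coupling between spins 0 and 1 (they
# prefer to align), antiferromagnetic between 1 and 2 (they prefer
# to oppose).
J = [[0,  1,  0],
     [1,  0, -1],
     [0, -1,  0]]
spins, e0 = ground_state(J)
print(spins, e0)
```

The exhaustive loop doubles in cost with every added spin, which is exactly the scaling wall the physical machine is meant to sidestep.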

Rather than using magnets on a grid, the Stanford team used a special kind of laser system, known as a degenerate optical parametric oscillator, that, when turned on, will represent an upward- or downward-pointing "spin." Pulses of the laser represent a city's position in a path the salesman could take. In an earlier version of this machine (published two years ago), the team members extracted a small portion of each pulse, delayed it and added a controlled amount of that portion to the subsequent pulses. In traveling salesman terms, this is how they program the machine with the connections and distances between the cities. The pulse-to-pulse couplings constitute the programming of the problem. Then the machine is turned on to try to find a solution, which can be obtained by measuring the final output phases of the pulses.

The problem in this previous approach was connecting large numbers of pulses in arbitrarily complex ways. It was doable but required an added controllable optical delay for each pulse, which was costly and difficult to implement.

Scaling up

The latest Stanford Ising machine shows that a drastically more affordable and practical version could be made by replacing the controllable optical delays with a digital electronic circuit. The circuit emulates the optical connections among the pulses in order to program the problem and the laser system still solves it.

Nearly all of the materials used to make this machine are off-the-shelf elements that are already used for telecommunications. That, in combination with the simplicity of the programming, makes it easy to scale up. Stanford's machine is currently able to solve 100-variable problems with any arbitrary set of connections between variables, and it has been tested on thousands of scenarios.

A group at NTT in Japan that consulted with Stanford's team has also created an independent version of the machine; its study was published alongside Stanford's in Science. For now, the Ising machine still falls short of beating the performance of traditional digital computers when it comes to combinatorial optimization. But it is gaining ground fast, and the researchers are looking forward to seeing what other work will be possible based on this breakthrough.

"I think it's an exciting avenue of exploration for finding alternative computers. It can get us closer to more efficient ways of tackling some of the most daunting computational problems we have," said Marandi. "So far, we've made a laser-based computer that can target some of these problems, and we have already shown some promising results."


More information: "A fully-programmable 100-spin coherent Ising machine with all-to-all connections," Science, DOI: 10.1126/science.aah5178
Journal information: Science

Citation: Researchers create a new type of computer that can solve problems that are a challenge for traditional computers (2016, October 20), retrieved 26 May 2019.

User comments

Oct 20, 2016
Why do computers need to be accurate to XXXXXXXXXXXXXXXXX digits? If a reasonable guess is all that is possible in many situations, a computer could be made to provide it.

Oct 21, 2016
The TSP reminds me of the human brain architecture. I know I'm reaching here, but could the human brain be a continuously running TSP with multi-nodal changes between solving cycles?

Oct 21, 2016
@rrrander, it's exactly those kind of heuristics that are used to "solve" something like the Traveling Salesman problem, and the algorithms are pretty good, scaling to millions of destinations.

Oct 21, 2016
@rrrander, many problems require very high accuracy for good reasons, but that isn't related to this article at all: numerical algorithms can easily provide such accuracy on traditional computers.

The TSP is an entirely different class of problem: one that can be solved _exactly_ only by trying out every possible solution. The sheer number of solutions to check make it impractical to solve on traditional computers. There are many algorithms to find near-optimal solutions that work pretty well on traditional computers, but since it's so hard to find the real optimal solution, you can never be sure just how close you are to the best.

Oct 21, 2016
@Robert J_ Miskines, it's nothing like that - it's not like the brain tries to minimize the total amount of signalling happening inside, or to minimize the signalling pathways.

However, the architecture of the brain is well-suited to find near-optimal solutions to problems like TSP, and in fact some of the most successful algorithms in this area are modelled after the brain.

Oct 21, 2016
A reasonable guess is all that is possible in many situations, a computer could be made to provide it

The problem is that a "reasonable guess" at how a protein folds doesn't get the protein to fold in exactly the way you need. The difference between a complex molecule that is medically effective and one that is not is not a matter of "close enough". It's a matter of: either it's exactly like THIS and it works...or it doesn't.

(Similarly in encryption/decryption...getting a key that is 'close' will not decrypt a message. Only the exact key will do)

TSP is just a stand-in for an entire class of problems. Don't take the pathing aspect of the problem too literally.

As for the article: I wonder how they prevent the machine from falling into local minima. Even magnetic fields can be configured into locally stable (but not globally optimal) configurations.
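The local-minima worry is a real one for any energy-minimizing search. One classical workaround is simulated annealing: accept some uphill moves while "hot" so the search can escape shallow valleys. Here is a toy sketch on a made-up 1-D landscape (illustrative only, not the paper's mechanism):

```python
import math
import random

# A toy 1-D energy landscape: a local minimum at index 2 (energy 2)
# and the global minimum at index 7 (energy 0).
energy = [5, 3, 2, 4, 6, 4, 1, 0, 2, 5]

def greedy_descent(start):
    """Always step to the lower-energy neighbour; stops at any minimum."""
    i = start
    while True:
        nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(energy)]
        best = min(nbrs, key=lambda j: energy[j])
        if energy[best] >= energy[i]:
            return i  # stuck: every neighbour is uphill
        i = best

def annealed_descent(start, steps=5000, t0=3.0, seed=1):
    """Occasionally accept uphill moves while the 'temperature' is
    high, so the walker can climb out of shallow local minima."""
    rng = random.Random(seed)
    i = start
    for step in range(steps):
        t = t0 * (1.0 - step / steps) + 1e-9  # linear cooling schedule
        j = min(len(energy) - 1, max(0, i + rng.choice((-1, 1))))
        d = energy[j] - energy[i]
        if d <= 0 or rng.random() < math.exp(-d / t):
            i = j
    return i

print(greedy_descent(0))  # gets stuck at the local minimum, index 2
```

Greedy descent from index 0 always halts at the local minimum; the annealed walker has a chance to cross the barrier and reach the global one.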

Oct 21, 2016

so three reasons really

1) we currently have two standards for precision - single precision and double precision.
single precision is precise up to like 8 decimal places -- which is good enough for an 8th grader, but you would start getting problems wrong in high school, think log, ln and trig problems. Also, converting exchange rates would be wrong if you had three different currencies to switch between. Also - you could not calculate the new year for the Chinese calendar.

double precision gets you precision out to 16ish decimal places and solves all those normal everyday issues.
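The single/double gap the commenter describes is easy to demonstrate. The sketch below round-trips a value through 32-bit storage using Python's struct module (an illustration for this thread, not part of the article):

```python
import struct

def to_float32(x: float) -> float:
    """Round-trip a Python float (64-bit) through 32-bit IEEE storage,
    discarding the extra precision a double carries."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = 1.0 / 3.0
print(f"float64: {x:.20f}")
print(f"float32: {to_float32(x):.20f}")
```

The two printed values agree for roughly the first 7 digits and then diverge, which is the single-precision limit in action.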

Oct 22, 2016
2) so how a computer computes decimal places is an issue -- remember back in the day when someone said that computers are binary systems -- well, they are -- so in binary there are fractions between 0 and 1 that cannot be expressed exactly.

binary fractions less than 1: 1/2, 1/4, 1/8, 1/16, 1/32 and so on -- those are all your options

to get 1/3 we trick the computer and use approximations, with a rule to guide us (IEEE-something, I forget)
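The standard the commenter is reaching for is IEEE 754, and the point about binary fractions can be checked directly (a quick illustration for this thread):

```python
from fractions import Fraction

# 0.1 has no finite binary expansion, so the stored float is the
# nearest representable value, not exactly one tenth:
print(Fraction(0.1))

# Sums of negative powers of two ARE exact:
print(0.5 + 0.25 + 0.125 == 0.875)

# 1/3 stored as a float is likewise only an approximation:
print(f"{1/3:.20f}")
```

This is why 0.1 + 0.2 famously fails to equal 0.3 in binary floating point, even at double precision.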
