(Phys.org)—Computer scientists Christos Sakellariou and Peter Bentley, working together at University College London, have built a new kind of computer that runs instruction segments randomly rather than sequentially, resulting in a computer that, in theory, should never crash.
One of the main reasons computers crash is the way they execute instructions: sequentially. Code is written in step-by-step fashion, and the computer follows a counter that retrieves lines of code in order, executing each one before moving on to the next. Problems arise when the counter becomes confused, or when code that has been executed fails to return control so that the next line can run. To get around that problem, the researchers in Britain have built a computer that doesn't run sequentially at all. It runs chunks of information that are made up of both code and data, and does so in random fashion, removing the sequential-processing problem. The result, they say, is a computer that is able to repair itself on the fly and, in theory, will never crash.
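The idea of code-plus-data "systems" executed in random order, rather than by a program counter, can be sketched in a few lines of Python. This is a loose illustration only, not the researchers' actual design; the systems, their data and the completion flag are all invented for the example.

```python
import random

# Illustrative sketch only -- not the authors' design. Each "system"
# bundles its own data with the code that transforms it; systems are
# picked for execution at random rather than by a program counter.
systems = [
    {"data": 0, "code": lambda d: d + 1, "done": False},
    {"data": 10, "code": lambda d: d * 2, "done": False},
    {"data": 5, "code": lambda d: d - 3, "done": False},
]

while not all(s["done"] for s in systems):
    s = random.choice(systems)          # no fixed execution order
    if not s["done"]:
        s["data"] = s["code"](s["data"])
        s["done"] = True

print([s["data"] for s in systems])     # prints [1, 20, 2]
```

Because each system carries everything it needs, the final state is the same no matter which random order the loop happens to choose.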
The whole idea is based on nature's distributed error-correction abilities, as demonstrated by such brilliant constructs as the human brain. As people exist, they think, react and respond. They do all manner of things, none of which occurs as the result of a sequential processor in a central part of the brain. Instead, things are done in a distributed manner, with different biological processors working on different things at the same time. To make this happen with a computer, the researchers built their machine on a Field Programmable Gate Array (FPGA), a piece of electronics that serves as a sort of traffic cop. Its main job is to make sure that different segments, or "systems" as the researchers call them, get called on, albeit in random fashion, and to allocate a place for them to run. One of the benefits of such a design is that no system has to wait for another to finish before running, which means the computer can run several systems at the same time. Thus, the FPGA is a resource manager, though it also manages the information that flows between systems.
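The traffic-cop role described above can be sketched as a scheduler that, each cycle, picks systems at random and gives each a free execution slot, so no system waits on any other. The slot count, the named systems and their "work units" are all assumptions made up for this illustration; the real hardware does this in parallel on the FPGA rather than in a Python loop.

```python
import random

# Hypothetical sketch of the scheduler role the article ascribes to
# the FPGA: each cycle it selects systems at random and assigns each
# a free "slot" to run in; no system waits for another to finish.
SLOTS = 2                                           # concurrent slots (assumed)
pending = {"sensor": 3, "logger": 2, "motor": 1}    # invented systems, each
                                                    # with remaining work units
cycle = 0
while pending:
    chosen = random.sample(list(pending), k=min(SLOTS, len(pending)))
    for name in chosen:             # the chosen systems run "in parallel"
        pending[name] -= 1
        if pending[name] == 0:      # a finished system frees its slot
            del pending[name]
    cycle += 1

print("finished after", cycle, "cycles")
```

With six work units and two slots, the run always finishes, and never in fewer than three cycles, regardless of the random choices made.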
Because the systems are independent of one another, there is no crash if one of them is unable to carry out its instructions. Better than that, other systems can be introduced whose purpose is to detect problems with other systems and rerun them if necessary, or to change them slightly, if need be, to allow them to complete their assigned tasks. In the computer world, that's known as self-repairing code, and it's something many people would like to see in computers running in the real world. This new machine demonstrates that such a computer can be built.
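The detect-and-rerun idea can be illustrated with a tiny monitor sketch. Again this is an invented example, not the paper's implementation: the flaky task, the retry limit and the use of a fixed random seed are all assumptions made for demonstration.

```python
import random

# Illustrative sketch of the self-repair idea: a monitor "system"
# watches another system and reruns it on failure, so a single fault
# does not halt the machine. Not the paper's implementation.
random.seed(0)                          # fixed seed, for a reproducible demo

def flaky_task():
    """A task that fails some of the time (fault simulated randomly)."""
    if random.random() < 0.5:
        raise RuntimeError("transient fault")
    return "ok"

def monitor(task, retries=10):
    """Detect a failed system and rerun it instead of crashing."""
    for _ in range(retries):
        try:
            return task()
        except RuntimeError:
            continue                    # repair step: simply run it again
    return "gave up"

result = monitor(flaky_task)
print(result)
```

The key point is that the failure is absorbed by the monitor rather than propagating upward, which is why a fault in one system need not bring down the whole computer.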
More information: www0.cs.ucl.ac.uk/staff/ucacpjb/SABEC2.pdf