Perfect communication with imperfect chips

Aug 05, 2011

One of the triumphs of the information age is the idea of error-correcting codes, which ensure that data carried by electromagnetic signals — traveling through the air, or through cables or optical fibers — can be reconstructed flawlessly at the receiving end, even when they’ve been corrupted by electrical interference or other sources of what engineers call “noise.”

For more than 60 years, the analysis of error-correcting codes has assumed that, however corrupted a signal may be, the circuits that decode it are error-free. In the next 10 years, however, that assumption may have to change. In order to extend the battery life of portable computing devices, manufacturers may soon turn to low-power signal-processing circuits that are themselves susceptible to noise, meaning that errors sometimes creep into their computations.

Fortunately, in the July issue of IEEE Transactions on Information Theory, Lav Varshney PhD ’10, a research affiliate at MIT’s Research Laboratory of Electronics, demonstrates that some of the most commonly used codes in telecommunications can still ensure faithful transmission of information, even when the decoders themselves are noisy. The same analysis, which is adapted from his MIT thesis, also demonstrates that memory chips, which present the same trade-off between energy efficiency and reliability that signal-processing chips do, can preserve data indefinitely even when their circuits sometimes fail.

According to the semiconductor industry’s 15-year projections, both memory and computational circuits “will become smaller and lower-power,” Varshney explains. “As you make circuits smaller and lower-power, they’re subject to noise. So these effects are starting to come into play.”

Playing the odds

The theory of error-correcting codes was established by Claude Shannon — who taught at MIT for 22 years — in a groundbreaking 1948 paper. Shannon envisioned a message sent through a communications medium as a sequence of bits — 0s and 1s. Noise in the channel might cause some of the bits to flip or become indeterminate. An error-correcting code would consist of additional bits tacked on to the message bits and containing information about them. If message bits became corrupted, the extra bits would help describe what their values were supposed to be.
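
For a concrete sense of how the extra bits help, here is a minimal sketch in Python of a toy triple-repetition code — far simpler (and far less efficient) than the codes the article goes on to describe. The function names and the 10 percent flip probability are arbitrary choices for the illustration.

```python
# A minimal toy illustration, not one of the codes discussed in the article:
# a triple-repetition code sends each message bit three times, and a majority
# vote at the receiver corrects any single flipped bit within a group of three.
import random

def encode(bits):
    """Repeat each message bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def noisy_channel(bits, flip_prob=0.1):
    """Flip each transmitted bit independently with probability flip_prob."""
    return [bit ^ 1 if random.random() < flip_prob else bit for bit in bits]

def decode(received):
    """Take a majority vote within each group of three received bits."""
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [random.randint(0, 1) for _ in range(16)]
recovered = decode(noisy_channel(encode(message)))
print(recovered == message)  # usually True, despite the corrupted channel
```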

The longer the error-correcting code, the less efficient the transmission of information, since more total bits are required for a given number of message bits. To date, the most efficient codes known are those discovered in 1960 by MIT professor emeritus Robert Gallager, which are called low-density parity-check codes — or sometimes, more succinctly, Gallager codes. Those are the codes that Varshney analyzed.
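
One rough way to see that efficiency trade-off is the code rate: the number of message bits divided by the total number of transmitted bits. The figures below are standard textbook values, shown only for illustration, not numbers from the article.

```python
# Code rate = message bits / total transmitted bits (higher is more efficient).
repetition_rate = 1 / 3  # the triple-repetition sketch above: 1 message bit per 3 sent
hamming_rate = 4 / 7     # the classic (7,4) Hamming code: 4 message bits per 7 sent
print(f"repetition: {repetition_rate:.2f}, Hamming(7,4): {hamming_rate:.2f}")
```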

The key to his new analysis, Varshney explains, was not to attempt to quantify the performance of particular codes and decoders but rather to look at the statistical properties of whole classes of them. Once he was able to show that, on average, a set of noisy decoders could guarantee faithful reconstruction of corrupted data, he was then able to identify a single decoder within that set that met the average-performance standard.
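
The sketch below is a rough Monte Carlo illustration of that statistical viewpoint, not a reproduction of Varshney's analysis: a simple bit-flipping decoder for a small parity-check code (the (7,4) Hamming code standing in for a low-density parity-check code) whose internal parity computations are themselves flipped at random, with reliability estimated by averaging over many trials. All parameters and helper names are invented for the example.

```python
# Toy sketch: a bit-flipping decoder whose own check computations err with
# probability GATE_ERR, modeling a noisy circuit. We measure, over many random
# trials, how often the decoder still recovers the transmitted codeword.
import random

# Parity-check matrix of the (7,4) Hamming code, used here as a small
# stand-in for a low-density parity-check code.
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]
N_BITS = 7
CHANNEL_ERR = 0.05   # probability the channel flips a transmitted bit
GATE_ERR = 0.01      # probability a single check computation in the decoder errs
MAX_ITERS = 20
TRIALS = 10_000

def noisy_check(row, word):
    """Evaluate one parity check, but report the wrong answer with probability GATE_ERR."""
    parity = sum(h * b for h, b in zip(row, word)) % 2
    return parity ^ (1 if random.random() < GATE_ERR else 0)

def decode(received):
    word = list(received)
    for _ in range(MAX_ITERS):
        unsatisfied = [noisy_check(row, word) for row in H]
        if not any(unsatisfied):
            break
        # For each bit, the fraction of its parity checks that currently fail.
        scores = [
            sum(u for u, row in zip(unsatisfied, H) if row[j]) /
            sum(row[j] for row in H)
            for j in range(N_BITS)
        ]
        word[max(range(N_BITS), key=lambda j: scores[j])] ^= 1  # flip the worst bit
    return word

successes = 0
for _ in range(TRIALS):
    sent = [0] * N_BITS  # the all-zero codeword is valid for any linear code
    received = [b ^ (1 if random.random() < CHANNEL_ERR else 0) for b in sent]
    successes += decode(received) == sent

print(f"recovered {successes / TRIALS:.1%} of transmissions "
      f"despite channel and decoder noise")
```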

The noisy brain

Today, updated and optimized versions of Gallager’s 1960 codes are used for error correction by many cellphone carriers. Those codes would have to be slightly modified to guarantee optimal performance with noisy circuits, but “you use essentially the same decoding methodologies that Gallager did,” Varshney says. And since the codes have to correct for errors in both transmission and decoding, they would also yield lower transmission rates (or require higher-power transmitters).

Shashi Chilappagari, an engineer at Marvell Semiconductor, which designs signal-processing chips, says that, like Gallager’s codes, the question of whether noisy circuits can correct transmission errors is decades old: it attracted the attention of computing pioneer John von Neumann in the 1950s. “This is kind of a surprising result,” Chilappagari says. “It’s not very intuitive to say that this kind of scheme can work.” Chilappagari points out, however, that, like most analyses of error-correcting codes, Varshney’s draws conclusions by considering what happens as the length of the encoded messages approaches infinity. Chipmakers would be reluctant to adopt the coding scheme Varshney proposes, Chilappagari says, without “time to test it and see how it works on a given-length code.”

While researching his thesis, Varshney noticed that the decoder for Gallager’s codes — which in fact passes data back and forth between several different decoders, gradually refining its reconstruction of the original message — has a similar structure to ensembles of neurons in the brain’s cortex. In ongoing work, he’s trying to determine whether his analysis of error-correcting codes can be adapted to characterize information processing in the brain. “It’s pretty well established that neural things are noisy — neurons fire randomly … and there’s other forms of noise as well,” Varshney says.


More information: Lav Varshney, IEEE Transactions on Information Theory (July 2011), DOI: 10.1109/TIT.2011.2145870
