Encryption is less secure than we thought

Aug 14, 2013 by Larry Hardesty
Muriel Médard is a professor in the MIT Department of Electrical Engineering. Credit: BRYCE VICKMARK

Information theory—the discipline that gave us digital communication and data compression—also put cryptography on a secure mathematical foundation. Since 1948, when the paper that created information theory first appeared, most information-theoretic analyses of secure schemes have depended on a common assumption.

Unfortunately, as a group of researchers at MIT and the National University of Ireland (NUI) Maynooth demonstrated in a paper presented at the recent International Symposium on Information Theory, that assumption is false. In a follow-up paper being presented this fall at the Asilomar Conference on Signals, Systems, and Computers, the same team shows that, as a consequence, the wireless card readers used in many keyless-entry systems may not be as secure as previously thought.

In information theory, the concept of information is intimately entwined with that of entropy. Two digital files might contain the same amount of information, but if one is shorter, it has more entropy. If a compression algorithm—such as WinZip or gzip—worked perfectly, the compressed file would have the maximum possible entropy. That means that it would have the same number of 0s and 1s, and the way in which they were distributed would be totally unpredictable. In information-theoretic parlance, it would be perfectly uniform.
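The notion of uniformity can be made concrete with a short Python sketch (illustrative only, not from the paper) that estimates the empirical Shannon entropy of a byte string, in bits per byte:

```python
import math
from collections import Counter

def entropy_bits_per_byte(data: bytes) -> float:
    """Empirical Shannon entropy of a byte string, in bits per byte.
    A perfectly uniform source scores 8.0; real files score less."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(entropy_bits_per_byte(bytes(range(256))))  # uniform bytes: exactly 8.0
print(entropy_bits_per_byte(b"the quick brown fox jumps over the lazy dog"))
```

English text lands well below 8.0, which is exactly the "slight deviation from uniformity" the researchers examine.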

Traditionally, information-theoretic analyses of secure schemes have assumed that the source files are perfectly uniform. In practice, they rarely are, but they're close enough that it appeared that the standard still held.

"We thought we'd establish that the basic premise that everyone was using was fair and reasonable," says Ken Duffy, one of the researchers at NUI. "And it turns out that it's not." On both papers, Duffy is joined by his student Mark Christiansen; Muriel Médard, a professor of electrical engineering at MIT; and her student Flávio du Pin Calmon.

The problem, Médard explains, is that information-theoretic analyses of secure systems have generally used the wrong notion of entropy. They relied on so-called Shannon entropy, named after the founder of information theory, Claude Shannon, who taught at MIT from 1956 to 1978.

Shannon entropy is based on the average probability that a given string of bits will occur in a particular type of digital file. In a general-purpose communications system, that's the right type of entropy to use, because the characteristics of the data traffic will quickly converge to the statistical averages. Although Shannon's seminal 1948 paper dealt with cryptography, it was primarily concerned with communication, and it used the same measure of entropy in both discussions.

But in cryptography, the real concern isn't with the average case but with the worst case. A codebreaker needs only one reliable correlation between the encrypted and unencrypted versions of a file in order to begin to deduce further correlations. In the years since Shannon's paper, information theorists have developed other notions of entropy, some of which give greater weight to improbable outcomes. Those, it turns out, offer a more accurate picture of the problem of codebreaking.

When Médard, Duffy and their students used these alternate measures of entropy, they found that slight deviations from perfect uniformity in source files, which seemed trivial in the light of Shannon entropy, suddenly loomed much larger. The upshot is that a computer turned loose to simply guess correlations between the encrypted and unencrypted versions of a file would make headway much faster than previously expected.
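The effect shows up even in a toy experiment (my own sketch, not the authors' analysis): enumerate every n-bit string produced by a slightly biased source and count how many guesses an optimal attacker, trying strings in decreasing order of probability, needs on average.

```python
import math
from itertools import product

def expected_guesses(p: float, n: int) -> float:
    """Average number of guesses to identify an n-bit string drawn from an
    i.i.d. Bernoulli(p) source, when the attacker guesses candidates in
    decreasing order of probability (the optimal strategy)."""
    probs = sorted(
        (p ** sum(b) * (1 - p) ** (n - sum(b)) for b in product((0, 1), repeat=n)),
        reverse=True,
    )
    return sum(rank * q for rank, q in enumerate(probs, start=1))

n = 12
uniform = expected_guesses(0.5, n)  # uniform source: (2**n + 1) / 2 on average
biased = expected_guesses(0.7, n)   # slight bias: markedly fewer guesses
print(uniform, biased)
```

Even at p = 0.7, barely off uniform by Shannon's average-case measure, the expected guesswork falls well below the uniform case, and the gap widens exponentially with n.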

"It's still exponentially hard, but it's exponentially easier than we thought," Duffy says. One implication is that an attacker who simply relied on the frequencies with which letters occur in English words could probably guess a user-selected password much more quickly than was previously thought. "Attackers often use graphics processors to distribute the problem," Duffy says. "You'd be surprised at how quickly you can guess stuff."
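A toy version of that letter-frequency attack can be sketched as follows (the five-letter alphabet and its frequencies are invented for illustration):

```python
from itertools import product

# Invented relative frequencies for a toy five-letter alphabet.
freq = {"e": 0.42, "a": 0.28, "t": 0.20, "z": 0.06, "q": 0.04}

def guess_rank(password: str) -> int:
    """Number of guesses an attacker needs when trying candidates in
    decreasing order of letter-frequency probability."""
    def prob(word: str) -> float:
        p = 1.0
        for ch in word:
            p *= freq[ch]
        return p
    candidates = sorted(
        ("".join(w) for w in product(freq, repeat=len(password))),
        key=prob, reverse=True,
    )
    return candidates.index(password) + 1

print(guess_rank("eeee"))  # → 1: all common letters, first guess
print(guess_rank("qqqq"))  # → 625: all rare letters, dead last
```

Passwords built from common letters fall almost immediately; only strings of rare letters survive deep into the guess order, which is why frequency statistics speed up real attacks.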

In their Asilomar paper, the researchers apply the same type of mathematical analysis in a slightly different way. They consider the case in which an attacker is, from a distance, able to make a "noisy" measurement of the password stored on a credit card with an embedded chip or a key card used in a keyless-entry system.

"Noise" is the engineer's term for anything that degrades an electromagnetic signal—such as physical obstructions, out-of-phase reflections or other electromagnetic interference. Noise comes in lots of different varieties: The familiar white noise of sleep aids is one, but so are pink noise and black noise, along with more exotic-sounding types, such as power-law noise or Poisson noise.

In this case, rather than prior knowledge about the statistical frequency of the symbols used in a password, the attacker has prior knowledge about the probable noise characteristics of the environment: Phase noise with one set of parameters is more probable than phase noise with another set of parameters, which in turn is more probable than Brownian noise, and so on. Armed with these statistics, an attacker could infer the password stored on the card much more rapidly than was previously thought.

"Some of the approximations that we're used to making, they make perfect sense in the context of traditional communication," says Matthieu Bloch, an assistant professor of electrical and computer engineering at the Georgia Institute of Technology. "You design your system in a framework, and then you test it. But for crypto, you're actually trying to prove that it's robust to things you cannot test. So you have to be sure that your assumptions make sense from the beginning. And I think that going back to the assumptions is something people don't do often enough."

Bloch doubts that the failure of the uniformity assumption means that cryptographic systems in wide use today are fundamentally insecure. "My guess is that it will show that some of them are slightly less secure than we had hoped, but usually in the process, we'll also figure out a way of patching them," he says. The MIT and NUI researchers' work, he says, "is very constructive, because it's essentially saying, 'Hey, we have to be careful.' But it also provides a methodology to go back and reanalyze all these things."

The paper is titled "Brute force searching, the typical set, and guesswork."


More information: arxiv.org/pdf/1301.6356.pdf



User comments

Aug 14, 2013
So can we assume that hacking the first layer of encryption would be much harder if the information in it was already encrypted?
Aug 14, 2013
One-time pads. I'm predicting it will come down to this, as they are the (only?) encryption scheme that is guaranteed to be mathematically unbreakable without the key.

...with some serious drawbacks:
1) They triple the data involved: you have the data to be encrypted plus two copies of a key of the same length as that data, one at the sender and one at the recipient.
2) It's only good for P2P.
3) You need to be able to securely exchange the key beforehand.
4) The key must never be reused.
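The scheme and its caveats amount to a few lines of Python (an illustrative sketch, not a vetted implementation):

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    """One-time pad: XOR every byte with a same-length, truly random,
    never-reused key. Encryption and decryption are the same operation."""
    if len(key) != len(data):
        raise ValueError("key must be exactly as long as the message")
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # must be pre-shared over a secure channel
ciphertext = otp(message, key)
assert otp(ciphertext, key) == message   # XOR is its own inverse
```

Every listed drawback is visible here: the key matches the message in length, both ends must already hold it, and reusing it would let an eavesdropper XOR two ciphertexts together and cancel the key out.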
Aug 14, 2013
Chrome, Firefox (all platforms) and Internet Explorer (Vista or later) support perfect forward secrecy using elliptic curve ephemeral Diffie-Hellman. ECC affords significant security improvements over regular RSA. Just need to get site owners to turn it on.

Geometric crypto in general is looking pretty hot.
Aug 14, 2013
So can we assume that hacking the first layer of encryption would be much harder if the information in it was already encrypted?

Multi-pass DES is used by a lot of folks; how effective it is at increasing security is a somewhat controversial topic: https://groups.go...9eKqEiAJ
Aug 14, 2013
One time pads...

But the key objective of modern encryption is to make it possible for people to exchange secret, authenticated messages without first having to securely share secret material like one-time pads. And one-time pads don't help at all for the most common encryption tasks, like enabling secure web protocols.
Aug 14, 2013
"Although Shannon's seminal 1948 paper dealt with cryptography, it was primarily concerned with communication, and it used the same measure of entropy in both discussions."

The above is incorrect.

"The information theory from that of Shannon involves the resolution of uncertainties concerning final outcome in the face of a repertoire of possible occurrences, the occurrences possibly varying in their probability of occurrence. The accent is on hypothetical TRANSMISSION of events defined abstractly, so that a temporal aspect is given to the theory. Thus, his work may more properly be termed TRANSMISSION theory - to distinguish it from information theory based on energy distribution - a distinction he was aware of."

T.W. Barrett

Journal of Sound and Vibration (1972) 20 (3), 407-412
Department of Physiology and Biophysics
University of Tennessee Medical Units, Memphis
Aug 15, 2013
Multiple iterations of the same encryption algorithm in no way guarantee increased robustness against attacks. Take the DES algorithm as an example. Although 3DES (3 passes of DES) is an incredibly widely used standard, 2DES (2 passes) is, because of the meet-in-the-middle attack, barely more secure than a single pass.

The very best attacks against encryption almost always target the implementation rather than the algorithm. A good example is WEP wifi. At its core it uses the same ARC4 algorithm as https/SSL. However, WEP's exceptionally weak handling of IVs (initialization vectors) makes it crackable, regardless of key length, in minutes on a modern CPU. Recently there was an SSL attack called BEAST that allowed attackers to decrypt a portion of https traffic, so that implementation is far from perfect itself.

Finally, encryption key lengths don't mean as much as you may think. You would be better off with a FIPS-140-3 AES-256 encrypted device than with one that uses Blowfish with a 1024-bit key.
Aug 15, 2013
How does this help remove Ball Boy Ballmer from Microsoft and putting him in a zoo where he belongs?