Researchers crack 'unassailable' encryption algorithm in two hours

May 20, 2014 by Emmanuel Barraud

(Phys.org) —A protocol based on "discrete logarithms", considered a candidate for the Internet's future security systems, has been cracked by EPFL researchers. Supposedly tamper-proof, it withstood the school's machines for just two hours.

Without cryptography, no one would dare to type their credit card number on the Internet. Security systems developed to protect the privacy of communication between seller and buyer are prime targets for hackers of all kinds, so encryption algorithms must be regularly strengthened. In universities, code-breaking research aims mostly at testing their robustness.

Most of these systems rely on "discrete logarithm problems" – very complex mathematical operations – to secure data transmissions. "There are several variants, based for example on prime numbers or other mathematical formulas," explains Arjen Lenstra, director of the Laboratory for Cryptologic Algorithms (LACAL) at EPFL. "Their complexity is such that they are deemed impossible to solve." Because of their effectiveness, industry, especially banking, uses them to encode transactions. "The danger lies in the fact that these systems are based on principles that we do not fully understand," states Arjen Lenstra. "If someone were to find out how to solve them all, the entire system would collapse."
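The "discrete logarithm problem" the article refers to can be stated concretely: given a generator g, a value h and a modulus p, find the exponent x with g^x ≡ h (mod p). A minimal Python sketch with toy parameters (real deployments use groups so large that this brute-force search is utterly infeasible):

```python
# Toy discrete-logarithm problem: given g, h and modulus p,
# find x such that g^x ≡ h (mod p).
def discrete_log(g, h, p):
    value = 1
    for x in range(p):          # brute force: feasible only for tiny p
        if value == h:
            return x
        value = (value * g) % p
    return None                 # h is not in the subgroup generated by g

# 3^x ≡ 13 (mod 17) has the solution x = 4 (3**4 = 81, and 81 % 17 = 13)
print(discrete_log(3, 13, 17))  # -> 4
```

Going the other way (computing g^x mod p from x) is cheap via modular exponentiation, which is exactly the asymmetry these cryptosystems depend on.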

In fact, a little over a year ago, cracks began to appear. Robert Granger, then at University College Dublin and since a member of LACAL, showed that, surprisingly, the problem's first phase can be solved very easily. Antoine Joux, a French scientist, independently had similar insights, and together with a French team then showed how to make the second and final phase 'very nearly' easy. Many other cryptographers then stepped in.

However, these methods only managed to overcome a very particular type of discrete logarithm. Nothing suggested that they could crack the industrial variants.

An EPFL team, together with Jens Zumbrägel from TU Dresden, therefore focused on a "family" of algorithms presented as candidates for the next generation of encryption keys, one that made use of "supersingular curves." "We proved that it would take only two hours for EPFL computers to solve a problem of this kind, whereas it was believed that it would take 40,000 times the age of the universe for all computers on the planet to do it!" explains Thorsten Kleinjung, a post-doctoral fellow at LACAL.

There is no need to panic, however: this system has not yet been deployed, and the EPFL researchers' success is not likely to be misused. "Basically, we just excluded this option from the search for a successor to current algorithms," said Arjen Lenstra, whose team will present its findings this August at the Crypto 2014 conference, which has accepted their paper.


More information: Robert Granger, Thorsten Kleinjung, Jens Zumbrägel. "Breaking '128-bit Secure' Supersingular Binary Curves (or how to solve discrete logarithms in F_{2^{4·1223}} and F_{2^{12·367}})." arXiv:1402.3668. arxiv.org/abs/1402.3668




User comments (29)

Expiorer
1.5 / 5 (8) May 20, 2014
Typing in credit card numbers was a problem before Bitcoin solved it.
IamVal
1.2 / 5 (10) May 20, 2014
when is the cryptography community going to learn?
the only way to ensure safe encryption is to use the data to be encrypted as the key and have enough procedurally unparallelizable data transformations to make automated brute force impractical; of course, redundant loops of these sufficiently complex processes are the next step. hide the key as a salt across multiple steps, decrypted with a set from the newly encrypted data, where the insertion doesn't affect that set..

that is to say, the key has to be in the complexity of the steps required to transform it, not in a physical set of text. where it takes minutes or hours to decrypt on a home computer when you know the key, which has to be simple and memorizable.

I'm an amateur cryptographer. the problem with this system is of course exponential inflation. until recently, storing more data than necessary and wasting flops on cryptography was an insane business practice.
antialias_physorg
3.7 / 5 (9) May 20, 2014
the only way to ensure a safe encryption is to use the data to be encrypted as the key

That sounds like a bad idea, because your key is then the ultimate crib.

and have enough proceedurally unparalellizable data transformations

That's the point of all crypto algorithms - and was even the point of those based on supersingular curves.

hide the key as a salt into multiple steps decrypted with a set from the newly encrypted data, where the insertion doesn't affect that set..

Using the same cryptographic method multiple times weakens the result - it doesn't strengthen it.

The only really (mathematically) safe method is a one-time pad. For that you need safe transmission of the key - and that is exactly how quantum cryptography works. It does not require exponential inflation of data (merely a doubling of the data you want to send).
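The one-time-pad scheme mentioned here is short enough to sketch in Python (an illustration of the XOR construction, not a hardened implementation; the stdlib `secrets` module supplies the random key):

```python
import secrets

def otp(data: bytes, key: bytes) -> bytes:
    # One-time pad: XOR every byte with a key byte of equal length.
    # Applying the same function again with the same key decrypts.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))   # truly random, used only once
ciphertext = otp(msg, key)

assert otp(ciphertext, key) == msg    # the same XOR undoes the encryption
assert len(ciphertext) == len(msg)    # the key merely doubles total data
```

The security argument is information-theoretic: with a truly random single-use key, every plaintext of the same length is equally consistent with the ciphertext.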
robert_platter
1.3 / 5 (4) May 20, 2014
Why not take the encrypted data and merge it into a unique photograph, similar to steganography? Without the original source picture there is no way of knowing whether the LS bit was a one or a zero before the merge. This would allow storage of data in an open format, but the data would be very difficult to decrypt without both the initial key and the photo that the LS bits were streamed from. Basically, this uses the least significant bits of the photo as a second key. I imagine photos would provide a great source of 'random' bits. Even similar photos should differ enough to create a unique bit stream.
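For reference, the least-significant-bit embedding this comment describes looks roughly like the following (hypothetical helper names; real steganography tools handle image formats, capacity and ordering, which are omitted here):

```python
# Hypothetical helpers for LSB embedding in pixel values (0-255).
def embed_bit(pixel: int, bit: int) -> int:
    # Overwrite the least significant bit of a pixel with a data bit.
    return (pixel & ~1) | (bit & 1)

def extract_bit(pixel: int) -> int:
    # Read the least significant bit back out.
    return pixel & 1

assert embed_bit(200, 1) == 201
assert extract_bit(embed_bit(77, 0)) == 0
```

As the reply below notes, this hides data rather than encrypting it: anyone who suspects the scheme can read the LSB stream directly.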

Whydening Gyre
5 / 5 (1) May 20, 2014
oops
antialias_physorg
3.8 / 5 (8) May 20, 2014
Why not take the encrypted data and merge it into a unique photograph, similar to steganography?

That's called "security by obscurity" (which is the worst possible way to do safe data transfer)
If you store it in the way you suggest you are also taking the steganography out of steganography (i.e. you're storing the data in plain sight...so you now have reduced it from "the worst possible way to hide data" to "not hiding data at all")

I imagine photos would provide a great source of 'random' bits.

Photos are anything but random. There is huge average correlation between neighboring pixels. Not least because of the mechanism by which the photo is taken (CCD chips)

If you want to "do random" use radioactive decay signals or similar.

Cryptography is nothing for the layman. Don't fool yourselves into thinking you could devise a secure system without a SERIOUS math degree and in-depth study of the subject.
Whydening Gyre
5 / 5 (1) May 20, 2014
Cryptography is nothing for the layman. Don't fool yourselves into thinking you could devise a secure system without a SERIOUS math degree and in-depth study of the subject.

Amen to that one, brother.
TAz00
2 / 5 (3) May 20, 2014
So if the latest and 'best' security algorithm lasted only 2 hours... what are we using today?
rkolter
4.3 / 5 (6) May 20, 2014
when is the cryptography community going to learn..

...
I'm an amateur cryptographer.


Trimmed your message down to the bare-bones. You don't earn the right to say "when will they ever learn" if you call yourself an amateur in the field whose methods you are trying to overthrow.
SoylentGrin
not rated yet May 20, 2014
So if the latest and 'best' security algorithm...


It was a candidate, not the latest in use or best in use. It's not even a candidate anymore.
antonima
1 / 5 (1) May 20, 2014
The article states that researchers 'proved' that 'EPFL computers' could crack this family of algorithms... what does that even mean? Did they actually crack the encryption? What is the power of 'EPFL computers'?
Urgelt
not rated yet May 20, 2014
"...what does that even mean? Did they actually crack the encryption? What is the power of 'EPFL computers'?"

I suspect (but do not know) that EPFL operates a supercomputer cluster. Not a famously capable one, but there are many such available to cryptology researchers.

The article doesn't make it clear what is precisely meant by 'cracking,' but I can guess at that, too. The researchers demonstrated they can break into an encrypted container using this encryption algorithm in roughly two hours on their supercomputer cluster. Actual time to decrypt any given container will vary around that mean. That would put this particular encryption algorithm in the 'broken' category - unsuitable for any serious encryption purpose.

It does not mean that any average guy with a laptop can break into a container encrypted by this algorithm. But the snoops at NSA would find it a trivial challenge, and so might some criminals.
malapropism
not rated yet May 20, 2014
the only way to ensure a safe encryption is to use the data to be encrypted as the key and have enough proceedurally unparalellizable data transformations so as to make automating and bruteforce impractical ... hide the key as a salt into multiple steps decrypted with a set from the newly encrypted data, where the insertion doesn't affect that set

You mean increasing the CPU cost of cracking the encryption - there are already several encryption methods that do this; some of them let you specify the number of encryption iterations, making correct decryption harder to reach by guesswork. The salt can't be "hidden" at some 'downstream' encryption position awaiting successful decryption, however, because you need the salt to begin the process. And anyway, having the salt isn't the fundamental blocker in these schemes - it's increasing the cost of hacking so much that you encourage the bad guys to try their luck elsewhere instead.

alfie_null
5 / 5 (2) May 21, 2014
In view of the whole point of the article, it's mildly amusing to see comments from a few laypersons who are so convinced they know how to do it better.
antialias_physorg
5 / 5 (1) May 21, 2014
I suspect (but do not know) that EPFL operates a supercomputer cluster. Not a famously capable one, but there are many such available to cryptology researchers.

The paper states that they used 52,240 core hours. EPFL has access to a BlueGene/Q supercomputer with 16,384 cores. (While that particular machine is not on the list, BlueGene/Q machines currently represent 4 of the top 10 fastest supercomputers.)
antialias_physorg
5 / 5 (1) May 21, 2014
computers' could crack this family of algorithms.. what does that even mean? Did they actually crack the encryption?

They showed that they could significantly reduce the complexity of the problem. If I read it correctly (and to be honest, I just browsed it): after their algorithmic improvements they could crack a message encrypted with a 128-bit key as easily as a message encrypted with a 59-bit key without the improvement (which is pretty significant).
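Taking those figures at face value (128-bit vs. an effective 59-bit work factor, as quoted in this comment rather than re-derived from the paper), the size of the shortcut is easy to compute:

```python
# Back-of-the-envelope comparison of the two security levels quoted above.
full = 2**128      # generic-attack work at the nominal security level
reduced = 2**59    # work after the algorithmic improvements
speedup = full // reduced

# The attack became about 2^69 times cheaper.
print(speedup == 2**69)                  # -> True
print(speedup.bit_length() - 1)          # -> 69
```

A 2^59 effort is squarely within reach of a modern cluster, which is why a scheme at that effective level counts as broken.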
persephone1961
not rated yet May 21, 2014
in Johnny Mnemonic (1995) the encryption code was 3 random images. . . https://www.youtu...UwbWDJbg
antialias_physorg
2.3 / 5 (3) May 21, 2014
Which is sort of pointless. Using an encryption algorithm multiple times on a message is not more secure than using it once (which is particularly easy to prove with bitwise addition of a random key).

If your key is even a tiny bit less than totally random (e.g. generated via an algorithm) then multiple passes will make the encryption LESS secure - because the correlations compound.
(This is one of the reasons why using multiple different algorithms in sequence is not a good idea. You're reducing security to the LEAST secure algorithm in your chain.)
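The bitwise-addition case is indeed easy to demonstrate: encrypting twice with two keys is exactly equivalent to encrypting once with the XOR of the keys, so the second pass adds nothing (toy Python sketch):

```python
def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Bitwise addition (XOR) of a key against the data, byte by byte.
    return bytes(d ^ k for d, k in zip(data, key))

msg = b"hello"
k1, k2 = b"ABCDE", b"VWXYZ"

# Two passes with k1 and then k2...
twice = xor_encrypt(xor_encrypt(msg, k1), k2)

# ...collapse into a single pass with the combined key k1 XOR k2.
combined = bytes(a ^ b for a, b in zip(k1, k2))
assert twice == xor_encrypt(msg, combined)
```

Because XOR is associative and self-inverse, an attacker facing the double encryption faces exactly one effective key, not two.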
bluehigh
1 / 5 (2) May 21, 2014
2 hours to crack a container for 1 transaction! Heaps more effective to drop a rootkit keystroke recorder through email bait. Or better still... get a copy of Blackshades.

Seriously, does anyone believe a 'man in the middle' has the resources to intercept and decode my $50 PayPal transaction? Just fear-mongering nonsense.

Oh, I know... the NSA or Euro Mafia might be decrypting my online shopping payments.

IamVal
1 / 5 (3) May 21, 2014
antialias, your understanding of contemporary crypto ideals is very nice, but ultimately stuck.

your stumbling point is in trying to avoid data inflation and repetitive correlation.
both are solutions when the key determines the steps of the algorithm. safe transfer is always going to be a problem, and until quantum computing actually rears its ugly head for the masses, memorizable passkeys are the way to go. Since they don't generate enough data to secure anything, you then use the data to be encrypted as the secondary key. Hide the memorized key as a non-random, data-ciphered salt in the first few steps for veracity checking, and run that through your system a few times. The complexity of figuring out which algorithm will be used with each byte adds to the key size exponentially.

then, when compiling, have a set of pre-processor directives that change the order of the non-parallelizable processes and you've got as secure an algorithm as you'll get.
IamVal
1 / 5 (3) May 21, 2014
that said, you're far too entrenched in the singular equation's simplicity.
you want a system that's mathematically provable, when in reality doing the studying and fully understanding (and publishing) the statistics about your routine is allowing access to its weaknesses. yours is contingent on simple but highly difficult problems.

where my ideal is a problem that is very difficult to understand but can be demonstrated in practice.
compounding a routine only adds correlation if the routine is static and not determined by the data to be transformed. Sure, you might be able to pick up all of my individual transformations over time, but you can't use them practically to solve any other ciphertext.

making the routine obscenely long for the end user breaks brute forcing. pgp requires less than .01 second to process a single decipherment with the key. My routines take 30 seconds to 2 minutes for the same. that's when you know the key and have the proper algo-switch.
IamVal
1 / 5 (3) May 21, 2014
I should also probably add that my ideals are not masses-friendly. This is a way to avoid targeted decryption for decades to come, where being able to avoid untargeted decryption is probably enough. the code has to be sent and compiled by every user, and the proper preprocessor tags have to be set for one to work against another..
truly, my ideals are supposed to protect against all adaptive brute-forcing techniques we're likely to face. even if they could decode one, it wouldn't necessarily get you any closer to understanding how the process works; you can't adapt to another one.
it's too much data with too many branches and no obvious patterning 30,000 loops in.
antialias_physorg
5 / 5 (1) May 21, 2014
your stumbling point is in trying to avoid data inflation and repetative co-relation.

I come from an information theory background. Data inflation does not increase information (you gain nothing in terms of secrecy/encryption strength and only bloat your data heap).
That's not some conjecture. That's just simple math. You can't cheat information theory or statistical analysis that way.
making the routine obscenely long for the end user breaks brute forcing.

No it does not if your routine is weak (as for example the family of algorithms which were shown to be weak in the article). The difficulty in breaking it does not increase linearly with the size of the key (as noted in the paper which you can access via the link at the end of the article). This is true of many algorithms. Make the key longer and sooner or later you gain no secrecy but just bloat your message (making it impractical to crypt/decrypt in a sensible amount of time).
Rupyro
1 / 5 (3) May 22, 2014
Solution: increase the severity of punishment for cyber crimes. The problem with the internet is that the perpetrators don't see the consequences of stealing 10,000 identities, destroying hundreds if not thousands of lives for a quick buck. I understand that some are just kids who do it for fun, so I think a 2-strike rule would be fair. At the end of the day, how are they any different from our friends on Wall Street...
antialias_physorg
5 / 5 (2) May 22, 2014
Increase the severity of punishment for cyber crimes.

Severity of punishment doesn't work as a deterrent, because there is one thing common to all criminals: they don't expect to get caught (otherwise they wouldn't do it).

If you don't expect to get caught, you don't care how high the punishment is (and most criminals are unaware of the extent of possible punishment before getting caught in the first place - so the severity of punishment doesn't affect their actions).
dedereu
not rated yet May 22, 2014
"The danger lies in the fact that these systems are based on principles that we do not fully understand" is a fundamental remark, which is related to chaos and behind to quantum machanic, the basis of our world, which allows coherently superposed parallel explorations of all the calculations at the same time, so that clasical keys are no more secure.

russell_russell
1 / 5 (2) May 22, 2014
If you want to "do random" use radioactive decay signals or similar. - AP

Help me understand this. What is random about radioactive decay signals? Leaving aside the cryptographic usefulness of "random", how could the quantum Zeno effect render it predictable?
someone11235813
5 / 5 (1) May 22, 2014
Without cryptography, no one would dare to type their credit card number on the Internet.


Correction, without cryptography banks would not issue credit cards in the first place seeing as it's their responsibility if their encryption is cracked and money stolen.
someone11235813
not rated yet May 22, 2014
"We proved that it would only take two hours for EPFL computers to solve a problem of this kind. Whereas it was believed that it would take 40'000 times the age of the universe for all computers on the planet to do it!"


Still it's within their error margins.
