# Researchers crack unassailable encryption algorithm in two hours

(Phys.org) —A protocol based on "discrete logarithms," considered one of the candidates for the Internet's future security systems, has been broken by EPFL researchers. Supposedly tamper-proof, it withstood the school's machines for only two hours.

Without cryptography, no one would dare to type their credit card number on the Internet. The security systems that protect communication between buyer and seller are prime targets for hackers of all kinds, so encryption algorithms must be strengthened regularly. In universities, code-breaking research aims mostly at testing that robustness.

Most of them rely on "discrete logarithm problems" – very complex mathematical operations – to secure data transmissions. "There are several variants, based for example on prime numbers or other mathematical formulas," explains Arjen Lenstra, director of the Laboratory for Cryptologic Algorithms (LACAL) at EPFL. "Their complexity is such that they are deemed impossible to solve." Because of their effectiveness, industry, especially banking, uses them to encode transactions. "The danger lies in the fact that these systems are based on principles that we do not fully understand," states Lenstra. "If someone were to find out how to solve them all, the entire system would collapse."
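To make the idea concrete, here is a toy sketch of the discrete logarithm problem (an editorial illustration with a deliberately tiny prime; real systems use moduli thousands of bits long, for which no such search is feasible):

```python
# Discrete logarithm, toy scale: given g, p and y = g**x mod p, recover x.
# Going "forward" (x -> y) is one fast modular exponentiation; going
# "backward" (y -> x) requires search, which only works here because p is tiny.
p = 1000003          # a small prime; real parameters are ~2048 bits
g = 2
secret_x = 424242
y = pow(g, secret_x, p)   # easy direction

def brute_force_dlog(g, y, p):
    """Find some x with g**x % p == y by exhaustive search."""
    acc = 1
    for x in range(p):
        if acc == y:
            return x
        acc = (acc * g) % p
    return None

x = brute_force_dlog(g, y, p)
assert pow(g, x, p) == y
```

At cryptographic sizes the search space is astronomically larger, which is why the problem is treated as hard; the EPFL result concerns special variants where shortcuts exist.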

In fact, a little over a year ago, cracks began to appear. Robert Granger, then at University College Dublin but who has since joined LACAL, showed that, surprisingly, the problem's first phase can be solved very easily. Antoine Joux, a French scientist, independently had similar insights, and together with a French team then showed how to make the second and final phase "very nearly" easy. Many other cryptographers then stepped in.

However, these methods only managed to overcome a very particular type of discrete logarithm problem. Nothing suggested that they could crack the industrial variants.

An EPFL team, together with Jens Zumbrägel from TU Dresden, therefore focused on a "family" of algorithms presented as candidates for the next generation of encryption keys, which made use of "supersingular curves." "We proved that it would take only two hours for EPFL computers to solve a problem of this kind, whereas it was believed that it would take 40,000 times the age of the universe for all computers on the planet to do it!" explains Thorsten Kleinjung, post-doctoral fellow at LACAL.

However, there is no need to panic: this system had not yet been deployed, and the EPFL researchers' success is not likely to be misused. "Basically, we just excluded this option from the search for a successor to current algorithms," said Arjen Lenstra, whose team will present its findings this August at the Crypto 2014 conference, which has accepted their paper.


**More information:** Robert Granger, Thorsten Kleinjung, Jens Zumbrägel. "Breaking '128-bit Secure' Supersingular Binary Curves (or how to solve discrete logarithms in F_{2^{4·1223}} and F_{2^{12·367}})." arXiv:1402.3668. arxiv.org/abs/1402.3668

**Citation**: Researchers crack unassailable encryption algorithm in two hours (2014, May 20) retrieved 17 September 2019 from https://phys.org/news/2014-05-unassailable-encryption-algorithm-hours.html

## User comments

**IamVal**: The only way to ensure safe encryption is to use the data to be encrypted as the key, and to have enough procedurally unparallelizable data transformations to make automated brute force impractical; of course, redundant loops of these sufficiently complex processes are the next step. Hide the key as a salt across multiple steps, decrypted with a set from the newly encrypted data, where the insertion doesn't affect that set.

That is to say, the key has to be in the complexity of the steps required to transform it, not in a physical set of text, where it takes minutes or hours to decrypt on a home computer when you know the key, which has to be simple and memorizable.

I'm an amateur cryptographer. The problem with this system is, of course, exponential inflation. Until recently, storing more data than necessary and wasting flops on cryptography was an insane business practice.

**antialias_physorg**: That sounds stupid, because your key is then the ultimate crib.

That's the point of all crypto algorithms - and was even the point of those based on the supersingular curves.

Using the same cryptographic method multiple times weakens the result - not strengthens it.

The only really (mathematically) safe method is a one-time pad. For that you need safe transmission of the key - and that is exactly how quantum cryptography works. It does not require exponential inflation of data (merely a doubling of the data you want to send).
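The one-time pad mentioned here is simple enough to sketch in a few lines (an editorial illustration; `os.urandom` stands in for the truly random, securely shared, never-reused pad the scheme actually requires):

```python
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR each byte with a pad of equal length.
    The same function encrypts and decrypts, since XOR is its own inverse."""
    assert len(pad) == len(data), "pad must be exactly as long as the message"
    return bytes(d ^ p for d, p in zip(data, pad))

msg = b"attack at dawn"
pad = os.urandom(len(msg))   # stand-in for a securely exchanged random pad
ct = otp_xor(msg, pad)       # encrypt
pt = otp_xor(ct, pad)        # decrypt
assert pt == msg
```

The security argument is exactly the "doubling" noted above: the pad is as long as the message, perfectly random, and used once; reuse or a biased pad destroys the guarantee.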

**antialias_physorg** (replying to robert_platter and Whydening Gyre): That's called "security by obscurity" (which is the worst possible way to do safe data transfer).

If you store it in the way you suggest you are also taking the steganography out of steganography (i.e. you're storing the data in plain sight... so you have now reduced it from "the worst possible way to hide data" to "not hiding data at all").

Photos are anything but random. There is huge average correlation between neighboring pixels, not least because of the mechanism by which the photo is taken (CCD chips).

If you want to "do random," use radioactive decay signals or similar.

Cryptography is nothing for the layman. Don't fool yourselves into thinking you could devise a secure system without a serious math degree and in-depth study of the subject.

**Whydening Gyre**: Amen to that one, brother.

**TAz00** (replying to rkolter): Trimmed your message down to the bare bones. You don't earn the right to say "when will they ever learn" if you call yourself an amateur in the field whose methods you are trying to overthrow.

**SoylentGrin**: It was a candidate, not the latest or the best in use. It's not even a candidate anymore.

**antonima** (replying to Urgelt): I suspect (but do not know) that EPFL operates a supercomputer cluster. Not a famously capable one, but there are many such available to cryptology researchers.

The article doesn't make clear what is precisely meant by "cracking," but I can guess at that, too. The researchers demonstrated that they can break into a container encrypted with this algorithm in roughly two hours on their supercomputer cluster. Actual time to decrypt any given container will vary around that mean. That puts this particular encryption algorithm in the "broken" category - unsuitable for any serious encryption purpose.

It does not mean that any average guy with a laptop can break into a container encrypted by this algorithm. But the snoops at the NSA would find it a trivial challenge, and so might some criminals.

**malapropism**: You mean increasing the CPU cost of cracking the encryption - there are already several encryption methods that do this; some let you specify the number of encryption iterations, making correct decryption harder to reach by guesswork. The salt can't be "hidden" at some downstream encryption position awaiting successful decryption, however, because you need the salt to begin the process. And anyway, having the salt isn't the fundamental blocker in these schemes - it's increasing the cost of attack so much that you encourage the bad guys to try their luck elsewhere instead.
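Python's standard library exposes exactly this kind of tunable iteration count (an editorial sketch, not from the article): in `hashlib.pbkdf2_hmac`, the salt and iteration count are inputs to the very first step, which is why the salt cannot be hidden "downstream" of the decryption it enables.

```python
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)   # stored alongside the hash in the clear, not hidden

# Raising the iteration count multiplies the attacker's cost per guess;
# the legitimate user pays the same cost exactly once per login.
key_fast = hashlib.pbkdf2_hmac("sha256", password, salt, 1_000)
key_slow = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

# Same inputs always yield the same derived key, which is what
# makes verification possible in the first place.
assert key_slow == hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
```

The iteration counts here are illustrative; the point is that the slowdown is a deliberate, adjustable parameter rather than a secret.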

**alfie_null** (replying to antialias_physorg): The paper states that they used 52,240 core hours. EPFL has access to a BlueGene/Q supercomputer with 16,348 cores. (While that particular machine is not on the list, BlueGene/Q machines currently represent 4 of the top 10 fastest supercomputers.)

**antialias_physorg**: They showed that they could significantly reduce the complexity of the problem. If I read it correctly (and to be honest, I just browsed it): after their algorithmic improvements they could crack a message encrypted with a 128-bit key as easily as a message encrypted with a 59-bit key without the improvements (which is pretty significant).
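For scale, the gap between 128-bit and 59-bit security can be expressed as a raw work factor (an editorial back-of-envelope using the figures quoted in the comment above, treating security levels as exhaustive-search sizes):

```python
# If breaking a scheme drops from 2**128 operations to 2**59,
# the attacker's work shrinks by a factor of 2**(128 - 59) = 2**69,
# roughly 5.9 * 10**20.
speedup = 2**128 // 2**59
assert speedup == 2**69
```

That ratio is why a reduction like this moves a scheme from "impossible" to "feasible on a large cluster."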

**persephone1961** (replying to antialias_physorg): If your key is even a tiny bit less than totally random (e.g. generated via an algorithm), then multiple passes will make the encryption LESS secure, because the correlations compound.

(This is one of the reasons why using multiple different algorithms in sequence is not a good idea: you're reducing security to the LEAST secure algorithm in your chain.)

**bluehigh**: Seriously, does anyone believe a "man in the middle" has the resources to intercept and decode my $50 PayPal transaction? Just fear-mongering nonsense.

Oh, I know... the NSA or Euro Mafia might be decrypting my online shopping payments.

**IamVal**: Your stumbling point is in trying to avoid data inflation and repetitive correlation.

Both are solved when the key determines the steps of the algorithm. Safe transfer is always going to be a problem, and until quantum computing actually rears its ugly head for the masses, memorizable passkeys are the way to go. Since they don't generate enough data to secure anything, you then use the data to be encrypted as the secondary key. Hide the memorized key as a non-random, data-ciphered salt in the first few steps for veracity checking, and run that through your system a few times. The complexity of figuring out which algorithm will be used with each byte adds to the key size exponentially.

Then, when compiling, have a set of pre-processor directives that change the order of the non-parallelizable processes and you've got as secure an algorithm as you'll get.

**IamVal**: You want a system that's mathematically provable, when in reality doing the studying and fully understanding (and publishing) the statistics about your routine is allowing access to its weaknesses. Yours is contingent on simple but highly difficult problems, where my ideal is a problem that is very difficult to understand but can be demonstrated in practice.

Compounding a routine only adds correlation if the routine is static and not determined by the data to be transformed. Sure, you might be able to pick up all of my individual transformations over time, but you can't use them practically to solve any other ciphertext.

Making the routine obscenely long for the end user breaks brute forcing. PGP requires less than 0.01 seconds to process a single decryption with the key. My routines take 30 seconds to 2 minutes for the same - and that's when you know the key and have the proper algo-switch.

**IamVal**: Truly, my ideals are supposed to protect against all the adaptive brute-forcing techniques we're likely to face. Even if they could decode one message, it wouldn't necessarily get them any closer to understanding how the process works; they couldn't adapt to another one.

It's too much data, with too many branches and no obvious patterning 30,000 loops in.

**antialias_physorg**: I come from an information theory background. Data inflation does not increase information (you gain nothing in terms of secrecy/encryption strength and only bloat your data heap).

That's not some conjecture. That's just simple math. You can't cheat information theory or statistical analysis that way.

No, it does not, if your routine is weak (as, for example, the family of algorithms shown to be weak in the article). The difficulty of breaking it does not increase linearly with the size of the key (as noted in the paper, which you can access via the link at the end of the article). This is true of many algorithms. Make the key longer and sooner or later you gain no secrecy but just bloat your message (making it impractical to encrypt/decrypt in a sensible amount of time).

**Rupyro** (replying to antialias_physorg): Severity of punishment doesn't work as a deterrent, because there is one thing common to all criminals: they don't expect to get caught (otherwise they wouldn't do it).

If you don't expect to get caught, you don't care how high the punishment is (and most criminals are unaware of the extent of the possible punishment before getting caught in the first place, so the severity of punishment doesn't affect their actions either way).

**dedereu** (replying to russell_russell): Help me understand this. What is random about radioactive decay signals? Leaving aside the usefulness of "random" for cryptography, how could the quantum Zeno effect render decay predictable?

**someone11235813**: Correction: without cryptography, banks would not issue credit cards in the first place, seeing as it's their responsibility if their encryption is cracked and money is stolen.

**someone11235813**: Still, it's within their error margins.
