
> In many cases, stacking encryption algorithms actually weakens the resulting security.

Could you please explain how?



Imagine you have a couple of algorithms that scramble a solved Rubik's cube into a configuration that takes at least 20 twists to unscramble [0]. From there, any attempt to make it ‘even more scrambled’ would be pointless — and actually likely make solving the resulting puzzle easier.

Now imagine there's a programmer who wants to make the ultimate cube scrambler despite not knowing any of the above. Their brilliant idea is to take the aforementioned algorithms and chain them together. (Result: snafu.)

In essence, the moral of the story is that one shouldn't try stacking encryption algorithms without first acquiring a pretty good understanding of how they all work.

[0] https://www.popsci.com/science/article/2010-08/gods-number-r...


I think it depends. Imagine a future where quantum computers may be within reach of intelligence agencies, and a quantum-resistant public-key encryption algorithm has been proposed but not yet rigorously vetted. You wouldn't want to trust either algorithm alone, so you use both: encrypt the data with the quantum-resistant algorithm first, then with the classical one. Decrypting would require breaking both; there are no shortcuts.
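
A minimal sketch of that kind of layering, assuming Python's third-party `cryptography` package and substituting two ordinary classical ciphers (ChaCha20-Poly1305 inside AES-256-GCM) for the post-quantum/classical pair, since the point being illustrated is only the use of independent keys:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    # Two independently generated keys: learning one tells you nothing about the other.
    inner_key = ChaCha20Poly1305.generate_key()
    outer_key = AESGCM.generate_key(bit_length=256)

    def layered_encrypt(plaintext):
        n1, n2 = os.urandom(12), os.urandom(12)
        inner = ChaCha20Poly1305(inner_key).encrypt(n1, plaintext, None)
        outer = AESGCM(outer_key).encrypt(n2, inner, None)
        return n1, n2, outer

    def layered_decrypt(n1, n2, outer):
        inner = AESGCM(outer_key).decrypt(n2, outer, None)
        return ChaCha20Poly1305(inner_key).decrypt(n1, inner, None)

    n1, n2, ct = layered_encrypt(b"attack at dawn")
    assert layered_decrypt(n1, n2, ct) == b"attack at dawn"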


That’s not how it works, unless the layers share a key somehow and one of them reveals it. Otherwise an attacker could take something encrypted with a good algorithm and encrypt the ciphertext with a bad algorithm themselves to make it easier to crack.


I gave an intuition for how combining algorithms (in a bad way) can result in weaker encryption, without claiming that it must always happen.

If we move the goalposts so that the combined construction takes a much larger key than any of the individual algorithms we're comparing it against for crackability, then the likely failure mode isn't ‘weaker’ any more, but ‘stronger, though maybe not as much stronger as was intended’.

The history of Triple DES provides a nice practical example: ‘double DES’ isn't a thing because encrypting already-DES-encrypted data with DES again, under a completely independent key (thus effectively doubling the key size), does almost nothing to improve security; a meet-in-the-middle attack brings the cost of recovering both keys back down to roughly that of a single-key search.
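
For the curious, here's a minimal sketch of that attack against a made-up one-byte toy cipher (not real DES): tabulating the intermediate values from both sides recovers both keys for roughly the cost of two single-key searches instead of their product.

    # Toy one-byte cipher invented purely for illustration -- not DES.
    def toy_encrypt(key, block):
        return ((block ^ key) * 167 + key) % 256

    def toy_decrypt(key, block):
        # 23 is the multiplicative inverse of 167 mod 256
        return (((block - key) * 23) % 256) ^ key

    def meet_in_the_middle(pairs):
        # pairs are known (plaintext, ciphertext) values where
        # ciphertext = toy_encrypt(k2, toy_encrypt(k1, plaintext))
        p0, c0 = pairs[0]
        forward = {}
        for k1 in range(256):                      # 256 encryptions...
            forward.setdefault(toy_encrypt(k1, p0), []).append(k1)
        candidates = []
        for k2 in range(256):                      # ...plus 256 decryptions,
            for k1 in forward.get(toy_decrypt(k2, c0), []):
                candidates.append((k1, k2))        # not 256 * 256 trials
        for p, c in pairs[1:]:                     # weed out false positives
            candidates = [(a, b) for a, b in candidates
                          if toy_encrypt(b, toy_encrypt(a, p)) == c]
        return candidates

    k1, k2 = 0x3a, 0xc5
    pairs = [(p, toy_encrypt(k2, toy_encrypt(k1, p))) for p in (0, 85, 170)]
    print(meet_in_the_middle(pairs))   # the real pair (58, 197) survives the filtering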


To support your point: I've used these weaknesses to break crypto algorithms in the past.

A typical example is the Crypto-1 cipher used to encrypt Mifare Classic NFC cards. The way it reads from the shift register and combines the bits was dumb, and that extra complexity actually weakened the algorithm.

Another I've seen is XORing two sequentially generated keys against one another to produce an "encryption" key. It turns out that reading twice from a low-entropy source very quickly yields keys similar enough that XORing them partially cancels the first one out.
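
A tiny sketch of that last failure mode, with made-up key values: when the two "keys" come from a counter-like, low-entropy source, most of their bytes match, so the XOR is mostly zeros and masks almost nothing.

    # Hypothetical back-to-back reads from a counter-like, low-entropy source
    k1 = bytes.fromhex("00000000000000a1")
    k2 = bytes.fromhex("00000000000000a2")
    combined = bytes(a ^ b for a, b in zip(k1, k2))
    print(combined.hex())   # '0000000000000003' -- the XOR cancels almost everything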


Can you give an example (or at least a sketch) of how, e.g., AES-128 on top of (or below) DES is weaker than either?

The claims of "weakened by combining" are often aired, but all the examples I've found so far basically boil down to "remaining within a group structure" (as in your Rubik's cube example, whose God's number is 20). That might not be stronger than each individual algorithm, but it's unlikely to be weaker either, and algorithm combinations usually DON'T remain within a group structure (e.g. DES is not a group, so 3DES is strictly stronger than DES, even if it isn't three times as strong in bits).


Yes, the problem is that you are confusing key re-use with layering algorithms. Key re-use is bad; layering algorithms is not.

Without key re-use, there is no way adding another algorithm can weaken an existing encryption.


If I’m following this logic correctly, running a few more encryption algorithms over something will make it easier rather than harder to decrypt?

Is the ceiling for “max encryption” that low, or is it just that one algorithm combined with another has a local maximum?


I just cherry-picked a simple example to make a clear illustration, but …

> running a few more encryption algorithms over something will make it easier rather than harder to decrypt?

No, it could make it easier, harder, or about the same. The ‘harder’ case is just unlikely when the algorithms one started with were already state-of-the-art and the programmer didn't know what they were doing. It might seem tempting to think that a cryptanalyst now has to do twice the work, but what they're really doing isn't cracking multiple encryptions — they're just attacking a different encryption.


Thanks!


Very basic example: ROT13 is a form of encryption. Applying ROT13 twice gives you plaintext.
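
In Python this is easy to check, since the standard library ships a rot13 codec:

    import codecs
    once = codecs.encode("attack at dawn", "rot13")   # 'nggnpx ng qnja'
    print(codecs.encode(once, "rot13"))               # 'attack at dawn' -- back to plaintext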

It's of course not that trivial with better encryption algorithms. But before stacking encryption algorithms, try to first answer what you are trying to achieve (that application of a single algorithm does not).


It's more like doing ROT13 and ROT12 to strengthen the encryption. To decrypt you just need to do ROT1, not ROT-12 followed by ROT-13.
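
A quick sketch of that, with a throwaway rot helper written just for this comment:

    import string

    def rot(n, text):
        # Caesar shift by n (toy helper, lowercase letters only)
        lower = string.ascii_lowercase
        return text.translate(str.maketrans(lower, lower[n % 26:] + lower[:n % 26]))

    stacked = rot(12, rot(13, "attack at dawn"))   # two "layers" = ROT25
    print(rot(1, stacked))                         # 'attack at dawn' -- one ROT1 undoes both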


There are ways where it can, but it is usually more secure to stack encryption, despite what people on HN tell you. The NSA does stack two layers of encryption for the secure version of the mobile phone that they give to high-level diplomats and POTUS, and there are cases like Cloudbleed where the only sites that turned out to be fully secure were the ones that used client-side encryption in addition to HTTPS. I'm not saying that this would necessarily be more secure, just that it tends to be, based on all of the research I've done, personal experience on projects, and conversations with people who actually break encryption for a living. The devil is in the details, though, and it also depends on the nature of your adversary.



