Also - should we not be switching over to these algorithms starting like... now? Am I wrong that anyone collecting https traffic now will be able to break it in the future?
Signal has a post about using pre and post-quantum together: https://signal.org/blog/pqxdh/
> The essence of our protocol upgrade from X3DH to PQXDH is to compute a shared secret, data known only to the parties involved in a private communication session, using both the elliptic curve key agreement protocol X25519 and the post-quantum key encapsulation mechanism CRYSTALS-Kyber. We then combine these two shared secrets together so that any attacker must break both X25519 and CRYSTALS-Kyber to compute the same shared secret.
The pattern typically used for this is that the key for the high-speed symmetric encryption is split into multiple parts, each of which is encrypted with a separate public key system. One classical, one (or two, now?) with a post-quantum algorithm. As long as each part of the key is big enough, this still means you'd need to crack all the separate public key algorithms, but doesn't introduce any of the layering weaknesses.
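To make that split-and-combine pattern concrete, here's a minimal sketch (hypothetical function and variable names; the KDF parameters are illustrative, not Signal's actual ones). Both shared secrets feed one HKDF-style extract-then-expand step, so recovering the session key requires breaking both key agreements:

```python
import hashlib
import hmac

def combine_shared_secrets(ss_classical: bytes, ss_pq: bytes,
                           info: bytes = b"hybrid-handshake") -> bytes:
    """Derive one session key from two independently established secrets.

    HKDF-style extract-then-expand over SHA-256: the output depends on
    BOTH inputs, so an attacker must break both the classical and the
    post-quantum key agreement to compute it.
    """
    # Extract: compress both secrets into one pseudorandom key
    prk = hmac.new(b"\x00" * 32, ss_classical + ss_pq, hashlib.sha256).digest()
    # Expand: derive a 32-byte session key bound to a context label
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

# Placeholder values standing in for real protocol outputs
ss_x25519 = bytes(32)                  # would come from an X25519 exchange
ss_kyber = bytes.fromhex("11" * 32)    # would come from a Kyber decapsulation
session_key = combine_shared_secrets(ss_x25519, ss_kyber)
```

Note this is key *combination* at the KDF level, not nested encryption, so it sidesteps the layering questions discussed below.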
In the early days of SSL there were cross-protocol information leaks if you used the same key or related keys for different protocols or protocol versions. In the DROWN attack, I can get some ciphertext from you in TLS, then feed related ciphertexts back to you in SSLv2 (an ancient version) if you're using the same key for both and have both enabled. With enough tries - a practical number of tries, not 2^64 - I can find the decryption of that ciphertext, and then I can calculate the key for the TLS session I intercepted.
Well, I can't because I'm not a leading cryptographer, but some people can.
Here's Wikipedia: https://en.wikipedia.org/wiki/Multiple_encryption
I'm no expert here, but if I understand Wikipedia correctly:
* Be sure to use distinct keys and IVs for each individual layer.
* Be aware that encrypting ciphertext could lead to a known-plaintext attack on the outer cipher, if the inner ciphertext starts in a standard way (file header etc.)
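A toy sketch of the layering, with a deliberately fake "cipher" (SHA-256 in counter mode, for illustration only, never for real use): the point is just that each layer gets its own independent key and IV, and decryption peels the layers in reverse order.

```python
import hashlib

def toy_stream_xor(key: bytes, iv: bytes, data: bytes) -> bytes:
    """Toy stream cipher: SHA-256 in counter mode. NOT a real cipher --
    it only exists to make the layering concrete."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + iv + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Inner plaintext starting with a standard file header (the
# known-plaintext concern above applies to the OUTER layer's input too)
msg = b"PK\x03\x04 a zip header makes plaintext partly predictable"

# Each layer uses its own independent key and IV
inner = toy_stream_xor(b"key-one", b"iv-one", msg)
outer = toy_stream_xor(b"key-two", b"iv-two", inner)

# Decrypt by peeling the layers in reverse (XOR is its own inverse here)
recovered = toy_stream_xor(b"key-one", b"iv-one",
                           toy_stream_xor(b"key-two", b"iv-two", outer))
assert recovered == msg
```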
In fact it’s common practice in high security government use cases to mandate two layers built by two different vendors for defense in depth. That way a nasty bug in one doesn’t compromise anything, and the odds of a nasty exploit in both at once are quite low.
You might be thinking of bespoke combinations of algorithms at the cryptographic construction level where the programmer is trying to be clever. Being clever with crypto is dangerous unless you really know what you are doing.
I do not say this lightly, having followed the academic side of QC for more than a decade.
There's also a good chance that the initial compromises of the classical algorithms won't be made public, at least initially. There are multiple nation-state actors working on the problem in secret, in addition to the academic and commercial entities working on it more publicly.
Various US standards require encryption algorithms to be considered safe for the next 30 years.
Sufficiently big quantum computers are likely in the next 30 years, but it's not urgent in any other meaning of the word.
Imagine if next year, via magic wand, all current TLS systems were completely and totally broken, such that the whole of the internet using TLS became effectively unencrypted. How much damage would that do to the ecosystem? Now suppose we had also just invented a new protocol that works: how long would it take to deploy it to just 50%? To 80%? And how long would it take to replace the long tail?
I'll also leave "record now, decrypt later" for another commenter.
All the big important systems are again and again vulnerable to these attacks (Cisco, M$, Fortinet, etc.) - but of course those aren’t “sexy” problems to research and resolve, so we get the same stuff over and over again while everyone is rushing to protect against some science fiction crypto attacks that have been complete fantasy for the last 30 years. It’s all a bit tiring to be honest.
Your argument is akin to,
> The problem is that a lot of physicians concentrate on diabetes, or hypertension, when there's people who have been stabbed, or shot. Constantly hearing about how heart disease is a big problem is tiring to be honest.
Also, I'm not sure what circles you run in, but if you had to ask any of my security friends if they wanted to spend time on a buffer overflow, or xss injection, or upgrading crypto primitives for quantum resistance... not a single one would pick quantum resistance.
> The problem is more that people concentrate a lot of energy on hypothetical future quantum attacks when the actual threats have been the same since the 00s
Just so I can be sure... you meant having the qubits to deploy such an attack, right? Because really the only thing stopping some of the quantum computing based attacks is the number of stable qubits. They're not hypothetical attacks; they've been shown to work.
I commend your friends but many people in these HN threads seem to be ready to implement post-quantum encryption right now to protect against some future threats.
> you meant having the qubits to deploy such an attack, right
Yes - last time I checked it was like 3 stable qubits. It’s just so far off from being a reality that I really can’t take that research seriously. I feel like a lot of resources are wasted on this kind of research when we are still dealing with very basic problems that just aren’t as sexy to tackle.
Edit: heart disease is a real thing, so your analogy is lacking - there have been 0 security risks because of quantum in the real world. It’s more like “physicians concentrating on possible alien diseases for when we colonise the universe in the future while ignoring heart disease”
The problem is, this sort of question suffers from a lot of unknown unknowns. How confident are you that we don't see crypto broken by QC in the next 10 years? The next 20? Whatever your confidence, the answer is probably "not confident enough" because the costs of that prediction being wrong are incalculable for a lot of applications.
I'd say I'm 99% confident we will not see QC break any crypto considered secure now in the next 20 years. But I'll also say that the remaining 1% is more than enough risk that I think governments and industry should be taking major steps to address that risk.
> NIST selects HQC as fifth algorithm for post-quantum encryption
The other 3 are digital signature algorithms, not encryption.
Solving Ax=b is like week 2 of undergraduate linear algebra. Solving Ax+e=b, in a lattice setting, is Hard in the same sense factoring is: as you run the steps of elimination to attempt to solve it, the errors compound catastrophically.
What you described would be closer to learning with errors (https://en.m.wikipedia.org/wiki/Learning_with_errors) combined with SIS/SVP. Learning with errors is based on the parity learning problem (https://en.m.wikipedia.org/wiki/Parity_learning) in machine learning, which I take as a positive sign of security.
Interestingly you can get information theoretically secure constructions using lattices by tuning the parameters. For example, if you make the `A` matrix large enough it becomes statistically unlikely that more than 1 solution exists (e.g. generate a random binary vector `x` and it's unlikely that an `x'` exists that solves for `b`).
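A toy illustration of the gap described above (tiny hypothetical parameters, nowhere near real LWE sizes): with zero error, Gaussian elimination mod q recovers the secret immediately, while perturbing each b value by just ±1 sends the exact same solver to a completely different answer.

```python
import random

def solve_mod(A, b, q):
    """Gaussian elimination over GF(q); returns x with A x = b (mod q)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        # Find a pivot; raises StopIteration if A is singular mod q
        piv = next(r for r in range(col, n) if M[r][col] % q)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], -1, q)
        M[col] = [(v * inv) % q for v in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col]
                M[r] = [(vr - f * vc) % q for vr, vc in zip(M[r], M[col])]
    return [row[n] for row in M]

random.seed(42)
q, n = 97, 4
s = [random.randrange(q) for _ in range(n)]  # the secret

# Keep sampling until A is invertible mod q (singular matrices are rare)
while True:
    A = [[random.randrange(q) for _ in range(n)] for _ in range(n)]
    try:
        solve_mod(A, [0] * n, q)
        break
    except StopIteration:
        pass

b_exact = [sum(a * x for a, x in zip(row, s)) % q for row in A]
b_noisy = [(bi + random.choice([-1, 1])) % q for bi in b_exact]

assert solve_mod(A, b_exact, q) == s   # e = 0: week-2 linear algebra
assert solve_mod(A, b_noisy, q) != s   # tiny errors: the solver lands elsewhere
```

Every elimination step scales and adds rows, so it scales and adds the hidden errors too, which is the "compound catastrophically" point made above.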
I found https://er4hn.info/blog/2023.12.16-sphincs_plus-step-by-step... to be a nice introduction.
I found AI (a combo of Grok and ChatGPT-4o) to be the best resource for this. It was able to break the topic down into digestible chunks, then pull it together in a way that made sense. It even made suggestions about which math areas I need to brush up on.
I think lattice-based schemes will eventually be broken by a quantum algorithm. I am fully on board with Lamport signatures and SPHINCS+.
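For reference, a Lamport one-time signature fits in a few lines (a toy sketch from the textbook construction, not a vetted implementation): the private key is 256 pairs of random strings, the public key is their hashes, and signing reveals one preimage per message bit. Each key pair can sign exactly one message, which is the statefulness problem schemes like SPHINCS+ exist to remove.

```python
import hashlib
import secrets

H = lambda x: hashlib.sha256(x).digest()

def keygen():
    # 256 pairs of random 32-byte preimages; the public key is their hashes
    sk = [[secrets.token_bytes(32), secrets.token_bytes(32)] for _ in range(256)]
    pk = [[H(a), H(b)] for a, b in sk]
    return sk, pk

def msg_bits(msg: bytes):
    digest = H(msg)
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, msg: bytes):
    # Reveal one preimage per bit of H(msg). ONE-TIME: reusing sk
    # leaks preimages for both bit values, breaking the scheme.
    return [sk[i][bit] for i, bit in enumerate(msg_bits(msg))]

def verify(pk, msg: bytes, sig) -> bool:
    return all(H(sig[i]) == pk[i][bit] for i, bit in enumerate(msg_bits(msg)))

sk, pk = keygen()
sig = sign(sk, b"hello")
assert verify(pk, b"hello", sig)
assert not verify(pk, b"goodbye", sig)
```

Note the only assumption in play is the hash function, which is the point made a few comments down.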
For signing, I treat Dilithium (lattice-based) as "standard security" and SPHINCS+ (hash-based) as "high security". In particular, the former is used for end user public keys and certificates, while the latter is used for code signing where the larger public key and signature sizes are less of an issue.
In all cases, I wouldn't use PQC without combining it with classical crypto, in Cyph's case X25519/Ed25519. Otherwise you run the risk of a situation like SIDH/SIKE where future cryptanalysis of a particular PQC algorithm finds that it isn't even classically secure, and then you're hosed in the present.
With hash-based signatures, hybridization isn't required. They are the most powerful signature scheme approach by far. The security assumption hash-based signatures rely on is also shared with every other signature scheme (hashes are what are signed). Other schemes come with additional assumptions.
It's unfortunate that hash-based public key exchange is not practical. Merkle Puzzles require 2^n data exchange for 2^2n security.
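A toy sketch of why the Merkle Puzzle gap is only quadratic (hypothetical parameter names, laughably small sizes): Alice publishes N puzzles each weakly encrypted under a small key; Bob brute-forces ONE puzzle (~2^KEY_BITS work), while an eavesdropper who sees only the announced puzzle id must brute-force about N/2 of them on average.

```python
import hashlib
import secrets

N_PUZZLES, KEY_BITS = 64, 10   # toy sizes; real use needs astronomically more
MARKER = b"PUZL"               # redundancy so a correct guess is recognizable

def pad(k: int, n: int) -> bytes:
    # Keystream from the small per-puzzle key (brute-forceable on purpose)
    return hashlib.sha256(b"puzzle" + k.to_bytes(2, "big")).digest()[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Alice: publish N puzzles, each hiding (marker, id, session key)
alice_table, puzzles = {}, []
for i in range(N_PUZZLES):
    session_key = secrets.token_bytes(16)
    alice_table[i] = session_key
    payload = MARKER + i.to_bytes(2, "big") + session_key
    puzzles.append(xor(payload, pad(secrets.randbits(KEY_BITS), len(payload))))

# Bob: pick ONE puzzle and brute-force its small key (~2^KEY_BITS work)
chosen = puzzles[secrets.randbelow(N_PUZZLES)]
for k in range(2 ** KEY_BITS):
    payload = xor(chosen, pad(k, len(chosen)))
    if payload.startswith(MARKER):
        puzzle_id = int.from_bytes(payload[4:6], "big")
        bob_key = payload[6:]
        break

# Bob announces puzzle_id in the clear; Alice looks up the shared key.
# Eve must solve ~N/2 puzzles: only quadratically more work than Bob's.
assert alice_table[puzzle_id] == bob_key
```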
Having said that, while SPHINCS+ seems highly likely to be safe (particularly as far as PQC goes), it isn't impossible that someone finds a flaw in e.g. the construction used to implement statelessness. It's probably fine on its own, and stacking it with something like RSA is maybe more trouble than it's worth, but there's also very little downside to hybridizing with Ed25519 given its overhead relative to SPHINCS+; 64 bytes on top of a ~30 KB signature is practically a rounding error.
(Also, small correction to my last comment: only the SPHINCS+ signatures are large, not the public keys.)
Great talk by Peter Gutmann on why this whole quantum topic is bollocks: https://www.cs.auckland.ac.nz/~pgut001/pubs/bollocks.pdf
Cryptographically relevant quantum computers are definitely not happening in the near term, but 25 years is a long enough time horizon that it is plausible.
Just consider what tech was like 25 years ago. Would anyone (without the benefit of hindsight) in 1999 really have been able to predict modern AI, the ubiquity of smartphones, etc.? Heck, 25 years ago people still thought the internet thing was a fad. Anyone trying to predict 25 years out is full of crap.
NIST is an untrustworthy government agency that occasionally produces useful encryption standards. The answer to "should we use a NIST standard" is to look at what the wider academic cryptography community is saying. Dual_EC_DRBG was complained about immediately (for strange statistical biases and for being impractically slow), and Shumow and Ferguson publicly demonstrated the ability to hide a backdoor in Dual_EC_DRBG in 2007.
If anything, the biggest issue is that the security experts pointing out the obvious and glaring flaws with NIST standards don't get listened to enough.
[0] A random number generator standard designed specifically with a back door that only the creator of its curve constants could make use of or even prove had been inserted. It was pushed by NIST during the Bush Jr. administration.
> In September 2013, both The Guardian and The New York Times reported that NIST allowed the National Security Agency (NSA) to insert a cryptographically secure pseudorandom number generator called Dual EC DRBG into NIST standard SP 800-90 that had a kleptographic backdoor that the NSA can use to covertly predict the future outputs of this pseudorandom number generator. [...] the NSA worked covertly to get its own version of SP 800-90 approved for worldwide use in 2006. The whistle-blowing document states that "eventually, NSA became the sole editor".
https://en.wikipedia.org/wiki/National_Institute_of_Standard...
In NIST's position one could analyze the submissions for vulnerabilities to closely held (non-public) attacks, then select submissions having those vulnerabilities.
1. Pretend to be someone else and enter a backdoored algorithm. Or pressure someone to enter a backdoored algorithm for you. Or just give them the algorithm for the reward of being the winner.
2. Be NIST, and choose that algorithm.
This is the problem with all these modern NIST contest theories. They're not even movie plots. Your last bit, about them paying someone like Peikert off, isn't even coherent; they could do that with or without the contest.
Then why does the contest give you any more confidence that the selection isn't backdoored?
it's only the next natural step
I’d be more concerned with whether NIST colludes with the NSA to approve algorithms they could crack.
https://en.wikipedia.org/wiki/Dual_EC_DRBG
It's not at all impossible to put a backdoor in a protocol which requires knowledge of a key in order to exploit. This isn't even the only example where this is thought to have occurred.
If you introduce a deliberate weakness to your encryption, the overall security is reduced to the security level of that weakness.
Relying on NOBUS ("nobody but us") is hubris (see shadow brokers, snowden, etc.).
There's no reason to think it would have remained a "NOBUS" backdoor forever. Especially if it was more widely used (i.e. higher value), and/or used for longer.
>Using this logic, you would say that no encryption method is possibly secure
I mean, to the extent that a little waterboarding will beat any encryption method, yes, I would say that.
But, for 99.99% of people, your data isn't worth the waterboarding. On the flipside, a backdoor to, say, all TLS communication, would be very worth waterboarding people.
I wonder what other countries do? Do their agencies trust NIST or they recommend their own and run their programs for algorithms. I am thinking of say Germany, France, Britain etc.
https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisati...
https://cyber.gouv.fr/sites/default/files/document/pqc-trans...
https://www.ncsc.gov.uk/whitepaper/next-steps-preparing-for-...
> The NCSC recommends ML-KEM-768 and ML-DSA-65 as providing appropriate levels of security and efficiency for most use cases.
e: and yes, i am aware of the history around nist and crypto
That's the thing about politics... they touch everything. There's a popular YouTuber I like who's got a funny saying: "You might not fuck with politics, but politics will fuck with you!" Fits well here.
You might wanna ignore politics when talking about something that should be pure math, but we're talking about which crypto algorithms are going to be the standards that all commercial software must support. Suddenly we need to consider how confident we are in something. And really, that's all crypto boils down to: confidence in the difficulty of some math. Was this recommended (soon mandated) with more or less care than the other options? How would we be able to tell? Is NIST likely to repeat their previous unethical mistakes?
No. They don't. The level at which politics has actually intersected with my life in the past year is zero. I suspect the same is true for the majority of people in the US.
Your politics are mostly a fashion choice. You don't need to put them on display in literally every conversation. You also cannot possibly change the world around you with this behavior, so I can't understand why so many people feel the need to engage in it.
> Suddenly we now need to consider
The government is massive. You always need to consider this. Pretending that a choice of a single federal official is the difference maker here takes bizarre fashion choices into the completely absurd. The only thing you're doing is alienating half the audience with churlish behavior.
Road maintenance, sewer connections, water and air quality, food safety, and a million other things that you interact with daily are all results of various levels of politics.
Given your obvious disgust with someone else thinking or talking about it (you weren't tagged; you decided to invite yourself into the conversation to proclaim that someone else is wrong for understanding the world differently from you), it's not much of a shock you can't understand why.
> you also cannot possibly change the world around you with this behavior
This feels like the thing you're actually mad about. Complaining at other people for talking about politics (instead of ignoring them) will have even less of an effect. If you want to have an impact ask more questions, don't berate people for not being as smart as you already are, or for daring to see things differently from the way you see them.
As for the change in the world I want to see: first, I want people to be nicer to each other. This us-vs-them thing needs to stop. Second, I don't need to change the world; I'm happy to just improve my little corner of it a bit. Security (and crypto) is my corner, and NIST has made some mistakes that I find problematic. The idea that leadership *does* influence an org isn't normally controversial, so if the leadership changes, it's good to know the trust level is gonna change. And if the leadership changes in a less trustworthy direction, I'd hope more people learn about it, so blind trust in NIST drops. I would call that a small improvement.
> The government is massive. You always need to consider this. Pretending that a choice of a single federal official is the difference maker here takes bizarre fashion choices into the completely absurd. The only thing you're doing is alienating half the audience with churlish behavior.
I mean, the people replying to you while you rage at them must care a bit about the politics of the system. So I can't imagine that calling something they care about a "fashion choice", and then insulting them wouldn't feel alienating. Is this a do as I say, not as I do kinda thing?
That's not the argument being made, you're using that as a strawman to distract from the actual position, which is that indiscriminate layoffs (which is what DOGE is doing) reduce institutional competence and increase the likelihood that whatever scheme is selected is not fit for purpose. Address that argument, not the one you've invented in your head.
> reduce institutional competence and increase the likelihood
most replies interpreted it the same way I did, likely due to the reference to 'loyalties' & 'trust'.
[0]: https://news.ycombinator.com/item?id=43333834 [1]: https://news.ycombinator.com/item?id=43333698 [2]: https://news.ycombinator.com/item?id=43333643