For context, djb has been doing and saying these things since he was a college student:
While a graduate student at the University of California at Berkeley, Bernstein completed the development of an encryption equation (an "algorithm") he calls "Snuffle." Bernstein wishes to publish (a) the algorithm, (b) a mathematical paper describing and explaining the algorithm, and (c) the "source code" for a computer program that incorporates the algorithm. Bernstein also wishes to discuss these items at mathematical conferences, college classrooms and other open public meetings. The Arms Export Control Act and the International Traffic in Arms Regulations (the ITAR regulatory scheme) required Bernstein to submit his ideas about cryptography to the government for review, to register as an arms dealer, and to apply for and obtain from the government a license to publish his ideas. Failure to do so would result in severe civil and criminal penalties. Bernstein believes this is a violation of his First Amendment rights and has sued the government.
After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.
djb has earned my massive respect for how consistent he's been in this regard. I love his belligerence towards authoritarian overreach. He, Phil Zimmermann, Richard Stallman, and the rest are owed great respect for their insistence on their principles, which have paid massive dividends to all of us through the freedom and software that has been preserved and made possible through them. I appreciate them immensely, and I think we all owe them a debt of gratitude for their sacrifices, because they all paid a heavy price for their advocacy over time.
Massive respect from me as well. Insisting on principles is extremely tiring and demoralizing. Doing the right thing constantly requires some serious sacrifice.
The whole world ignores the principles out of convenience. Principles are thrown out the window at the first sign of adversity. People get rich by corrupting and violating principles. It seems like despite all efforts the corrupting forces win anyway. I have no idea how these people find the willpower to keep fighting literal government agencies.
That was when he had the legal expertise of the EFF to help him make his case. Later he decided to represent himself in court and failed.
> This time, he chose to represent himself, although he had no formal legal training. On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it....
> Later he decided to represent himself in court and failed
To be more specific, the government broke out their get out of court free card and claimed they weren't threatening to prosecute him even though they created a rule he was intending to violate. It's a dirty trick the government uses when they're afraid you're going to win so they can get the case dismissed without the court making a ruling.
Amongst the numerous reasons why you _don't_ want to rush into implementing new algorithms is that even the _reference implementation_ (and most other early implementations) of Kyber/ML-KEM included multiple timing side channel vulnerabilities that allowed for key recovery.[1][2]
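A sketch of the class of bug in question, assuming the widely reported "KyberSlash" pattern: a division by q on a secret value can compile to a variable-latency divide instruction, leaking the operand through timing. The fix replaces the division with a multiply-and-shift (80635 is roughly 2^28/3329) that is exact over the whole field; the assert checks the equivalence exhaustively. This is a toy Python illustration of the idea, not the actual C code.

```python
KYBER_Q = 3329

def compress1_division(u: int) -> int:
    # Original pattern: an integer division by q whose latency can depend
    # on the secret operand on some CPUs/compilers.
    return ((2 * u + KYBER_Q // 2) // KYBER_Q) & 1

def compress1_ct(u: int) -> int:
    # Division-free replacement: multiply by 80635 (~= 2**28 / 3329) and
    # shift, which happens to be exact for every u in [0, q).
    return (((2 * u + 1665) * 80635) >> 28) & 1

# Exhaustive check that the rewrite computes the same bit for the whole field.
assert all(compress1_division(u) == compress1_ct(u) for u in range(KYBER_Q))
```

The point is that nothing in the mathematical spec hints that the obvious one-line implementation leaks; you only find out by thinking about instruction latency.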
djb has been consistent in his view for decades that cryptography standards need to consider the foolproofness of implementation, so that a minor implementation mistake (specific to the timing of certain instructions on certain CPU architectures, certain compiler optimisations, etc.) doesn't break the security of the implementation. See for example the many problems of the NIST P-224/P-256/P-384 ECC curves, which djb has been instrumental in fixing through widespread deployment of X25519.[3][4][5]
Given the emphasis on reliability of implementations of an algorithm, it's ironic that the Curve25519-based Ed25519 digital signature standard was itself specified and originally implemented in such a way as to lead to implementation divergence on what a valid and an invalid signature actually are. See https://hdevalence.ca/blog/2020-10-04-its-25519am/
Not a criticism, if anything it reinforces DJB's point. But it makes clear that ease of (proper) implementation also needs to cover things like proper canonicalization of relevant security variables and that supporting multiple modes of operation doesn't actually lead to different answers of security questions meant to give the same answer.
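One concrete divergence from that post is whether verifiers reject a non-canonical scalar s. A minimal sketch of the strict check (the group order constant is from RFC 8032):

```python
# Ed25519 group order L, from RFC 8032.
L = 2**252 + 27742317777372353535851937790883648493

def s_is_canonical(sig: bytes) -> bool:
    """Strict verifiers reject signatures whose scalar s >= L; lenient ones
    accept the malleated twin s + L, so the two disagree on validity."""
    assert len(sig) == 64              # Ed25519 signature is R (32) || s (32)
    s = int.from_bytes(sig[32:], "little")
    return s < L

# A lenient verifier accepts both of these encodings; a strict one only the first.
sig_ok = bytes(32) + (L - 1).to_bytes(32, "little")
sig_mall = bytes(32) + L.to_bytes(32, "little")
assert s_is_canonical(sig_ok) and not s_is_canonical(sig_mall)
```

Whether this matters depends on whether your protocol needs every party to agree on signature validity (consensus systems do), which is exactly the "same question, different answers" problem.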
This logic does not follow. Your argument seems to be "the implementation has security bugs, so let's not ratify the standard." That's not how standards work though. Ensuring an implementation is secure is part of the certification process. As long as the scheme itself is shown to be provably secure, that is sufficient to ratify a standard.
If anything, standardization encourages more investment, which means more eyeballs to identify and plug those holes.
No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one. This is a property of the algorithm being specified, not just an individual implementation, and we’ve seen it play out over and over again in cryptography.
I’d actually like to see more (non-cryptographic) standards take this into account. Many web standards are so complicated and/or ill-specified that trillion dollar market cap companies have trouble implementing them correctly/consistently. Standards shouldn’t just be thrown over the wall and have any problems blamed on the implementations.
> No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one.
This argument is without merit. ML-KEM/Kyber has already been ratified as the PQC KEM standard by NIST. What you are proposing is that the NIST process was fundamentally flawed. This is a claim that requires serious evidence as backup.
You can't be serious. "The standard was adopted, therefore it must be possible to implement correctly in any and all systems?"
NIST can adopt and recommend whatever algorithms they might like using whatever criteria they decide they want to use. However, while the amount of expertise and experience on display by NIST in identifying algorithms that are secure or potentially useful is impressive, there is no amount of expertise or experience that guarantees any given implementation is always feasible.
Indeed, this is precisely why elliptic curve algorithms are often not available, in spite of a NIST standard being adopted like 8+ years ago!
I'm having trouble understanding your argument. Elliptic curve algorithms have been the mainstream standard for key establishment for something like 15 years now. The NIST standards for the P-curves are much, much older than 8 years.
DJB has specific (technical and non-conspiratorial) bones to pick with the algorithm. He’s as much an expert in cryptographic implementation flaws and misuse resistance as anybody at NIST. Doesn’t mean he’s right all the time, but blowing him off as if he’s just some crackpot isn’t even correctly appealing to authority.
I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.
There are like 3 cryptographers in all of NIST. NIST was a referee in the process. The bones he's picking are with the entire field of cryptography, not just NIST people.
> The bones he's picking are with the entire field of cryptography
Isn't that how you advance a field, though?
It has been a couple hundred years, but we used to think that disease was primarily caused by "bad humors".
Fields can and do advance. I'm not versed enough to say whether his criticisms are legitimate, but this doesn't sound like a problem, but part of the process, to me (and his article is documenting how some bureaucrats/illegitimate interests are blocking that advancement).
The "area administrator" being unable or unwilling to do basic math is both worrying and undermines the idea that the standards being produced are worth anything, which is bad for the entire field.
If the standards are chock full of nonsense, then how does that reflect upon the field?
The standards people have problems with weren't run as open processes the way AES, SHA3, and MLKEM were. As for the rest of it: I don't know what to tell you. Sounds like a compelling argument if you think Daniel Bernstein is literally the most competent living cryptographer, or, alternately, if Bernstein and Schneier are the only cryptographers one can name.
In exactly what sense? Who is the "old guard" you're thinking of here? Peter Schwabe got his doctorate 16 years after Bernstein. Peikert got his 10 years after.
> I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.
Currently he argues that NSA is likely to be attacking the standards process to do some unspecified nefarious thing in PQ algorithms, and he's appealing to our memories of Dual_EC. That's not tinfoil hat stuff! It's a serious possibility that has happened before (Dual_EC). True, no one knows for a fact that NSA backdoored Dual_EC, but it's very very likely that they did -- why bother with such a slow DRBG if not for this benefit of being able to recover session keys?
NSA wrote Dual EC. A team of (mostly European) academic cryptographers wrote the CRYSTALS constructions. Moreover, the NOBUS mechanism in Dual EC is obvious, and it's not at all clear where you'd do anything like that in Kyber, which goes out of its way not to have the "weird constants" problem that the P-curves (which practitioners generally trust) ended up with.
No it didn't. The problem with Dual EC was published in a rump session at the next CRYPTO after NIST published it. The widespread assumption was that nobody was actually using it, which was enabled by the fact that the important "target" implementations (most importantly RSA BSAFE, which I think a lot of people also assumed wasn't in common use, but I may just be saying that because it's what I myself assumed) were deeply closed-source.
None of this applies to anything else besides Dual EC.
That aside: I don't know what this has to do with anything I just wrote. Did you mean to respond to some other comment?
It's more like "the standard makes it easier to create insecure implementations." Our standards shouldn't just be "sufficient" they should be "robust."
AES is actually a good example of why this doesn’t work in cryptography. Implementing AES without a timing side channel in C is pretty much impossible. Each architecture requires specific and subtle constructions to ensure it executes in constant time. Newer algorithms are designed to not have this problem (DJB was actually the one who popularized this approach).
Okay, I should've said implementing AES in C without a timing sidechannel performantly enough to power TLS for a browser running on a shitty ARMv7 phone is basically impossible. Also if only Thomas Pornin can correctly implement your cipher without assembly, that's not a selling point.
I'm not contesting AES's success or saying it doesn't deserve it. I'm not even saying we should move off it (especially now that even most mobile processors have AES instructions). But nobody would put something like an S-box in a cipher created today.
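For concreteness on why S-box tables are a liability: a plain `sbox[x]` puts the secret index into a memory address, so cache timing can reveal it. The standard workaround reads the entire table and selects the wanted entry with masks; sketched here in Python purely as an illustration (Python gives no real timing guarantees, and actual implementations do this in constant-time C or assembly):

```python
def ct_lookup(table: list[int], secret_index: int) -> int:
    # Branch-free select: mask is -1 (all ones) only when i == secret_index.
    # Every entry is read, so the memory access pattern is independent of
    # the secret index (valid here for tables of up to 256 entries).
    result = 0
    for i, entry in enumerate(table):
        mask = ((i ^ secret_index) - 1) >> 8   # -1 if equal, 0 otherwise
        result |= entry & mask
    return result

toy_sbox = [(255 - i) for i in range(256)]     # stand-in, not the AES S-box
assert ct_lookup(toy_sbox, 7) == toy_sbox[7]
```

The cost is obvious: you do 256 reads per lookup instead of one, which is why constant-time software AES is so painful without hardware instructions.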
If your point is "reference implementations have never been sufficient for real-world implementations", I agree, strongly, but of course that cuts painfully across several of Bernstein's own arguments about the importance of issues in PQ reference implementations.
Part of this, though, is that it's also kind of an incoherent standard to hold reference implementations to. Science proceeds long after the standard is written! The best/safest possible implementation is bound to change.
I don't think it's incoherent. On one extreme you have web standards, where it's now commonplace to not finalize standards until they're implemented in multiple major browser engines. Some web-adjacent IETF standards also work like this (WebTransport over HTTP3 is one I've been implementing recently).
I'm not saying cryptography should necessarily work this way, but it's not an unworkable policy to have multiple projects implement a draft before settling on a standard.
Look at the timeline for performant non-leaking implementations of Weierstrass curves. How long are you going to wait for these things to settle? I feel like there's also a hindsight bias that slips into a lot of this stuff.
Certainly, if you're going to do standards adoption by open competition the way NIST has done with AES, SHA3, and MLKEM, you're not going to be able to factor multiple major implementations into your process.
This isn’t black and white. There’s a medium between:
* Wait for 10 years of cryptanalysis (specific to the final algorithm) before using anything, which probably will be relatively meager because nobody is using it
* Expect the standardization process itself to produce a blessed artifact, to be set on fire as a false god if it turns out to be imperfect (or more realistically, just cause everybody a bunch of pain for 20 years)
Nothing would stop NIST from adding a post-competition phase where Google, Microsoft, Amazon, whoever the hell is maintaining OpenSSL, and maybe Mozilla implement the algorithm in their respective libraries and kick the tires. Maybe it’s pointless and everything we’d expect to get from cryptographers observing that process for a few months to a year has already been suitably covered, and DJB is just being prissy. I don’t know enough about cryptanalysis to know.
But I do feel very confident that many of the IETF standards I’ve been on the receiving end of could have used a non-reference implementation phase to find practical, you-could-technically-do-it-right-but-you-won’t issues that showed up within the first 6 months of people trying to use the damn thing.
If by that you mean "perfect the implementation", we already get that! The MLKEM in Go is not the MLKEM in OpenSSL is not the MLKEM in AWS-LC.
If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable. It's the publication of the standard itself that is the forcing function for high-quality competing implementations. In particular, part of arriving at high-quality implementations is running them in production, which is something you can't do without solving the coordination problem of getting everyone onto the same standard.
Here it's important to note that nothing we've learned since Kyber was chosen has materially weakened the construction itself. We've had in fact 3 years now of sustained (urgent, in fact) implementation and deployment (after almost 30 years of cryptologic work on lattices). What would have been different had Kyber been a speculative or proposed standard, other than it getting far less attention and deployment?
("Prissy" is not the word I personally would choose here.)
I mean have a bunch of competent teams that (importantly) didn’t design the algorithm read the final draft and write their versions of it. Then they and others can perform practical analysis on each (empirically look for timing side channels on x86 and ARM, fuzz them, etc.).
> If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable.
The forcing function can potentially be: this final draft is the heir apparent. If nothing serious comes up in the next 6 months, it will be summarily finalized.
It’s possible this won’t get any of the implementers off their ass on a reasonable timeframe - this happens with web standards all the time. It’s also possible that this is very unlikely to uncover anything not already uncovered. Like I said, I’m not totally convinced that in this specific field it makes sense. But your arguments against it are fully general against this kind of phased process at all, and I think it has empirically improved recent W3C and IETF standards (including QUIC and HTTP2/3) a lot compared to the previous method.
Again: that has now happened. What have we learned from it that we needed to know 3 years ago when NIST chose Kyber? That's an important question, because this is a whole giant thread about Bernstein's allegation that the IETF is in the pocket of the NSA (see "part 4" of this series for that charming claim).
Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers. All of them had the knowhow and incentive to write implementations of their constructions and, if it was going to showcase some glaring problem, of their competitors. What makes you think that we lacked implementation understanding during this process?
I don’t think IETF is in the pocket of the NSA. I really wish the US government hadn’t hassled Bernstein so much when he was a grad student, it would make his stuff way more focused on technical details and readable without rolling your eyes.
> Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers.
That’s actually my point! When you’re trying to figure out if your standard is difficult to implement correctly, that everyone who worked on the reference implementations is a genius who understands it perfectly is a disadvantage for finding certain problems. It’s classic expert blindness, like you see with C++ where the people working on the standard understand the language so completely they can’t even conceive of what will happen when it’s in the hands of someone that doesn’t sleep with the C++ standard under their pillow.
Like, would anyone who developed ECC algorithms have forgotten to check for invalid curve points when writing an implementation? Meanwhile among mere mortals that’s happened over and over again.
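The invalid-curve check that keeps getting forgotten is one line of math: verify the attacker-supplied point actually satisfies your curve equation before using it. A toy sketch (toy parameters chosen for readability, not any real standardized curve):

```python
# y^2 = x^3 + A*x + B over GF(P). Toy parameters for illustration only.
P, A, B = 97, 2, 3

def on_curve(x: int, y: int) -> bool:
    return (y * y - (x * x * x + A * x + B)) % P == 0

# The classic attack works because Weierstrass addition formulas don't
# involve B, so points from a different (weaker) curve "work" in scalar
# multiplication and leak key bits unless this check is performed.
assert on_curve(3, 6)        # on the curve: 36 == 27 + 6 + 3 (mod 97)
assert not on_curve(3, 7)    # off the curve: must be rejected
```

Trivial to write once you know you need it; the failure mode is that nothing breaks visibly when you don't.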
I don't think this has much of anything to do with Bernstein's qualms with the US government. For all his concerns about NIST process, he himself had his name on a NIST PQC candidate. Moreover, he's gotten into similar spats elsewhere. This isn't even the first time he's gotten into a heap of shit at IETF/IRTF. This springs to mind:
This wasn't about NSA or the USG! Note the date. Of course, had this happened in 2025, we'd all know about it, because he'd have blogged it.
But I want to circle back to the point I just made: you've said that we'd all be better off if there was a burning-in period for implementors before standards were ratified. We've definitely burnt in MLKEM now! What would we have done differently knowing what we now know?
> What would we have done differently knowing what we now know?
With the MLKEM standard? Probably nothing, Bernstein would have done less rambling in these blog posts if he was aware of something specifically wrong with one of the implementations. My key point here was that establishing an implementation phase during standardization is not an incoherent or categorically unjustifiable idea, whether it makes sense for massive cryptographic development efforts or not. I will note that something not getting caught by a potential process change is a datapoint that it’s not needed, but isn’t dispositive.
I do think there is some baby in the Bernstein bathwater that is this blog post series though. His strongest specific point in these posts was that the TLS working group adding a cipher suite with a MLKEM-only key exchange this early is an own goal (but that’s of course not the fault of the MLKEM standard itself). That’s an obvious footgun, and I’ll miss the days when you could enable all the standard TLS 1.3 cipher suites and not stress about it. The arguments to keep it in are legitimately not good, but in the area director’s defense we’re all guilty of motivated reasoning when you’re talking to someone who will inevitably accuse you of colluding with the NSA to bring about 1984.
In what way is adding an MLKEM-only code point an "own goal"? Exercise for the reader: find the place where Bernstein proposed we have hybrid RSA/ECDH ciphersuites.
> See for example the many problems of NIST P-224/P-256/P-384 ECC curves
What are those problems exactly? The whitepaper from djb only makes vague claims about the NSA being a malicious actor, but after ~20 years no known backdoors nor intentional weaknesses have been reliably proven?
As I understand it, a big issue is that they are really hard to implement correctly. This means that backdoors and weaknesses might not exist in the theoretical algorithm, but still be common in real-world implementations.
On the other hand, Curve25519 is designed from the ground up to be hard to implement incorrectly: there are very few footguns, gotchas, and edge cases. This means that real-world implementations are likely to be correct implementations of the theoretical algorithm.
This means that, even if P-224/P-256/P-384 are on paper exactly as secure as Curve25519, they could still end up being significantly weaker in practice.
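A concrete instance of that footgun-removal is X25519's scalar "clamping" (RFC 7748): every 32-byte private key is forced into a shape that sidesteps small-subgroup inputs and variable-length ladder loops. A sketch:

```python
def clamp(scalar32: bytes) -> int:
    """X25519 scalar clamping per RFC 7748."""
    k = bytearray(scalar32)
    k[0] &= 248        # multiple of 8: cofactor small-subgroup parts vanish
    k[31] &= 127       # clear bit 255
    k[31] |= 64        # set bit 254: every ladder runs the same length
    return int.from_bytes(k, "little")

# Any 32 random bytes become a usable private key with these properties.
s = clamp(bytes(range(32)))
assert s % 8 == 0 and s.bit_length() == 255
```

Compare the P-curve situation, where the implementer must remember to validate inputs and pad scalars themselves; here the spec makes the safe behavior unconditional.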
I tried to defend a similar argument in a private forum today and basically got my ass handed to me. In practice, not only would modern P-curve implementations not be "significantly weaker" than Curve25519 (we've had good complete addition formulas for them for a long time, along with widespread hardware support), but Curve25519 causes as many (probably more) problems than it solves --- cofactor problems being more common in modern practice than point validation mistakes.
In TLS, Curve25519 vs. the P-curves are a total non-issue, because TLS isn't generally deployed anymore in ways that even admit point validation vulnerabilities (even if implementations still had them). That bit, I already knew, but I'd assumed ad-hoc non-TLS implementations, by random people who don't know what point validation is, might tip the scales. Turns out guess not.
Again, by way of bona fides: I woke up this morning in your camp, regarding Curve25519. But that won't be the camp I go to bed in.
> As I understand it, a big issue is that they are really hard to implement correctly.
Any reference for the "really hard" part? That is a very interesting subject and I can't imagine it's independent of the environment and development stack being used.
I'd welcome any standard that's "really hard to implement correctly" as a testbed for improving our compilers and other tools.
I posted above, but most of the 'really hard' bits come from the unreasonable complexity of actual computing vs the more manageable complexity of computing-with-idealized-software.
That is, an algorithm and compiler and tool safety smoke test, and improvement thereby, is good. But you also need to think hard about what happens when someone induces an RF pulse at specific timings targeted at a certain part of a circuit board, say, when you're trying to harden these algorithmic implementations. Lots of these are things that compiler architects typically say are "not my problem".
It would be wise for people to remember that it's worth doing basic sanity checks before making claims like "no backdoors from the NSA". Strong encryption has been restricted historically, so we had things like DES and 3DES and Crypto AG. In the modern internet age, Juniper had a bad time with this one: https://www.wired.com/2013/09/nsa-backdoor/.
Usually it’s really hard to distinguish intent, and so it’s possible to develop plausible deniability with committees. Their track record isn’t perfect.
With WPA3, cryptographers warned about the known pitfall of standardizing a timing-sensitive PAKE, and Harkins got it through anyway. Since it was a standard, the WiFi committee gladly selected it, and that resulted in dragonbleed among other bugs. The hash-to-curve techniques have since patched that.
It's "Dragonblood", not "Dragonbleed". I don't like Harkins's PAKE either, but I'm not sure what fundamental attribute of it enables the downgrade attack you're talking about.
When you're talking about the P-curves, I'm curious how you get your "sanity check" argument past things like the Koblitz/Menezes "Riddle Wrapped In An Enigma" paper. What part of their arguments did you not find persuasive?
Yes, Dragonblood. I'm not speaking of the downgrade but of the timing side channels, which were called out very loudly and then ignored during standardization. And then the PAKE showed up in WPA3 of all places; that was the key issue, and it was later extended in a Brainpool-curve-specific attack against the proposed initial mitigation. It's a good example of error by committee. I haven't addressed that article, and I don't know why the NSA advised migration that early.
The Riddle paper I've not read in a long time, if ever, though I don't understand the question. As Scott Aaronson recently blogged, it's difficult to predict human progress with technology, and it's possible we'll see Shor's algorithm running publicly sooner than consensus expects. It could be that in 2035 the NSA's call 20 years prior looks like it was the right one, in that ECC is insecure, but that wouldn't make the replacements secure by default, of course.
Aren't the timing attacks you're talking about specific to oddball parameters for the handshake? If you're doing Dragonfly with Brainpool curves you're specifically not doing what NSA wants you to do. Brainpool curves are literally a rejection of NIST's curves.
If you haven't read the Enigma paper, you should do so before confidently stating that nobody's done "sanity checks" on the P-curves. Its authors are approximately as authoritative on the subject as Aaronson is on his. I am specifically not talking about the question of NSA's recommendation on ECC vs. PQ; I'm talking about the integrity of the P-curve selection, in particular. You need to read the paper to see the argument I'm making; it's not in the abstract.
Ah, now I see what the question was; it seemed like a non sequitur. I misunderstood the comment by foxboron to be about concerns over any backdoors, not specifically whether P-256 is backdoored. I hold no such view of that; surely Bitcoin should be good evidence.
Instead I was stating that weaknesses in cryptography have been historically put there with some NSA involvement at times.
For Dragonblood: the Brainpool curves do have a worse leak, but as stated in the Dragonblood paper, "we believe that these sidechannels are inherent to Dragonfly". The first attack submission did hit P-256 setups before the minimal iteration count was increased, and afterward it was more applicable to same-system cache/microarchitectural bugs. These attacks were more generally mitigated when deterministic hash-to-curve algorithms rolled out. Many bad choices were selected, of course, that make the PAKE more exploitable: putting the client MAC in the pre-commits, having that downgrade, including the Brainpool curves. But to my point on committees: cryptographers warned strongly during standardization that this could be an attack, and no course correction was taken.
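The "inherent" side channel referenced there comes from Dragonfly's hunting-and-pecking loop: it keeps hashing until it lands on a quadratic residue, so the iteration count (and hence handshake timing) depends on the password. A toy sketch of the leaky shape of the loop (toy prime and a hypothetical derivation function, not the real WPA3 one):

```python
import hashlib

P = 2**127 - 1   # toy Mersenne prime; the real protocol uses a curve's field prime

def is_qr(v: int) -> bool:
    # Euler's criterion: v is a quadratic residue mod P iff v^((P-1)/2) == 1.
    return pow(v, (P - 1) // 2, P) == 1

def hunting_and_pecking(password: bytes) -> int:
    """Leaky variant: the number of loop iterations is password-dependent."""
    counter = 1
    while True:
        digest = hashlib.sha256(password + counter.to_bytes(4, "big")).digest()
        if is_qr(int.from_bytes(digest, "big") % P):
            return counter   # running time ~ counter, observable over the network
        counter += 1
```

Each candidate is a residue with probability about one half, so different passwords exit after different numbers of rounds; an attacker timing many handshakes can winnow a dictionary. The mitigation was to always run a fixed number of iterations and take the first hit, and deterministic hash-to-curve later removed the loop entirely.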
The NSA changed the S-boxes in DES, and this made people suspicious they had planted a back door. But when differential cryptanalysis was later discovered, people realized that the NSA's changes to the S-boxes had made them more secure against it.
That was 50 years ago. And since then we have an NSA employee co-authoring the paper which led to Heartbleed, the backdoor in Dual EC DRBG which has been successfully exploited by adversaries, and documentation from Snowden which confirms NSA compromise of standards setting committees.
> And since then we have an NSA employee co-authoring the paper which led to Heartbleed
I'm confused as to what "the paper which led to Heartbleed" means. A paper proposing/describing the heartbeat extension? A paper proposing its implementation in OpenSSL? A paper describing the bug/exploit? Something else?
And in addition to that, is there any connection between that author and the people who actually wrote the relevant (buggy) OpenSSL code? If the people who wrote the bug were entirely unrelated to the people authoring the paper then it's not clear to me why any blame should be placed on the paper authors.
The original paper which proposed the OpenSSL Heartbeat extension was written by two people, one worked for NSA and one was a student at the time who went on to work for BND, the "German NSA". The paper authors also wrote the extension.
I know this because when it happened, I wanted to know who was responsible for making me patch all my servers, so I dug through the OpenSSL patch stream to find the authors.
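For readers without the background, the bug pattern under discussion can be simulated in a few lines (toy buffer, not the actual OpenSSL code): the heartbeat message carries its own payload-length field, and the implementation echoed back that many bytes without checking the field against the payload actually received.

```python
# Toy simulation of the Heartbleed over-read. The server's heap holds the
# 4-byte payload it received plus whatever happens to sit next to it.
memory = b"ping" + b"|SECRET_PRIVATE_KEY_MATERIAL"

def heartbeat_response(claimed_len: int) -> bytes:
    # Vulnerable: trusts the attacker's claimed_len instead of the
    # actual received payload length (4).
    return memory[:claimed_len]

assert heartbeat_response(4) == b"ping"        # honest request
assert b"SECRET" in heartbeat_response(32)     # over-read leaks adjacent heap
```

The real bug was a `memcpy` with an unvalidated 16-bit length from the record, so up to 64 KB of heap could be read per request.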
I'm asking what the paper has to do with the vulnerability. Can you answer that? Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".
> Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".
This statement makes it clear to me that you don't understand a thing I've said, and that you don't have the necessary background knowledge of Heartbleed, the XZ backdoor, or concepts such as plausible deniability to engage in useful conversation about any of them. Else you would not be so confused.
Please do some reading on all three. And if you want to have a conversation afterwards, feel free to make a comment which demonstrates a deeper understanding of the issues at hand.
Sorry, you're not going to be able to bluster your way through this. What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?
> What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?
That's a very easy question to answer: the implementation the authors provided alongside it.
If you expect authors of exploits to clearly explain them to you, you are not just ignorant of the details of backdoors like the one in XZ (CMake was never backdoored, a "typo" in a CMake file bootstrapped the exploit in XZ builds), but are naive to an implausible degree about the activities of exploit authors.
If you tell someone you're going to build an exploit and how, the obvious response will be "no, we won't allow you to." So no exploit author does that.
Think the above poster is full of bologna? It's less painful for everyone involved, and the readers, to just say that and get that out of the way rather than trying to surgically draw it out over half a dozen comments. I see you do this often enough that I think you must get some pleasure out of making people squirm. We know you're smart already!
I think their argument is verkakte but I literally don't know what they're talking about or who the NSA stooge they're referring to is, and it's not so much that I want to make them squirm so much as that I want to draw the full argument out.
I think your complaint isn't with me, but with people who hedge when confronted with direct questions. I think if you look at the thread, you'll see I wasn't exactly playing cards close to my chest.
I don't make a habit of googling things for people when they could do it just as quickly themselves. There is only one paper proposing the OpenSSL heartbeat feature. So I have not been unclear, nor can there be any confusion about which it is. Perhaps we'll learn someday what tptacek expects to find or not to find in it, but he'll have to spend 30 seconds with Google. As I did.
Informing one's self is a pretty low bar for having a productive conversation. When one party can't be arsed to take the initiative to do so, that usually signals the end of useful interaction.
A comment like "I googled and found this paper... it says X... that means Y to me." would feel much less like someone just looking for an argument, because it involves effort and stating a position.
If he has a point, he's free to make it. Everything he needs is at his fingertips, and there's nothing I could do to stop him, nor would I want to. I asked for a point first thing. All I've gotten in response is combative rhetoric which is neither interesting nor informative.
The NSA originally wanted a 48-bit key, which would have been weak enough for them to brute-force with their resources. The industry and IBM initially wanted 64-bit. IBM compromised and gave us 56-bit.
Yes, NSA made DES stronger. After first making it weaker. IBM had wanted a 128-bit key, then they decided to knock that down to 64-bit (probably for reasons related to cost, this being the 70s), and NSA brought that down to 56-bit because hey! we need parity bits (we didn't).
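To make the 64-vs-56-bit point above concrete: a DES key is 8 bytes, but the low bit of each byte is an odd-parity check, so only 7 bits per byte are secret. A small illustration (the parity convention is from FIPS 46; the key value here is arbitrary):

```python
# Illustration of the DES key layout discussed above: a 64-bit key blob
# carries only 56 secret bits, because the low bit of each byte is an
# odd-parity check over the other seven bits.

def set_odd_parity(key: bytes) -> bytes:
    """Force the parity bit (LSB) of each byte so the byte's popcount is odd,
    as the DES standard requires."""
    out = bytearray()
    for b in key:
        seven = b & 0xFE                              # the 7 actual key bits
        ones = bin(seven).count("1")
        out.append(seven | (0 if ones % 2 else 1))    # pad to odd parity
    return bytes(out)

key = set_odd_parity(bytes(range(8)))
# Every byte now has odd parity, and only 8 * 7 = 56 bits are secret.
assert all(bin(b).count("1") % 2 == 1 for b in key)
```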
They're vulnerable to "High-S" malleable signatures, while ed25519 isn't. No one is claiming they're backdoored (well, some people somewhere probably are), but they do have failure modes that ed25519 doesn't which is the GP's point.
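The "High-S" malleability mentioned above is easy to sketch: for ECDSA over a group of order n, if (r, s) verifies then so does (r, n − s), so anyone can flip a valid signature without the private key. Consumers defend by rejecting or normalizing "high" s values. A minimal sketch (n here is the secp256k1 group order, used purely as a concrete example):

```python
# Sketch of ECDSA "High-S" malleability: if (r, s) is valid, (r, N - s) is
# too, so systems like Bitcoin enforce a "low-S rule" (reject s > N // 2).

# Group order of secp256k1, used here as a concrete N.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def is_low_s(s: int) -> bool:
    return 1 <= s <= N // 2

def normalize_s(s: int) -> int:
    """Map a signature's s value into the low half of the range."""
    return N - s if s > N // 2 else s

s_high = N - 5                  # a "high" s, e.g. from a malleated signature
assert not is_low_s(s_high)
assert normalize_s(s_high) == 5 and is_low_s(normalize_s(s_high))
```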
in the NIST Curve arena, I think DJB's main concern is engineering implementation - from an online slide deck he published:
We’re writing a document “Security dangers of the NIST curves”
Focus on the prime-field NIST curves
DLP news relevant to these curves? No
DLP on these curves seems really hard
So what’s the problem?
Answer: If you implement the NIST curves, chances are you’re doing it wrong
Your code produces incorrect results for some rare curve points
Your code leaks secret data when the input isn’t a curve point
Your code leaks secret data through branch timing
Your code leaks secret data through cache timing
Even more trouble in smart cards: power, EM, etc.
Theoretically possible to do it right, but very hard
Can anyone show us software for the NIST curves done right?
As to whether or not the NSA is a strategic adversary to some people using ECC curves, I think that's right in the mandate of the org, no? If a current standard is super hard to implement, and theoretically strong at the same time, that has to make someone happy on a red team. At least, it would make me happy, if I were on such a red team.
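One bullet from the slide deck quoted above, "your code leaks secret data when the input isn't a curve point," refers to missing point validation (the invalid-curve attack). A minimal on-curve check for P-256 looks like this (constants are the published NIST domain parameters; this is a sketch of the check, not constant-time production code):

```python
# Minimal on-curve check for NIST P-256: before doing any secret-dependent
# math with a received point, verify it satisfies y^2 = x^3 - 3x + b (mod p).

# Published P-256 domain parameters.
P = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF
B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B
# The standard base point G.
GX = 0x6B17D1F2E12C4247F8BCE6E563A440F277037D812DEB33A0F4A13945D898C296
GY = 0x4FE342E2FE1A7F9B8EE7EB4A7C0F9E162BCE33576B315ECECBB6406837BF51F5

def on_curve(x: int, y: int) -> bool:
    """Reject coordinates not on P-256 (e.g. invalid-curve attack inputs).
    Does not handle the point-at-infinity encoding."""
    if not (0 <= x < P and 0 <= y < P):
        return False
    return (y * y - (x * x * x - 3 * x + B)) % P == 0

assert on_curve(GX, GY)          # the base point is on the curve
assert not on_curve(GX, GY + 1)  # a tweaked point is not
```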
He does a motte-and-bailey thing with the P-curves. I don't know if it's intentional or not.
Curve25519 was a materially important engineering advance over the state of the art in P-curve implementations when it was introduced. There was a window of time within which Curve25519 foreclosed on Internet-exploitable vulnerabilities (and probably a somewhat longer period of time where it foreclosed on some embedded vulnerabilities). That window of time has pretty much closed now, but it was real at the time.
But he also does a handwavy thing about how the P-curves could have been backdoored. No practicing cryptography engineer I'm aware of takes these arguments seriously, and to buy them you have to take Bernstein's side over people like Neal Koblitz.
The P-curve backdoor argument is unserious, but the P-curve implementation stuff has enough of a solid kernel to it that he can keep both arguments alive.
See, this gets you into trouble, because Bernstein has actually a pretty batshit take on nothing-up-my-sleeve constructions (see the BADA55 paper) --- and that argument also hurts his position on Kyber, which does NUMS stuff!
I tried a couple searches and I forget which calculator-speak version of "BADASS" Bernstein actually used, but the concept of the paper† is that all the NUMS-style curves are suspect because you can make combinations of mathematical constants say whatever you want them to say (in combination), and so instead you should pick curve constants based purely on engineering excellence, which nobody could ever disagree about or (looks around the room) start huge conspiracy theories over.
Well, DJB also focused on "nothing up my sleeve" design methodology for curves. The implication was that any curves that were not designed in such a way might have something nefarious going on.
In context, this particular issue is that DJB disagrees with the IETF publishing an ML-KEM only standard for key exchange.
Here's the thing. The existence of a standard does not mean we need to use it for most of the internet. There will also be hybrid standards, and most of the rest of us can simply ignore the existence of ML-KEM-only. However, NSA's CNSA 2.0 (commercial cryptography you can sell to the US Federal Government) does not envisage using hybrid schemes. So there's some sense in having a standard for that purpose. Better developed through the IETF than forced on browser vendors directly by the US, I think. There was rough consensus to do this. Should we have a single-cipher kex standard for HQC too? I'd argue yes, and no, the NSA doesn't propose to use it (unless they updated CNSA).
The requirement of the NIST competition is that all standardized algorithms resist both classical and quantum attacks. Some have said in this thread that lattice crypto is relatively new, but it actually has quite some history, going back to Ajtai in '97. If you want paranoia, there's always code-theory-based schemes going back to around '75. We don't know what we don't know, which is why there's HQC (code based) waiting on standardization and an additional on-ramp for signatures, plus the expense (size, and sometimes statefulness) of hash-based options. So there's some argument that single-cipher is fine, and we have a whole set of alternative options.
This particular overreaction appears to be yet another in a long-running series of... disagreements with the entire NIST process, including "claims" around the security level of what we then called Kyber, insults to the NIST team's security-level estimation in the form of suggesting they can't do basic arithmetic (given we can't factor anything bigger than 15 on a real quantum computer, and we simply don't have hardware anywhere near breaking RSA, estimates are exactly what these are), and so on.
The metaphor near the beginning of the article is a good summary: standardizing cars with seatbelts, but also cars without seatbelts.
Since ML-KEM is supported by the NSA, it should be assumed to have a NSA-known backdoor that they want to be used as much as possible: IETF standardization is a great opportunity for a long term social engineering operation, much like DES, Clipper, the more recent funny elliptic curve, etc.
AES and RSA had enough public scrutiny to make backdooring imprudent.
The standardization of an obviously weaker option than more established ones is difficult to explain with security reasons, so the default assumption should be that there are insecurity reasons.
There was lots of public scrutiny of Kyber (ML-KEM); DJB made his own submission to the NIST PQC standardization process. A purposely introduced backdoor in Kyber makes absolutely no sense; it was submitted by 11 respected cryptographers, and analyzed by hundreds of people over the course of standardization.
I disagree that ML-KEM is "obviously weaker". In some ways, lattice-based cryptography has stronger hardness foundations than RSA and EC (specifically, worst-case to average-case reductions).
ML-KEM and EC are definitely complementary, and I would probably only deploy hybrids in the near future, but I don't begrudge others who wish to do pure ML-KEM.
I don't think anyone is arguing that Kyber is purposefully backdoored. They are arguing that it (and basically every other lattice based method) has lost a minimum of ~50-100 bits of security in the past decade (and half of the stage 1 algorithms were broken entirely). The reason I can only give ~50-100 bits as the amount Kyber has lost is because attacks are progressing fast enough, and analysis of attacks is complicated enough that no one has actually published a reliable estimate of how strong Kyber is putting together all known attacks.
I have no knowledge of whether Kyber at this point is vulnerable given whatever private cryptanalysis the NSA definitely has done on it, but if Kyber is adopted now, it will definitely be in use 2 decades from now, and it's hard to believe that it won't be vulnerable/broken then (even with only publicly available information).
Source for this loss of security? I'm aware of the MATZOV work but you make it sound like there's a continuous and steady improvement in attacks and that is not my impression.
Lots of algorithms were broken, but so what? Things like Rainbow and SIKE are not at all based on the hardness of solving lattice problems.
> AES and RSA had enough public scrutiny to make backdooring imprudent.
Can you elaborate on the standard of scrutiny that you believe AES and RSA (which were standardized at two very different maturation points in applied cryptography) met that hasn't been applied to the NIST PQ process?
I think it's established that NSA backdoors things. It doesn't mean they backdoor everything. But scrutiny is merited for each new thing NSA endorses and we have to wonder and ask why, and it's enough that if we can't explain why something is a certain way and not another, it's not improbable that we should be cautious of that and call it out. This is how they've operated for decades.
Sure. I'm not American either. I agree, maximum scrutiny is warranted.
The thing is these algorithms have been under discussion for quite some time. If you're not deeply into cryptography it might not appear this way, but these are essentially iterations on many earlier designs and ideas and have been built up cumulatively over time. Overall it doesn't seem there are any major concerns that anyone has identified.
But that's not what we're actually talking about. We're talking about whether creating an IETF RFC for people who want to use solely ML-KEM is acceptable or not - and given the most famous organization proposing to do this is the US Federal Government, it seems bizarre in the extreme to accuse them of backdooring what they actually intend to use for themselves. As I said, though, this does not preclude the rest of the industry having and using hybrid KEMs, which, given what Cloudflare, Google etc. are doing, we likely will.
I will reply directly re: the analogy itself here. It is a poor one at best, because it assumes ML-KEM is akin to "internetting without cryptography". It isn't.
If you want a better analogy, we have a seatbelt for cars right now. It turns out when you steal plutonium and hot-rod your DeLorean into a time machine, these seatbelts don't quite cut the mustard. So we need a new kind of seatbelt. We design one that should be as good for the school run as it is for time travel to 1955.
We think we've done it but even after extensive testing we're not quite sure. So the debate is whether to put on two seatbelts (one traditional one we know works for traditional driving, and one that should be good for both) or if we can just use the new one on the school run and for going to 1955.
We are nowhere near DeLoreans that can travel to 1955 either.
The commentor means Dual_EC, a random number generator. The backdoor was patented under the form of "escrow" here: https://patents.google.com/patent/US8396213B2/en?oq=USOO83.9... - replace "escrow" with "backdoor" everywhere in the text and what was done will fall out.
ML-KEM/ML-DSA were adapted into standards by NIST, but I don't think a single American was involved in the actual initial design.
There might be some weakness the NSA knows about that the rest of us don't, but the fact they're going ahead and recommending these be used for US government systems suggests they're fine with it. Unless they want to risk this vulnerability also being discovered by China/Russia and used to read large portions of USG internet traffic. In their position I would not be confident that if I was aware of a vulnerability it would remain secret, although I am not a US Citizen or even resident, and never have been.
Not that I think this is the case for this algorithm, but backdoors like the one in Dual_EC cannot be used by a third party without what is effectively reversing an asymmetric key pair. Their public parameters are the product of private parameters that the NSA potentially has, but if China or whoever can calculate the private parameters from the public ones it’s broken regardless.
Indeed. Dual_EC was a NOBUS backdoor relying on the ECDLP. That's fair.
My point was more that it looked suspicious at the time (why use a trapdoor in a CSPRNG) and at least the possibility of "escrow" was known, as evidenced by the fact that Vanstone (one of the inventors of elliptic curve cryptography) patented said backdoor around 2006.
This suspiciousness simply doesn't apply to ML-KEM, if one ignores one very specific cryptographer.
The problem with standardizing bad crypto options is that you are then exposed to all sorts of downgrade attack possibilities. There's a reason TLS1.3 removed all of the bad crypto algorithms that it had supported.
There were a number of things going on with TLS 1.3 and paring down the algorithm list.
First, we both wanted to get rid of static RSA and standardize on a DH-style exchange. This also allowed us to move the first encrypted message in 1-RTT mode to the first flight from the server. You'll note that while TLS 1.3 supports KEMs for PQ, they are run in the opposite direction from TLS 1.2, with the client supplying the public key and the server signing the transcript, just as with DH.
Second, TLS 1.3 made a number of changes to the negotiation which necessitated defining new code points, such as separating symmetric algorithm negotiation from asymmetric algorithm negotiation. When those new code points were defined, we just didn't register a lot of the older algorithms. In the specific case of symmetric algorithms, we also only use AEAD-compatible encryption, which restricted the space further. Much of the motivation here was security, but it was also about implementation convenience because implementers didn't want to support a lot of algorithms for TLS 1.3.
It's worth noting that at roughly the same time, TLS relaxed the rules for registering new code points, so that you can register them without an RFC. This allows people to reserve code points for their own usage, but doesn't require the IETF to get involved and (hopefully) reduces pressure on other implementers to actually support those code points.
His concern is that NSA will get vendors to ship code that will prefer ML-KEM, which, not being a hybrid of ECC and PQC, will be highly vulnerable should ML-KEM turn out to be weak, and then there's the concern that it might be backdoored -- that this is a Dual_EC redux.
My professors at Brown were working on QR lattice cryptography well before 1997, although they may not have been publishing much - NTRU was in active development throughout the mid 1990s when I was there. Heating up by 1997 though, for sure.
> In context, this particular issue is that DJB disagrees with the IETF publishing an ML-KEM only standard for key exchange.
No, that's background dressing by now. The bigger issue is how IETF is trying to railroad a standard by violating its own procedures, ignoring all objections, and banning people who oppose it.
They are literally doing the kind of thing we always accuse China of doing. ML-KEM-only is obviously being pushed for political reasons. If you're not willing to let a standard be discussed on its technical merits, why even pretend to have a technology-first industry working group?
Seeing standards being corrupted like this is sickening. At least have the gall to openly claim it should be standardized because it makes things easier for the NSA - and thereby (arguably) increases national security!
The standard will be used, as it was the previous time the IETF allowed the NSA to standardize a known weak algorithm.
Sorry that someone calling out a math error makes the NIST team feel stupid. Instead of dogpiling the person for not stroking their ego, maybe they should correct the error. Last I checked, a quantum computer wasn't needed to handle exponents, a whiteboard will do.
ML-KEM and ML-DSA are not "known weak". The justification for hybrid crypto is that they might be subject to classical cryptanalytic results we aren't aware of. Lattice problems do come with worst-case to average-case hardness reductions, which is more than we can say for RSA and discrete log, though that's not a proof of security either. Hybrids are reasonable as a maximal-safety measure, but come with additional cost.
Obviously the standard will be used. As I said in a sibling comment, the US Government fully intends to do this whether the IETF makes a standard or not.
"The government" already have. That's what CNSA 2.0 means - this is the commercial crypto NSA recommend for the US Government and what will be in FIPS/CAVP/CMVP. ML-KEM-only for most key exchange.
In this context, it is largely irrelevant whether the IETF chooses or not to have a single-standard draft. There's a code point from IANA to do this in TLS already and it will happen for US Government systems.
I'd also add that personally I consider the NIST P-curves to be absolutely fine crypto. Complete formulas exist, so it's possible to have failure-free ops, although point-on-curve needs to be checked. They don't come with the small-order subgroup problem of any Montgomery curve. ECDSA isn't great alone; the hedged variants from RFC 6979 and later drafts should be used.
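For the curious, the RFC 6979 idea mentioned above is: derive the ECDSA nonce deterministically from the private key and message hash via HMAC, so a broken RNG can't leak the key. A compressed sketch of the derivation loop (simplified to the case where the group order is exactly 256 bits; the real RFC also handles bit-length adjustment, and the "hedged" variants mix extra randomness into the input):

```python
import hashlib, hmac

# Compressed sketch of RFC 6979 deterministic nonce derivation for ECDSA,
# assuming a 256-bit group order q. Not a full implementation: the RFC's
# bits2int/int2octets adjustments are omitted since qlen == hlen == 256 here.

def rfc6979_nonce_sketch(q: int, priv: int, h1: bytes) -> int:
    x = priv.to_bytes(32, "big")
    V = b"\x01" * 32
    K = b"\x00" * 32
    # Seed the HMAC-DRBG-style state with the key and message hash.
    K = hmac.new(K, V + b"\x00" + x + h1, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    K = hmac.new(K, V + b"\x01" + x + h1, hashlib.sha256).digest()
    V = hmac.new(K, V, hashlib.sha256).digest()
    # Generate candidates until one lands in [1, q - 1].
    while True:
        V = hmac.new(K, V, hashlib.sha256).digest()
        k = int.from_bytes(V, "big")
        if 1 <= k < q:
            return k
        K = hmac.new(K, V + b"\x00", hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()

# Deterministic: same key and message always give the same nonce.
q = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551  # P-256 order
h = hashlib.sha256(b"sample").digest()
assert rfc6979_nonce_sketch(q, 12345, h) == rfc6979_nonce_sketch(q, 12345, h)
```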
Since ML-KEM is key exchange, X25519 is very widely used in TLS unless you need to turn it off for FIPS. For the certificate side, the actual WebPKI, I'm going to say RSA wins out (still) (I think).
Yes: because it took forever for curves to percolate into the WebPKI (as vs. the TLS handshake itself), and by the time they did (1) we had (esp. for TLS) resolved the "safe curves"-style concerns with the P-curves and (2) we were already looking over the horizon to PQ, and so there has been little impetus to forklift in a competing curve design.
While it's true that six others unequivocally opposed adoption, we don't know how many of them dispute the chairs' claim of consensus. This may be a normal ratio to move forward with adoption; you'd have to look at past IETF proceedings to get a sense for that.
One other factor which comes into play: some people can't stand his communication style. When disagreed with, he tends to dig in his heels and write lengthy responses that question people's motives, like in this blog post and others. Accusing the chairs of corruption may have influenced how seriously his complaint was taken.
> One other factor which comes into play: some people can't stand his communication style. When disagreed with, he tends to dig in his heels and write lengthy responses that question people's motives, like in this blog post and others.
I don't have context on this other than the linked page, but if what he's saying is accurate, it does seem pretty damning and corrupt, no? Why all the lies and distortions otherwise - how does one assume a generous explanation for lies and distortions?
> I don't have context on this other than the linked page, but if what he's saying is accurate, it does seem pretty damning and corrupt, no?
It's complicated. You'd have to know the rules and read the list archives, and make up your own mind. DJB might be overselling it, so you really do have to check it yourself. I think the WG chair had enough cover to make the call they made. What _I_ would have done is do a WG consensus call on the underlying controversial question once the controversy started, separate from the consensus call on adopting the work item. But I'm not the chair.
> One other factor which comes into play: some people can't stand his communication style. When disagreed with, he tends to dig in his heels and write lengthy responses that question people's motives, like in this blog post and others. Accusing the chairs of corruption may have influenced how seriously his complaint was taken.
The IESG though is completely mishandling it. They could discipline him if need be (posting bans for some amount of time) and still hear the appeal. Instead they're sticking their fingers in their ears. DJB might be childish and annoying, but how are they that much better?
> Accusing the chairs of corruption may have influenced how seriously his complaint was taken.
If you alter your official treatment of somebody because they suggested you might be corrupt (in other words, because of personal animus), then you have just confirmed their suggestion.
No, because in this hypothetical you have some authority to discipline that someone. That's what's going on here: DJB is calling out people in the IETF leadership -- people who can dole out posting privilege bans and what not. DJB is most likely going to skirt the line and not go over it, which is what's really tricky here, but the IESG could say they've had enough and discipline him. The trouble is that the underlying controversy does need to be addressed, so the IESG doesn't have a completely free hand -- they can end up with a PR problem on their hands.
> So all someone who is being abusive has to do to force me to be stand there and be abused by them is to call me corrupt?
In this example, rectifying concerns is your job, so yes, you have to do it, even if 1 of the 7 parties who hold the concern is a jerk*. Officials can't dispense with rules and procedure just because their feelings are hurt.
If you are actually corrupt**, it isn't abuse. If you aren't, it still isn't abuse. Even if it is abuse, and you deal with it via sanctions, you must still rectify the substance of the concerns upheld by the 6 other parties.
* 1/7 would be a pretty desirable jerk/total ratio, in my experience
** (and officially behaving differently based on personal animus makes one so)
> That OMB rule, in turn, defines "consensus" as follows: "general agreement, but not necessarily unanimity, and includes a process for attempting to resolve objections by interested parties, as long as all comments have been fairly considered, each objector is advised of the disposition of his or her objection(s) and the reasons why, and the consensus body members are given an opportunity to change their votes after reviewing the comments".
IETF consensus does not require that all participants agree although
this is, of course, preferred. In general, the dominant view of the
working group shall prevail. (However, it must be noted that
"dominance" is not to be determined on the basis of volume or
persistence, but rather a more general sense of agreement.) Consensus
can be determined by a show of hands, humming, or any other means on
which the WG agrees (by rough consensus, of course). Note that 51%
of the working group does not qualify as "rough consensus" and 99% is
better than rough. It is up to the Chair to determine if rough
consensus has been reached.
The goal has never been 100%, but it is not enough to merely have a majority opinion.
And to add to that, the blurb you link notes explicitly that for IETF purposes, "rough consensus" is reached when the Chair determines it has been reached.
Yes, but WG chairs are supposed to help. One way to help would have been to do a consensus call on the underlying controversy. Still, I think the chair is in the clear as far as the rules go.
The standard used in the C and C++ committees is essentially a 2-to-1 majority in favor. I'm not aware of any committee where a 3-to-1 majority is insufficient to get an item to pass.
DJB's argument that this isn't good enough would, by itself, be enough for me to route his objections to /dev/null; it's so tedious and snipey that it sours the quality of his other arguments by mere association. And overall, it gives the impression of someone who is more interested in derailing the entire process than in actually trying to craft a good standard.
Standards - especially security-critical ones - shouldn't be a simple popularity contest.
DJB provided lengthy, well-reasoned, and well-sourced arguments against adoption with his "nay" vote. The "aye" votes didn't make a meaningful counter-argument - in most cases they didn't even bother to make any argument at all and merely expressed support.
This means there are clearly unresolved technical issues left - and not just the regular bikeshedding ones. If he'd been the only "nay" vote it might've been something which could be ignored as a mad hatter - but he wasn't. Six other people agreed with him.
Considering the potential conflict of interest, the most prudent approach would be to route the unsubstantiated aye-votes to /dev/null: if you can't explain your vote, how can we be sure your vote hasn't been bought?
So there's a controversial feature added in C2y, named loops, that has spawned many a vociferous argument. Now, I'm a passionate supporter of this feature, for various reasons that I can bring up (and have brought up in the committee). And I know some people who are against this feature, for various reasons that have been brought up. And at the end of the day, it kind of is a popularity contest, because weighing an argument of "based on my experience, this is going to be confusing for users" versus "based on my experience, this is not going to be confusing for users" is just a popularity contest among the voters on the committee, admittedly weighted by how much you trust the various people.
And then there's a third category of person (really, just one person I think, though). This is responsible for the vast majority of the email traffic on the topic. They're always ready with a detailed point-by-point reply of any replies to their posts. And their argument is... um... they don't like the feature. And they so don't like the feature that they're hanging on to any scintilla of a process argument to make their displeasure derail the entire feature, without really being able to convince anybody else of their dislike (or being able to be convinced to change their mind to any argument).
Now I don't have the cryptographic chops to evaluate DJB's arguments myself. But I also haven't seen any support for his arguments from people I'd trust to be able to evaluate them. And the way he's responding at this point reminds me very much of that third category of people, which is adversely affecting his credibility at this point.
The really big difference between named loops and cryptography is that if one gets approved and is bad, a couple new programmers get confused, while with the other, a significant chunk of the internet becomes vulnerable to hacking.
Just because a feature is standardized does not mean it gets implemented. This is actually even more true for cryptography than it is for programming language specifications.
The question at hand is whether the IETF will publish an Informational (i.e., non-standard) document defining pure-MLKEM in TLS or whether people will have to read the Internet-Draft currently associated with the code point.
> Just because a feature is standardized does not mean it gets implemented.
This makes no sense. If you think it actually has a high chance of remaining unimplemented anyway, then why not just concede the point and take it out? It sure looks like you're not fine with leaving it unimplemented, and you're doing this because you want it implemented, no? It makes no sense to die on that hill if you're gonna tell people it might not exist.
Also, how do you just completely ignore the fact that standards have been weakened in the past precisely to achieve their implementation? This isn't a hypothetical he's worried about, it has literally happened. You're just claiming it's false despite history blatantly showing the opposite because... why? Because trust me bro?
> So there's a controversial feature added in C2y, named loops, that has spawned many a vociferous argument. (...it) is just a popularity contest
Thankfully cryptography design isn't programming language design, what we have here neither is nor should be a debate or contest over popularity, and the costs of being wrong are enormously different between the two, so you can just sleep easy knowing that your experience doesn't extrapolate to the situation at hand.
There was a recent discussion within the C committee over what exactly constituted consensus, owing to a borderline vote that was surprisingly ruled "no consensus" (the crux of the discussion was the difference between a "no" and an "abstain" vote for consensus purposes). The decision was that consensus requires at least ⅔ for favor/(favor + against), and ¾ for (favor + neutral)/(favor + against + neutral). These are the actual rules of the committee now for determining consensus. Similar rules exist for the C++ committee.
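Taking the two thresholds as stated in the comment above (a paraphrase of the rule, not official committee wording), the check is a couple of lines:

```python
# Toy version of the consensus rule as described above: both ratios must
# clear their thresholds. Fractions avoid floating-point edge cases at
# exactly 2/3 and 3/4.
from fractions import Fraction

def has_consensus(favor: int, against: int, neutral: int) -> bool:
    if favor + against == 0:
        return False
    cond1 = Fraction(favor, favor + against) >= Fraction(2, 3)
    cond2 = Fraction(favor + neutral, favor + against + neutral) >= Fraction(3, 4)
    return cond1 and cond2

assert has_consensus(8, 2, 2)       # 8/10 and 10/12 both clear the bars
assert not has_consensus(6, 3, 0)   # 6/9 hits 2/3 exactly, but 6/9 < 3/4
```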
If there is any conflation going on, I am not the one doing it.
“ Working groups make decisions through a "rough consensus" process.
IETF consensus does not require that all participants agree although
this is, of course, preferred. In general, the dominant view of the
working group shall prevail. (However, it must be noted that
"dominance" is not to be determined on the basis of volume or
persistence, but rather a more general sense of agreement.) Consensus
can be determined by a show of hands, humming, or any other means on
which the WG agrees (by rough consensus, of course). Note that 51%
of the working group does not qualify as "rough consensus" and 99% is
better than rough. It is up to the Chair to determine if rough
consensus has been reached.”
It's literally the ethos of the IETF going back to (at least) the late 1980s, when this was the primary contrast between IETF standards process vs. the more staid and rigorous OSI process. It's not usefully up for debate.
You may misunderstand how the IETF works. Participation is open. This means that it is possible that people who want the work to fail for their own reasons rather than technical merit can join and attempt to sabotage work.
So consensus by your definition is rarely possible given the structure of the organization itself.
This is why there are rough consensus rules, and why there are processes to proceed with dissent. That is also why you have the ability to temporarily ban people, as you would have with pretty much any well-run open forum.
It is also important to note that the goal of IETF is also to create interoperable protocol standards. That means the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.
DJB regularly acts like someone who is attempting to sabotage work. It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published. They regularly resort to personal attacks when they don't get their way, and make arguments that are non-technical in nature (e.g. it is NSA sabotage, and chairs are corrupt agents). And this behavior is self-documented in their blog series.
DJB's behavior is why there are rules for how to address dissent. Unfortunately, after decades DJB still does not seem to realize how self-sabotaging this behavior is.
> the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.
In my experience, the average person treats a standard as an acceptable way of doing things. If ML-KEM is a bad thing to do in general, then there should not be a standard for it (because of the aforementioned treatment by the average person).
> It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published.
It's unclear why trying to prevent a bad practice from being standardized is a bad thing. But wait, how do we know whether it's a good or bad practice? Well, we can examine the response to the concerns DJB raised: Whether the responses satisfactorily addressed the concerns, and whether the responses followed the rules and procedures for resolving each of those concerns.
> They regularly resort to personal attacks when they don't get their way
This is certainly unfortunate, but 6 other parties upheld the concerns. DJB is allowed to be a jerk, even allowed to be banned for abusive behavior IMO, however the concerns he initially raised must nonetheless be satisfactorily addressed, even with him banned. Banning somebody is sometimes necessary, but is not an acceptable means of suppressing valid concerns, especially when those concerns are also held by others who are not banned.
> DJB's behavior is why there are rules for how to address dissent.
The issue here seems to be that the bureaucracy might not be following those rules.
The German position:
"The quantum-safe mechanisms recommended in this Technical Guideline are generally not yet trusted to the same extent as the established classical mechanisms, since they have not been as well studied with regard to side-channel resistance and implementation security. To ensure the long-term security of a key agreement, this Technical Guideline therefore recommends the use of a hybrid key agreement mechanism that combines a quantum-safe and a classical mechanism."
The French position (ANSSI), also citing the German position:
"As outlined in the previous position paper [1], ANSSI still strongly emphasizes the necessity of hybridation wherever post-quantum mitigation is needed both in the short and medium term. Indeed, even if the post-quantum algorithms have gained a lot of attention, they are still not mature enough to solely ensure the security"
Standardizing a codepoint for a pure ML-KEM version of TLS is fine. TLS clients always get to choose what ciphersuites they support, and nothing forces you to use it.
He has essentially accused anyone who shares this view of secretly working for the NSA. This is ridiculous.
> standardizing a code point (literally a number) for a pure ML-KEM version of TLS is fine. TLS clients always get to choose what ciphersuites they support, and nothing forces you to use it.
I think the whole point is that some people would be forced to use it due to other standards picking previously-standardized ciphers. He explains and cites examples of this in the past.
> He has essentially accused anyone who shares this view of secretly working for the NSA. This is ridiculous.
He comes with historical and procedural evidence of bad faith. Why is this ridiculous? If you see half the submitted ciphers being broken, and lies and distortions being used to shove the others through, and historical evidence of the NSA using standards as a means to weaken ciphers, why wouldn't you equate that to working for the NSA (or something equally bad)?
> I think the whole point is that some people would be forced to use it due to other standards picking previously-standardized ciphers. He explains and cites examples of this in the past.
If an organization wants to force its clients or servers to use pure ML-KEM, they can already do this using any means they like. The standardization of a TLS ciphersuite is beside the point.
> He comes with historical and procedural evidence of bad faith. Why is this ridiculous?
Yes, the NSA has nefariously influenced standards processes. That does not mean that in each and every standards process (especially the ones that don't go your way) you can accuse everyone who disagrees with you, on the merits, of having some ulterior motive or secret relationship with the NSA. That is exactly what he has done repeatedly, both on his blog and on the list.
> why wouldn't you equate that to working for the NSA (or something equally bad)?
For the simple reason that you should not accuse another person of working for the NSA without real proof of that! The standard of proof for an accusation like that cannot be "you disagree with me".
> The standard of proof for an accusation like that cannot be "you disagree with me".
How is that the standard he's applying, though? Just reading his post, it's clearly "you're blatantly and repeatedly lying, and distorting the facts, and not even addressing my arguments". Surely "you disagree with me" is not an accurate characterization of this?
Let's invert that thinking. Imagine you're the "security area director" referenced. You know that DJB's starting point is assumed bad faith on your part, and that because of that starting point DJB appears bound in all cases to assume that you're a malicious liar.
Given that starting point, you believe that anything other than complete capitulation to DJB is going to be rejected. How are you supposed to negotiate with DJB? Should you try?
Your response focuses entirely on the people involved, rather than the substance of the concerns raised by one party and upheld by 6 others. I don't care if 1 of the 7 parties regularly drives busloads of orphans off a cliff, if the concerns have merit, they must be addressed. The job of the director is to capitulate to truth, no matter who voices it.
Any personal insults one of the parties lobs at others can be addressed separately from the concerns. An official must perform their duties without bias, even concerning somebody who thinks them the worst person in the world, and makes it known.
tl;dr: sometimes the rude, loud, angry constituent at the town hall meeting is right
Sunlight is the best disinfectant. I see one group of people shining it and another shading the first group.
Someone who wants to be seen as acting in good faith (and cryptography standards folks should want this), should be addressing the substance of what he said.
Consensus doesn't mean "majority rule", it requires good-faith resolutions (read: not merely responses like 'nuh-uh') to the voiced concerns.
I understand you are smart and are talking about things above my paygrade, but dang can you format the text on your site so it is easier to read please
uhhh... that's mostly on your browser. The CSS is at the top and pretty skimpy. If it really bothers you, find a styler extension that will override the CSS to render it more pleasingly.
D. J. Bernstein is very well respected and for very good reason. And I don't have firsthand knowledge of the background here, but the blog posts about the incident have been written in a kind of weird voice that makes me feel like I'm reading about the US Government suppressing evidence of Bigfoot or something.
Stuff like this
> Wow, look at that: "due process".... Could it possibly be that the people writing the law were thinking through how standardization processes could be abused?"
is both accusing the other party of bad faith and also heavily using sarcasm, which is a sort of performative bad faith.
Sarcasm can be really effective when used well. But when a post is dripping with sarcasm and accusing others of bad faith it comes off as hiding a weak position behind contempt. I don't know if this is just how DJB writes, or if he's adopting this voice because he thinks it's what the internet wants to see right now.
Personally, I would prefer a style where he says only what he means without irony and expresses his feelings directly. If showing contempt is essential to the piece, then the Linus Torvalds style of explicit theatrical contempt is probably preferable, at least to me.
I understand others may feel differently. The style just gives me crackpot vibes, and that may color reception of the blog posts among people who don't know DJB's reputation.
ECC is well understood and has not been broken over many years.
ML-KEM is new, and hasn't had the same scrutiny as ECC. It's possible that the NSA already knows how to break this, and has chosen not to tell us, and NIST plays the useful idiot.
NIST has played the useful idiot before, when it promoted Dual_EC_DRBG, and the US government paid RSA to make it the default CSPRNG in their crypto libraries for everyone else... but eventually word got out that it's almost certainly an NSA NOBUS special, and everyone started disabling it.
Knowing all that, and planning for a future where quantum computers might defeat ECC -- it's not defeated yet, and nobody knows when in the future that might happen... would you choose:
Option A): encrypt key exchange with ECC and the new unproven algorithm
Option B): throw out ECC and just use the new unproven algorithm
NIST tells you option B is for the best. NIST told you to use Dual_EC_DRBG. W3C adopted EME at the behest of Microsoft, Google and Netflix. Microsoft told you OOXML is a valid international standard you should use instead of OpenDocument (and it just so happens that only one piece of software, made by Microsoft, correctly reads and writes OOXML). So it goes on. Standards organisations are very easily corruptible when their members are allowed to have conflicts of interest and politick and rules-lawyer the organisation into adopting their pet standards.
LWE cryptography is probably better understood now than ECDH was in 2005, when Bernstein published Curve25519, but I think you'll have a hard time finding where Bernstein recommended hybrid RSA/ECDH key exchanges.
> Standards organisations are very easily corruptable when its members are allowed to have conflicts of interest and politick and rules-lawyer the organisation into adopting their pet standards.
FWIW, in my experience on standardization committees, the worst example I've seen of rules-lawyering to drive standards changes is... what DJB's doing right now. There's a couple of other egregious examples I can think of, where people advocating against controversial features go in full rules-lawyer mode to (unsuccessfully) get the feature pulled. I've never actually seen any controversial feature make it into a standard because of rules-lawyering.
What exactly are you calling "rules-lawyering"? Is citing rules and pointing out their blatant violation "rules-lawyering"? If so, can you explain why it is better to avoid this, and what should be done instead?
As an outsider I'd understand it differently: reading rules and pointing out their lack of violation (perhaps in letter), when people feel like you violated it (perhaps in spirit), is what would be rules-lawyering. You're agreeing on what the written rules are, but interpreting actions as following vs. violating them.
That's quite different from an accusation of rules violation followed by silence or distortions or outright lies.
If someone is pointing out that you're violating the rules and you're lying or staying silent or distorting the facts, you simply don't get to dismiss or smear them with a label like "rules-lawyer". For rules to be followed, people have to be able to enforce them. Otherwise it's just theater.
Thank you, that seems to be the whole ball game for me right there. I understood the sarcastic tone as kind of exasperation, but it means something in the context of an extremely concerning attempt to ram through a questionable algorithm that is not well understood and risks a version of an NSA backdoor, and the only real protection would be integrity of standards adoptions processes like this one. You've really got to stick with the substance over the tone to be able to follow the ball here. Everyone was losing their minds over GDPR introducing a potential back door to encrypted chat apps that security agencies could access. This goes to the exact same category of concern, and as you note it has precedent!
So yeah, NSA potentially sneaking a backdoor into an approved standard is pretty outrageous, and worth objecting to in the strongest terms, and when that risk is present it should be subjected to the highest conceivable standard of scrutiny.
In fact, I found this to be the strongest point in the article: there's any number of alternatives that might (1) prove easier to implement, (2) prove more resilient to future attacks, or (3) turn out to be the most efficient.
Just because you want to do something in the future doesn't mean it needs to be ML-KEM specifically, and the idea of throwing out ECC is almost completely inexplicable unless you're the NSA and you can't break it and you're trying to propose a new standard that doesn't include it.
I understand the cryptography and I agree with his analysis of the cryptographic situation.
What I don't understand is why -- assuming he thinks this is important -- he's chosen to write the bits about the standardization process in a way that predisposes readers against his case?
Sure! First, while I'm in no position to judge cryptographic algorithms, the success of ChaCha and 25519 speaks for itself. More prosaically, Patricia/crit-bit trees and his other tools are the right thing, and foresighted. He's not just smart, but also prolific.
However, he’s left a wake of combative controversy his entire career, of the “crackpot” type the parent comment notes, and at some point it’d be worth his asking, AITA? Second, his unconditional support of Jacob Appelbaum has been bonkers. He’s obviously smart and uncompromising but, despite having been in the right on some issues, his scorched earth approach/lack of judgment seems to have turned his paranoia about everyone being out to get him into a self-fulfilling prophecy.
Please ELI5: what is the argument for including the option for the non-hybrid option in this standard? Is it a good argument in your expert opinion?
My pea brain: implementers plus options equals bad, newfangled minus entrenched equals bad, alice only trust option 1 but bob only have option 2 = my pea brain hurt!
It does not preclude other post-quantum algorithms from being described for use with TLS 1.3. It also does not preclude hybrid approaches from being used with TLS 1.3.
It is however a document scoped so it cannot be expanded to include either of those things. Work to define interoperable use of other algorithms, including hybrid algorithms, would be in other documents.
There is no MTI (mandatory-to-implement) once these are documented from the IETF directly, but there could be market and regulatory pressures.
My suspicion is that this is bleed-out from a larger (and uglier) fight in the sister organization, the IRTF. There, the crypto forum research group (CFRG) has been having discussions on KEMs which have gotten significantly more heated.
A person with concern that there may be weaknesses in a post quantum technique may want a hybrid option to provide additional security. They may then be concerned that standardization of non-hybrid options would discourage hybrid usage, where hybrid is not yet standardized and would likely be standardized later (or not at all).
The pressure now with post-quantum is to create key negotiation algorithms that are not vulnerable to a theoretical post-quantum computer attack. This is because of the risk of potentially valuable encrypted traffic being logged now, in the hope that it could later be targeted by a post-quantum computer.
Non-negotiated encryption (e.g. just using a static AES key) is already safe, and signature algorithms can be updated much closer to viable attacks to protect transactional data.
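To make the hybrid rationale above concrete: a hybrid key agreement feeds both shared secrets (classical and post-quantum) through one KDF, so an attacker has to break both components to recover the session key. This is a minimal, hypothetical sketch of the combiner idea, with all-zero placeholder bytes standing in for real X25519 and ML-KEM outputs; it is not the wire format or key schedule of any actual TLS hybrid group.

```python
import hashlib
import hmac

def hybrid_secret(ecdh_ss: bytes, kem_ss: bytes, context: bytes) -> bytes:
    """Derive one session key from both shared secrets (HKDF-style).

    The result stays secret as long as EITHER input does: an attacker
    must break both the classical exchange and the post-quantum KEM.
    """
    # HKDF-Extract: concatenate both secrets as the input keying material
    prk = hmac.new(b"\x00" * 32, ecdh_ss + kem_ss, hashlib.sha256).digest()
    # HKDF-Expand (single block), bound to a protocol context label
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# Placeholder secrets; a real exchange would produce these from
# an X25519 handshake and an ML-KEM decapsulation respectively.
ecdh_ss = bytes(32)
kem_ss = bytes(32)
key = hybrid_secret(ecdh_ss, kem_ss, b"hybrid-demo")
assert len(key) == 32
```

Note the design point: because the KDF mixes both inputs, standardizing only the pure-PQ variant removes exactly this "either one holds" safety margin.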
> It is however a document scoped so it cannot be expanded to include either of those things. Work to define interoperable use of other algorithms, including hybrid algorithms, would be in other documents.
Normal practice in deploying post-quantum cryptography is to deploy ECC+PQ. IETF's TLS working group is standardizing ECC+PQ. But IETF management is also non-consensually ramming a particular NSA-driven document through the IETF process, a "non-hybrid" document that adds just PQ as another TLS option.
The NSA has railroaded bad crypto before [1]. The correct answer is to just ignore it, to say "okay, this is the NSA's preferred backdoored crypto standard, and none of our actual implementations will support it."
It is not acceptable for the government to be forcing bad crypto down our throats, it is not acceptable for the NSA to be poisoning the well this way, but for all I respect DJB, they are "playing the game" and 20 to 7 is consensus.
An employee doesn't act as an official representative of their employer, nor do they speak for the employer in any official capacity. That is what the message says.
The individual also didn't cloak their identity (which would imply some malicious intent); they simply did not use their work email. Nothing wrong with that.
@dang, can we establish a rule that NSA apologists should not be doxxing HN members for the sin of advocating against the NSA's preferred narratives and worldview?
Deliberate personal breaches of privacy against HN members as a response to the contents of their speech like this stifle free discourse to the highest degree possible and should be banned or at least harshly admonished, no?
It's not really "doxing" when the public username they chose to use is their actual name, leading directly to their github profile, and their arguing that you always represent your employer, even if you "cloak" yourself in an alternate name.
Saying that it is a "breach of privacy" when the relevant details are being advertised by the person in question is silly.
what do you expect, when the tagline at the end of the page says "In crypto we trust."?
Honestly, it's a bit sad. There are many great people on that list, but some seem a bit random and some are just straight up cryptobros, which makes the whole thing a joke, unfortunately
Name calling, bullying (forms of systematic harassment) and attempting to instill feelings of social isolation in a target are documented techniques employed by intelligence agencies in both online and offline discourse manipulation / information warfare.
Can you please stop spam-submitting this AI-generated Hall of Fame website? It's against HN guidelines to use the website primarily for promotion and it's clearly what you're doing here.
For context, djb has been doing and saying these things since he was a college student:
Source: https://www.eff.org/cases/bernstein-v-us-dept-justice
djb has earned my massive respect for how consistent he's been in this regard. I love his belligerence towards authoritarian overreach. He, Phil Zimmermann, Richard Stallman, and the rest are owed great respect for their insistence on their principles, which have paid massive dividends to all of us through the freedom and software that has been preserved and made possible through them. I appreciate them immensely, and I think we all owe them a debt of gratitude, because they all paid a heavy price for their advocacy over time.
Massive respect from me as well. Insisting on principles is extremely tiring and demoralizing. Doing the right thing constantly requires some serious sacrifice.
The whole world ignores the principles out of convenience. Principles are thrown out the window at the first sign of adversity. People get rich by corrupting and violating principles. It seems like despite all efforts the corrupting forces win anyway. I have no idea how these people find the willpower to keep fighting literal government agencies.
That's the right pantheon, I think. Bernstein, Zimmerman, Stallman.
That was when he had the legal expertise of the EFF to help him make his case. Later he decided to represent himself in court and failed
> This time, he chose to represent himself, although he had no formal legal training. On October 15, 2003, almost nine years after Bernstein first brought the case, the judge dismissed it....
https://en.wikipedia.org/wiki/Bernstein_v._United_States
> Later he decided to represent himself in court and failed
To be more specific, the government broke out their get out of court free card and claimed they weren't threatening to prosecute him even though they created a rule he was intending to violate. It's a dirty trick the government uses when they're afraid you're going to win so they can get the case dismissed without the court making a ruling.
Amongst the numerous reasons why you _don't_ want to rush into implementing new algorithms: even the _reference implementation_ (and most other early implementations) of Kyber/ML-KEM included multiple timing side-channel vulnerabilities that allowed for key recovery.[1][2]
djb has been consistent in his view for decades that cryptography standards need to consider the foolproofness of implementation, so that a minor implementation mistake (specific to the timing of certain instructions on certain CPU architectures, or to certain compiler optimisations, etc.) doesn't break the implementation's security. See for example the many problems of the NIST P-224/P-256/P-384 ECC curves, which djb has been instrumental in fixing through widespread deployment of X25519.[3][4][5]
[1] https://cryspen.com/post/ml-kem-implementation/
[2] https://kyberslash.cr.yp.to/faq.html / https://kyberslash.cr.yp.to/libraries.html
[3] https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplic...
[4] https://safecurves.cr.yp.to/ladder.html
[5] https://cr.yp.to/newelliptic/nistecc-20160106.pdf
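To make the timing side-channel point concrete: a naive byte comparison exits at the first mismatch, so its running time leaks how many leading bytes matched, which an attacker can exploit byte by byte. A constant-time comparison examines every byte regardless. This is a toy illustration of the general class of bug (the KyberSlash issue linked above involved secret-dependent division timing, not comparison, but the principle is the same):

```python
import hmac

def naive_eq(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: running time depends on the index of the
    # first mismatch, which can leak secrets to an attacker timing it.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def const_time_eq(a: bytes, b: bytes) -> bool:
    # Accumulate differences over ALL bytes; the amount of work done
    # is independent of where (or whether) the inputs differ.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0

tag = b"\x13" * 16
assert const_time_eq(tag, b"\x13" * 16)
assert not const_time_eq(tag, b"\x13" * 15 + b"\x00")
# In real Python code, use the stdlib helper instead of rolling your own:
assert hmac.compare_digest(tag, b"\x13" * 16)
```

Python itself gives no hard constant-time guarantees (the interpreter may still vary); the sketch only shows the structure that constant-time C or assembly implementations follow.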
Given the emphasis on reliability of implementations of an algorithm, it's ironic that the Curve25519-based Ed25519 digital signature standard was itself specified and originally implemented in such a way as to lead to implementation divergence on what a valid and an invalid signature actually are. See https://hdevalence.ca/blog/2020-10-04-its-25519am/
Not a criticism, if anything it reinforces DJB's point. But it makes clear that ease of (proper) implementation also needs to cover things like proper canonicalization of relevant security variables and that supporting multiple modes of operation doesn't actually lead to different answers of security questions meant to give the same answer.
This logic does not follow. Your argument seems to be "the implementation has security bugs, so let's not ratify the standard." That's not how standards work though. Ensuring an implementation is secure is part of the certification process. As long as the scheme itself is shown to be provably secure, that is sufficient to ratify a standard.
If anything, standardization encourages more investment, which means more eyeballs to identify and plug those holes.
No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one. This is a property of the algorithm being specified, not just an individual implementation, and we’ve seen it play out over and over again in cryptography.
I’d actually like to see more (non-cryptographic) standards take this into account. Many web standards are so complicated and/or ill-specified that trillion dollar market cap companies have trouble implementing them correctly/consistently. Standards shouldn’t just be thrown over the wall and have any problems blamed on the implementations.
> No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one.
This argument is without merit. ML-KEM/Kyber has already been ratified as the PQC KEM standard by NIST. What you are proposing is that the NIST process was fundamentally flawed. This is a claim that requires serious evidence as backup.
You can't be serious. "The standard was adopted, therefore it must be able to be implemented in any or all systems?"
NIST can adopt and recommend whatever algorithms they might like using whatever criteria they decide they want to use. However, while the amount of expertise and experience on display by NIST in identifying algorithms that are secure or potentially useful is impressive, there is no amount of expertise or experience that guarantees any given implementation is always feasible.
Indeed, this is precisely why elliptic curve algorithms are often not available, in spite of a NIST standard being adopted like 8+ years ago!
I'm having trouble understanding your argument. Elliptic curve algorithms have been the mainstream standard for key establishment for something like 15 years now. The NIST standards for the P-curves are much, much older than 8 years.
> You can't be serious. "The standard was adopted, therefore it must be able to be implemented in any or all systems?"
If we did that we'd all be using Dual_EC...
DJB has specific (technical and non-conspiratorial) bones to pick with the algorithm. He’s as much an expert in cryptographic implementation flaws and misuse resistance as anybody at NIST. Doesn’t mean he’s right all the time, but blowing him off as if he’s just some crackpot isn’t even correctly appealing to authority.
I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.
There are like 3 cryptographers in all of NIST. NIST was a referee in the process. The bones he's picking are with the entire field of cryptography, not just NIST people.
> The bones he's picking are with the entire field of cryptography
Isn't that how you advance a field, though?
It has been a couple hundred years, but we used to think that disease was primarily caused by "bad humors".
Fields can and do advance. I'm not versed enough to say whether his criticisms are legitimate, but this doesn't sound like a problem, but part of the process, to me (and his article is documenting how some bureaucrats/illegitimate interests are blocking that advancement).
The "area administrator" being unable or unwilling to do basic math is both worrying and undermines the idea that the standards being produced are worth anything, which is bad for the entire field.
If the standards are chock full of nonsense, then how does that reflect upon the field?
The standards people have problems with weren't run as open processes the way AES, SHA3, and MLKEM were. As for the rest of it: I don't know what to tell you. Sounds like a compelling argument if you think Daniel Bernstein is literally the most competent living cryptographer, or, alternately, if Bernstein and Schneier are the only cryptographers one can name.
In a lot of ways this seems, from the outside, to be similar to "Planck's principle"; e.g. physics advances one funeral at a time.
In exactly what sense? Who is the "old guard" you're thinking of here? Peter Schwabe got his doctorate 16 years after Bernstein. Peikert got his 10 years after.
They may not be involved with this process, but ITL has way more than 3 cryptographers.
> I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.
Currently he argues that NSA is likely to be attacking the standards process to do some unspecified nefarious thing in PQ algorithms, and he's appealing to our memories of Dual_EC. That's not tinfoil hat stuff! It's a serious possibility that has happened before (Dual_EC). True, no one knows for a fact that NSA backdoored Dual_EC, but it's very very likely that they did -- why bother with such a slow DRBG if not for this benefit of being able to recover session keys?
NSA wrote Dual EC. A team of (mostly European) academic cryptographers wrote the CRYSTALS constructions. Moreover, the NOBUS mechanism in Dual EC is obvious, and it's not at all clear where you'd do anything like that in Kyber, which goes out of its way not to have the "weird constants" problem that the P-curves (which practitioners generally trust) ended up with.
It took a couple of years to get the suspicion about Dual_EC out.
No it didn't. The problem with Dual EC was published in a rump session at the next CRYPTO after NIST published it. The widespread assumption was that nobody was actually using it, which was enabled by the fact that the important "target" implementations (most importantly RSA BSAFE, which I think a lot of people also assumed wasn't in common use, but I may just be saying that because it's what I myself assumed) were deeply closed-source.
None of this applies to anything else besides Dual EC.
That aside: I don't know what this has to do with anything I just wrote. Did you mean to respond to some other comment?
It's more like "the standard makes it easier to create insecure implementations." Our standards shouldn't just be "sufficient" they should be "robust."
this is like saying just use C and don't write any memory bugs. possible, but life could be a lot better if it weren't so easy to do so.
Great, you’ve just convinced every C programmer to use a hand rolled AES implementation on their next embedded device. Only slightly joking.
If the standard had a clear algorithm -> source code mapping, then couldn't everyone copy from there?
AES is actually a good example of why this doesn’t work in cryptography. Implementing AES without a timing side channel in C is pretty much impossible. Each architecture requires specific and subtle constructions to ensure it executes in constant time. Newer algorithms are designed to not have this problem (DJB was actually the one who popularized this approach).
Reconcile this claim with, for instance, aes_ct64 in Thomas Pornin's BearSSL?
I'm familiar with Bernstein's argument about AES, but AES is also the most successful cryptography standard ever created.
Okay, I should've said: implementing AES in C without a timing side channel, performantly enough to power TLS for a browser running on a shitty ARMv7 phone, is basically impossible. Also, if only Thomas Pornin can correctly implement your cipher without assembly, that's not a selling point.
I'm not contesting AES's success or saying it doesn't deserve it. I'm not even saying we should move off it (especially now that even most mobile processors have AES instructions). But nobody would put something like a S-Box in a cipher created today.
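The S-box problem mentioned above comes down to this: a table lookup reads a memory address that depends on a secret byte, so cache timing can reveal the secret, while a constant-time alternative performs the same memory accesses no matter what the secret is. A toy 4-entry table illustrates both shapes (this is not the real 256-entry AES S-box, and Python is only standing in for the C the discussion is about):

```python
# Hypothetical 4-entry substitution table standing in for an S-box.
SBOX = [0x6, 0x3, 0x0, 0x5]

def lookup_sbox(secret: int) -> int:
    # Address read depends on `secret`: in table-based AES in C, this
    # secret-dependent access pattern is what cache-timing attacks see.
    return SBOX[secret]

def const_time_sbox(secret: int) -> int:
    # Constant-time select: scan the WHOLE table and mask out every
    # entry except the matching one. Every call touches the same
    # memory in the same order, independent of `secret`.
    out = 0
    for i, v in enumerate(SBOX):
        x = i ^ secret
        mask = ((x - 1) >> 8) & 0xF  # all-ones nibble iff i == secret
        out |= v & mask
    return out

assert all(const_time_sbox(s) == lookup_sbox(s) for s in range(4))
```

The constant-time version does 4x the work for a 4-entry table; scaled to AES's 256-entry S-box across many rounds, that overhead is why fast *and* constant-time table-free AES in portable C is so hard, and why newer designs avoid secret-indexed tables entirely.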
If your point is "reference implementations have never been sufficient for real-world implementations", I agree, strongly, but of course that cuts painfully across several of Bernstein's own arguments about the importance of issues in PQ reference implementations.
Part of this, though, is that it's also kind of an incoherent standard to hold reference implementations to. Science proceeds long after the standard is written! The best/safest possible implementation is bound to change.
I don't think it's incoherent. On one extreme you have web standards, where it's now commonplace to not finalize standards until they're implemented in multiple major browser engines. Some web-adjacent IETF standards also work like this (WebTransport over HTTP3 is one I've been implementing recently).
I'm not saying cryptography should necessarily work this way, but it's not an unworkable policy to have multiple projects implement a draft before settling on a standard.
Look at the timeline for performant non-leaking implementations of Weierstrass curves. How long are you going to wait for these things to settle? I feel like there's also a hindsight bias that slips into a lot of this stuff.
Certainly, if you're going to do standards adoption by open competition the way NIST has done with AES, SHA3, and MLKEM, you're not going to be able to factor multiple major implementations into your process.
This isn’t black and white. There’s a medium between:
* Wait for 10 years of cryptanalysis (specific to the final algorithm) before using anything, which probably will be relatively meager because nobody is using it
* Expect the standardization process itself to produce a blessed artifact, to be set on fire as a false god if it turns out to be imperfect (or more realistically, just cause everybody a bunch of pain for 20 years)
Nothing would stop NIST from adding a post-competition phase where Google, Microsoft, Amazon, whoever the hell is maintaining OpenSSL, and maybe Mozilla implement the algorithm in their respective libraries and kick the tires. Maybe it’s pointless and everything we’d expect to get from cryptographers observing that process for a few months to a year has already been suitably covered, and DJB is just being prissy. I don’t know enough about cryptanalysis to know.
But I do feel very confident that many of the IETF standards I’ve been on the receiving end of could have used a non-reference implementation phase to find practical, you-could-technically-do-it-right-but-you-won’t issues that showed up within the first 6 months of people trying to use the damn thing.
I don't know what you mean by "kick the tires".
If by that you mean "perfect the implementation", we already get that! The MLKEM in Go is not the MLKEM in OpenSSL is not the MLKEM in AWS-LC.
If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable. It's the publication of the standard itself that is the forcing function for high-quality competing implementations. In particular, part of arriving at high-quality implementations is running them in production, which is something you can't do without solving the coordination problem of getting everyone onto the same standard.
Here it's important to note that nothing we've learned since Kyber was chosen has materially weakened the construction itself. We've now had 3 years of sustained (urgent, even) implementation and deployment, after almost 30 years of cryptologic work on lattices. What would have been different had Kyber been a speculative or proposed standard, other than it getting far less attention and deployment?
("Prissy" is not the word I personally would choose here.)
I mean have a bunch of competent teams that (importantly) didn’t design the algorithm read the final draft and write their versions of it. Then they and others can perform practical analysis on each (empirically look for timing side channels on x86 and ARM, fuzz them, etc.).
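That kind of empirical timing analysis has a classic target. A minimal Python sketch (toy functions of my own, not from any standard): an early-exit byte comparison leaks how many leading bytes match through its running time, while the standard library's `hmac.compare_digest` examines every byte regardless.

```python
import hmac

def naive_eq(a: bytes, b: bytes) -> bool:
    # Early-exit comparison: returns as soon as a byte differs, so
    # running time depends on the length of the matching prefix.
    # This is the classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def ct_eq(a: bytes, b: bytes) -> bool:
    # Constant-time comparison from the Python standard library.
    return hmac.compare_digest(a, b)
```

Both return the same answers; only the naive one tells a remote attacker how close their guess is.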
> If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable.
The forcing function can potentially be: this final draft is the heir apparent. If nothing serious comes up in the next 6 months, it will be summarily finalized.
It’s possible this won’t get any of the implementers off their ass on a reasonable timeframe - this happens with web standards all the time. It’s also possible that this is very unlikely to uncover anything not already uncovered. Like I said, I’m not totally convinced that in this specific field it makes sense. But your arguments against it are fully general against this kind of phased process at all, and I think it has empirically improved recent W3C and IETF standards (including QUIC and HTTP2/3) a lot compared to the previous method.
Again: that has now happened. What have we learned from it that we needed to know 3 years ago when NIST chose Kyber? That's an important question, because this is a whole giant thread about Bernstein's allegation that the IETF is in the pocket of the NSA (see "part 4" of this series for that charming claim).
Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers. All of them had the know-how and incentive to write implementations of their constructions and, if it was going to showcase some glaring problem, of their competitors'. What makes you think that we lacked implementation understanding during this process?
I don’t think IETF is in the pocket of the NSA. I really wish the US government hadn’t hassled Bernstein so much when he was a grad student; it would make his stuff far more focused on technical details, and readable without rolling your eyes.
> Further, the people involved in the NIST PQ key establishment competition are a murderers' row of serious cryptographers and cryptography engineers.
That’s actually my point! When you’re trying to figure out if your standard is difficult to implement correctly, that everyone who worked on the reference implementations is a genius who understands it perfectly is a disadvantage for finding certain problems. It’s classic expert blindness, like you see with C++ where the people working on the standard understand the language so completely they can’t even conceive of what will happen when it’s in the hands of someone that doesn’t sleep with the C++ standard under their pillow.
Like, would anyone who developed ECC algorithms have forgotten to check for invalid curve points when writing an implementation? Meanwhile among mere mortals that’s happened over and over again.
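The check that keeps getting forgotten is cheap. A toy Python sketch, with made-up curve parameters far too small for real use, of validating that a received point actually satisfies the curve equation before doing anything with it:

```python
# Toy short-Weierstrass curve y^2 = x^3 + a*x + b over GF(p).
# These parameters are illustrative only, not a real curve.
P, A, B = 97, 2, 3

def on_curve(x: int, y: int) -> bool:
    """Reject points that don't satisfy the curve equation.

    Skipping this check is the 'invalid curve point' mistake:
    an attacker can hand you a point on a different, weaker curve.
    """
    return (y * y - (x * x * x + A * x + B)) % P == 0
```

On this toy curve, (3, 6) is valid (36 = 27 + 6 + 3 mod 97) while (3, 7) is not, and a correct implementation must refuse to operate on the latter.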
I don't think this has much of anything to do with Bernstein's qualms with the US government. For all his concerns about NIST process, he himself had his name on a NIST PQC candidate. Moreover, he's gotten into similar spats elsewhere. This isn't even the first time he's gotten into a heap of shit at IETF/IRTF. This springs to mind:
https://mailarchive.ietf.org/arch/msg/cfrg/qqrtZnjV1oTBHtvZ1...
This wasn't about NSA or the USG! Note the date. Of course, had this happened in 2025, we'd all know about it, because he'd have blogged it.
But I want to circle back to the point I just made: you've said that we'd all be better off if there was a burning-in period for implementors before standards were ratified. We've definitely burnt in MLKEM now! What would we have done differently knowing what we now know?
> What would we have done differently knowing what we now know?
With the MLKEM standard? Probably nothing; Bernstein would have rambled less in these blog posts had he been aware of something specifically wrong with one of the implementations. My key point here was that establishing an implementation phase during standardization is not an incoherent or categorically unjustifiable idea, whether or not it makes sense for massive cryptographic development efforts. I will note that something not getting caught by a potential process change is a data point that it’s not needed, but it isn’t dispositive.
I do think there is some baby in the Bernstein bathwater that is this blog post series, though. His strongest specific point in these posts was that the TLS working group adding a cipher suite with an MLKEM-only key exchange this early is an own goal (but that’s of course not the fault of the MLKEM standard itself). That’s an obvious footgun, and I’ll miss the days when you could enable all the standard TLS 1.3 cipher suites and not stress about it. The arguments to keep it in are legitimately not good, but in the area director’s defense, we’re all prone to motivated reasoning when talking to someone who will inevitably accuse us of colluding with the NSA to bring about 1984.
In what way is adding an MLKEM-only code point an "own goal"? Exercise for the reader: find the place where Bernstein proposed we have hybrid RSA/ECDH ciphersuites.
Yeah except there are certified versions of AES written in C. Which makes your point what exactly?
> See for example the many problems of NIST P-224/P-256/P-384 ECC curves
What are those problems, exactly? The whitepaper from djb only makes vague claims about the NSA being a malicious actor, and after ~20 years no backdoor or intentional weakness has been reliably demonstrated.
As I understand it, a big issue is that they are really hard to implement correctly. This means that backdoors and weaknesses might not exist in the theoretical algorithm, but still be common in real-world implementations.
On the other hand, Curve25519 is designed from the ground up to be hard to implement incorrectly: there are very few footguns, gotchas, and edge cases. This means that real-world implementations are likely to be correct implementations of the theoretical algorithm.
This means that, even if P-224/P-256/P-384 are on paper exactly as secure as Curve25519, they could still end up being significantly weaker in practice.
I tried to defend a similar argument in a private forum today and basically got my ass handed to me. In practice, not only would modern P-curve implementations not be "significantly weaker" than Curve25519 (we've had good complete addition formulas for them for a long time, along with widespread hardware support), but Curve25519 causes as many (probably more) problems than it solves --- cofactor problems being more common in modern practice than point validation mistakes.
In TLS, Curve25519 vs. the P-curves are a total non-issue, because TLS isn't generally deployed anymore in ways that even admit point validation vulnerabilities (even if implementations still had them). That bit, I already knew, but I'd assumed ad-hoc non-TLS implementations, by random people who don't know what point validation is, might tip the scales. Turns out guess not.
Again, by way of bona fides: I woke up this morning in your camp, regarding Curve25519. But that won't be the camp I go to bed in.
> As I understand it, a big issue is that they are really hard to implement correctly.
Any reference for the "really hard" part? That is a very interesting subject and I can't imagine it's independent of the environment and development stack being used.
I'd welcome any standard that's "really hard to implement correctly" as a testbed for improving our compilers and other tools.
I posted above, but most of the 'really hard' bits come from the unreasonable complexity of actual computing vs the more manageable complexity of computing-with-idealized-software.
That is, an algorithm and compiler and tool safety smoke test, and improvement thereby, is good. But you also need to think hard about what happens when someone induces an RF pulse with specific timing targeted at a certain part of a circuit board, say, when you're trying to harden these algorithmic implementations. Lots of these things compiler architects typically say are "not my problem".
It would be wise for people to remember that it’s worth doing basic sanity checks before making claims like "no backdoors from the NSA". Strong encryption has historically been restricted, so we had things like DES, 3DES, and Crypto AG. In the modern internet age, Juniper had a bad time with this one: https://www.wired.com/2013/09/nsa-backdoor/.
Usually it’s really hard to distinguish intent, and so it’s possible to develop plausible deniability with committees. Their track record isn’t perfect.
With WPA3, cryptographers warned about the known pitfall of standardizing a timing-sensitive PAKE, and Harkin got it through anyway. Since it was a standard, the Wi-Fi committee gladly selected it, and it then resulted in dragonbleed among other bugs. The hash2curve techniques have since patched that.
It's "Dragonblood", not "Dragonbleed". I don't like Harkin's PAKE either, but I'm not sure what fundamental attribute of it enables the downgrade attack you're talking about.
When you're talking about the P-curves, I'm curious how you get your "sanity check" argument past things like the Koblitz/Menezes "Riddle Wrapped In An Enigma" paper. What part of their arguments did you not find persuasive?
Yes, Dragonblood. I’m not speaking of the downgrade but of the timing side channels, which were called out very loudly and then ignored during standardization. And then the PAKE showed up in WPA3 of all places; that was the key issue, and it was extended further in a Brainpool-curve-specific attack on the proposed initial mitigation. It’s a good example of error by committee. I don’t address that article and don’t know why the NSA advised migration that early.
The Riddle paper I’ve not read in a long time, if ever, though I don’t understand the question. As Scott Aaronson recently blogged, it’s difficult to predict human progress with technology, and it’s possible we’ll see Shor’s algorithm running publicly sooner than consensus expects. It could be that in 2035 the NSA’s call 20 years prior looks like it was the right one, in that ECC is insecure, but that wouldn’t make the replacements secure by default, of course.
Aren't the timing attacks you're talking about specific to oddball parameters for the handshake? If you're doing Dragonfly with Brainpool curves you're specifically not doing what NSA wants you to do. Brainpool curves are literally a rejection of NIST's curves.
If you haven't read the Enigma paper, you should do so before confidently stating that nobody's done "sanity checks" on the P-curves. Its authors are approximately as authoritative on the subject as Aaronson is on his. I am specifically not talking about the question of NSA's recommendation on ECC vs. PQ; I'm talking about the integrity of the P-curve selection, in particular. You need to read the paper to see the argument I'm making; it's not in the abstract.
Ah, now I see what the question was; it seemed like a non sequitur. I misunderstood the comment by foxboron to be about concerns over any backdoors, not the specific claim that P-256 is backdoored. I hold no such view; surely Bitcoin is good evidence of that.
Instead I was stating that weaknesses in cryptography have been historically put there with some NSA involvement at times.
For Dragonblood: the Brainpool curves do have a worse leak, but as stated in the Dragonblood paper, “we believe that these sidechannels are inherent to Dragonfly”. The first attack submission did hit P-256 setups before the minimal iteration count was increased, and afterward was more applicable to same-system cache/microarchitectural bugs. These attacks were more generally mitigated correctly when the H2C deterministic algorithms rolled out. There were many bad choices selected, of course, that made the PAKE more exploitable: putting the client MAC in the precommits, having that downgrade, including the Brainpool curves. But to my point on committees: cryptographers warned strongly during standardization that this could be an attack, and no course correction was taken.
Can I ask you to respond to the "sanity check" argument you made upthread? What is the "sanity checking" you're implying wasn't done on the P-curves?
I wasn’t talking about the P-curves; I was talking about the NSA having acted as a malicious actor in general, so I misunderstood their comment.
The NSA changed the S-boxes in DES, and this made people suspicious that they had planted a back door. But when differential cryptanalysis was publicly discovered, people realized that the NSA's changes to the S-boxes had made them more secure against it.
That was 50 years ago. And since then we have an NSA employee co-authoring the paper which led to Heartbleed, the backdoor in Dual EC DRBG which has been successfully exploited by adversaries, and documentation from Snowden which confirms NSA compromise of standards setting committees.
> And since then we have an NSA employee co-authoring the paper which led to Heartbleed
I'm confused as to what "the paper which led to Heartbleed" means. A paper proposing/describing the heartbeat extension? A paper proposing its implementation in OpenSSL? A paper describing the bug/exploit? Something else?
And in addition to that, is there any connection between that author and the people who actually wrote the relevant (buggy) OpenSSL code? If the people who wrote the bug were entirely unrelated to the people authoring the paper then it's not clear to me why any blame should be placed on the paper authors.
> I'm confused
The original paper which proposed the OpenSSL Heartbeat extension was written by two people, one worked for NSA and one was a student at the time who went on to work for BND, the "German NSA". The paper authors also wrote the extension.
I know this because when it happened, I wanted to know who was responsible for making me patch all my servers, so I dug through the OpenSSL patch stream to find the authors.
What does that paper say about implementing the TLS Heartbeat extension with a trivial uninitialized buffer bug?
About as much as Jia Tan said about implementing the XZ backdoor via an inconspicuous typo in a CMake file. What's your point?
I'm asking what the paper has to do with the vulnerability. Can you answer that? Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".
> Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".
This statement makes it clear to me that you don't understand a thing I've said, and that you don't have the necessary background knowledge of Heartbleed, the XZ backdoor, or concepts such as plausible deniability to engage in useful conversation about any of them. Else you would not be so confused.
Please do some reading on all three. And if you want to have a conversation afterwards, feel free to make a comment which demonstrates a deeper understanding of the issues at hand.
Sorry, you're not going to be able to bluster your way through this. What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?
> What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?
That's a very easy question to answer: the implementation the authors provided alongside it.
If you expect authors of exploits to clearly explain them to you, you are not just ignorant of the details of backdoors like the one in XZ (CMake was never backdoored, a "typo" in a CMake file bootstrapped the exploit in XZ builds), but are naive to an implausible degree about the activities of exploit authors.
Even the University of Minnesota did not publicly state "we're going to backdoor the Linux kernel" before they attempted to do so: https://cyberir.mit.edu/site/how-university-got-itself-banne...
If you tell someone you're going to build an exploit and how, the obvious response will be "no, we won't allow you to." So no exploit author does that.
Which "paper" are you referring to?
Think the above poster is full of bologna? It's less painful for everyone involved, and the readers, to just say that and get that out of the way rather than trying to surgically draw it out over half a dozen comments. I see you do this often enough that I think you must get some pleasure out of making people squirm. We know you're smart already!
I think their argument is verkakte but I literally don't know what they're talking about or who the NSA stooge they're referring to is, and it's not so much that I want to make them squirm so much as that I want to draw the full argument out.
I think your complaint isn't with me, but with people who hedge when confronted with direct questions. I think if you look at the thread, you'll see I wasn't exactly playing cards close to my chest.
I don't make a habit of googling things for people when they could do it just as quickly themselves. There is only one paper proposing the OpenSSL heartbeat feature. So I have not been unclear, nor can there be any confusion about which it is. Perhaps we'll learn someday what tptacek expects to find or not to find in it, but he'll have to spend 30 seconds with Google. As I did.
Informing one's self is a pretty low bar for having a productive conversation. When one party can't be arsed to take the initiative to do so, that usually signals the end of useful interaction.
A comment like "I googled and found this paper... it says X... that means Y to me." would feel much less like someone just looking for an argument, because it involves effort and stating a position.
If he has a point, he's free to make it. Everything he needs is at his fingertips, and there's nothing I could do to stop him, nor would I want to. I asked for a point first thing. All I've gotten in response is combative rhetoric which is neither interesting nor informative.
Ah, that clears up the confusion. Thank you for taking the time to explain!
The NSA also wanted a 48-bit key, which was sufficiently weak to brute-force with their resources. The industry and IBM initially wanted 64-bit. IBM compromised and gave us 56-bit.
Yes, NSA made DES stronger. After first making it weaker. IBM had wanted a 128-bit key, then they decided to knock that down to 64-bit (probably for reasons related to cost, this being the 70s), and NSA brought that down to 56-bit because hey! we need parity bits (we didn't).
They're vulnerable to "High-S" malleable signatures, while ed25519 isn't. No one is claiming they're backdoored (well, some people somewhere probably are), but they do have failure modes that ed25519 doesn't which is the GP's point.
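The High-S issue is easy to illustrate: for a curve group of order n, ECDSA accepts both (r, s) and (r, n - s) as valid signatures, so any protocol that treats signature bytes as unique (as early Bitcoin did) is exposed unless it canonicalizes. A Python sketch with a toy order, not a real curve's n:

```python
def normalize_s(s: int, n: int) -> int:
    """Map an ECDSA s-value to its canonical 'low-S' form.

    (r, s) and (r, n - s) both verify, so enforcing s <= n // 2
    leaves exactly one accepted encoding per signature and closes
    the malleability hole. n here is a toy group order.
    """
    return n - s if s > n // 2 else s
```

Ed25519 avoids the issue by specifying a single canonical encoding and rejecting everything else.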
In the NIST curve arena, I think DJB's main concern is engineering implementation - from an online slide deck he published:
As to whether or not the NSA is a strategic adversary to some people using ECC curves, I think that's right in the mandate of the org, no? If a current standard is super hard to implement, and theoretically strong at the same time, that has to make someone happy on a red team. At least, it would make me happy, if I were on such a red team.

He does a motte-and-bailey thing with the P-curves. I don't know if it's intentional or not.
Curve25519 was a materially important engineering advance over the state of the art in P-curve implementations when it was introduced. There was a window of time within which Curve25519 foreclosed on Internet-exploitable vulnerabilities (and probably a somewhat longer period of time where it foreclosed on some embedded vulnerabilities). That window of time has pretty much closed now, but it was real at the time.
But he also does a handwavy thing about how the P-curves could have been backdoored. No practicing cryptography engineer I'm aware of takes these arguments seriously, and to buy them you have to take Bernstein's side over people like Neal Koblitz.
The P-curve backdoor argument is unserious, but the P-curve implementation stuff has enough of a solid kernel to it that he can keep both arguments alive.
Quite true, but the Dual_EC backdoor claim is serious. DJB's point that we should design curves with "nothing up my sleeve" is a nice touch.
See, this gets you into trouble, because Bernstein has actually a pretty batshit take on nothing-up-my-sleeve constructions (see the B4D455 paper) --- and that argument also hurts his position on Kyber, which does NUMS stuff!
Link?
I tried a couple searches and I forget which calculator-speak version of "BADASS" Bernstein actually used, but the concept of the paper† is that all the NUMS-style curves are suspect because you can make combinations of mathematical constants say whatever you want them to say (in combination), and so instead you should pick curve constants based purely on engineering excellence, which nobody could ever disagree about or (looks around the room) start huge conspiracy theories over.
† as I remember it
Well, DJB also focused on "nothing up my sleeve" design methodology for curves. The implication was that any curves that were not designed in such a way might have something nefarious going on.
Dual_EC's backdoor can't be proven, but it's almost certainly real.
In context, this particular issue is that DJB disagrees with the IETF publishing an ML-KEM only standard for key exchange.
Here's the thing: the existence of a standard does not mean we need to use it for most of the internet. There will also be hybrid standards, and most of the rest of us can simply ignore the existence of ML-KEM-only. However, NSA's CNSA 2.0 (commercial cryptography you can sell to the US Federal Government) does not envisage using hybrid schemes, so there's some sense in having a standard for that purpose. Better developed through the IETF than forced on browser vendors directly by the US, I think. There was rough consensus to do this. Should we have a single-cipher kex standard for HQC too? I'd argue yes, and no, the NSA doesn't propose to use it (unless they've updated CNSA).
The requirement of the NIST competition is that all standardized algorithms are resistant to both classical and quantum attacks. Some have said in this thread that lattice crypto is relatively new, but it actually has quite some history, going back to Ajtai in '97. If you want paranoia, there are always the code-based schemes going back to around '75. We don't know what we don't know, which is why there's HQC (code-based) waiting on standardization, plus an additional on-ramp for signatures, plus the expense (size and sometimes statefulness) of hash-based options. So there's some argument that single-cipher is fine, and we have a whole set of alternative options.
This particular overreaction appears to be yet another in a long-running series of... disagreements with the entire NIST process, including "claims" around the security level of what we then called Kyber, insults to the NIST team's security-level estimation in the form of suggesting they can't do basic arithmetic (given that we can't factor anything bigger than 15 on a real quantum computer and we simply don't have hardware anywhere near breaking RSA, estimates are exactly what these are), and so on.
The metaphor near the beginning of the article is a good summary: standardizing cars with seatbelts, but also cars without seatbelts.
Since ML-KEM is supported by the NSA, it should be assumed to have a NSA-known backdoor that they want to be used as much as possible: IETF standardization is a great opportunity for a long term social engineering operation, much like DES, Clipper, the more recent funny elliptic curve, etc.
> Since ML-KEM is supported by the NSA, it should be assumed to have a NSA-known backdoor that they want to be used as much as possible
AES and RSA are also supported by the NSA, but that doesn’t mean they were backdoored.
AES and RSA had enough public scrutiny to make backdooring them imprudent.
The standardization of an obviously weaker option than more established ones is difficult to explain with security reasons, so the default assumption should be that there are insecurity reasons.
There was lots of public scrutiny of Kyber (ML-KEM); DJB made his own submission to the NIST PQC standardization process. A purposely introduced backdoor in Kyber makes absolutely no sense; it was submitted by 11 respected cryptographers, and analyzed by hundreds of people over the course of standardization.
I disagree that ML-KEM is "obviously weaker". In some ways, lattice-based cryptography has stronger hardness foundations than RSA and EC (specifically, worst-case to average-case reductions).
ML-KEM and EC are definitely complementary, and I would probably only deploy hybrids in the near future, but I don't begrudge others who wish to do pure ML-KEM.
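For anyone unfamiliar with what a hybrid buys you: the usual construction binds both shared secrets together before key derivation, so an attacker must break both components. A hypothetical Python sketch (SHA-256 over the concatenation is my illustrative combiner, not the exact TLS key-schedule construction):

```python
import hashlib

def hybrid_secret(ss_ecdh: bytes, ss_mlkem: bytes) -> bytes:
    """Derive one session secret from an ECDH shared secret and an
    ML-KEM shared secret.

    If EITHER component remains unbroken, the output stays
    unpredictable to an attacker who recovered the other one.
    Toy combiner for illustration only.
    """
    return hashlib.sha256(ss_ecdh + ss_mlkem).digest()
```

An ML-KEM-only suite is this with the first argument removed: all the eggs in the lattice basket.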
I don't think anyone is arguing that Kyber is purposefully backdoored. They are arguing that it (and basically every other lattice based method) has lost a minimum of ~50-100 bits of security in the past decade (and half of the stage 1 algorithms were broken entirely). The reason I can only give ~50-100 bits as the amount Kyber has lost is because attacks are progressing fast enough, and analysis of attacks is complicated enough that no one has actually published a reliable estimate of how strong Kyber is putting together all known attacks.
I have no knowledge of whether Kyber at this point is vulnerable given whatever private cryptanalysis the NSA definitely has done on it, but if Kyber is adopted now, it will definitely be in use 2 decades from now, and it's hard to believe that it won't be vulnerable/broken then (even with only publicly available information).
Source for this loss of security? I'm aware of the MATZOV work but you make it sound like there's a continuous and steady improvement in attacks and that is not my impression.
Lots of algorithms were broken, but so what? Things like Rainbow and SIKE are not at all based on the hardness of solving lattice problems.
> AES and RSA had enough public scrutiny to make backdooring backdoors imprudent.
Can you elaborate on the standard of scrutiny that you believe AES and RSA (which were standardized at two very different maturation points in applied cryptography) met that hasn't been applied to the NIST PQ process?
SHA-2 was designed by the NSA. Nobody is saying there is a backdoor.
I think it's established that the NSA backdoors things. That doesn't mean they backdoor everything. But scrutiny is merited for each new thing the NSA endorses: we have to ask why, and if we can't explain why something is one way and not another, we should be cautious and call it out. This is how they've operated for decades.
Sure. I'm not American either. I agree, maximum scrutiny is warranted.
The thing is these algorithms have been under discussion for quite some time. If you're not deeply into cryptography it might not appear this way, but these are essentially iterations on many earlier designs and ideas and have been built up cumulatively over time. Overall it doesn't seem there are any major concerns that anyone has identified.
But that's not what we're actually talking about. We're talking about whether creating an IETF RFC for people who want to use solely ML-KEM is acceptable or not - and given that the most famous organization proposing to do this is the US Federal Government, it seems bizarre in the extreme to accuse them of backdooring what they actually intend to use themselves. As I said, though, this does not preclude the rest of the industry having and using hybrid KEMs, which, given what Cloudflare, Google, etc. are doing, we likely will.
One does not place backdoors in hash algorithms. It's much more interesting to place backdoors in key agreement protocols.
How would NSA have "placed" a backdoor in Kyber? NSA didn't write Kyber.
I will reply directly regarding the analogy itself here. It is a poor one at best, because it assumes ML-KEM is akin to "internetting without cryptography". It isn't.
If you want a better analogy, we have a seatbelt for cars right now. It turns out when you steal plutonium and hot-rod your DeLorean into a time machine, these seatbelts don't quite cut the mustard. So we need a new kind of seatbelt. We design one that should be as good for the school run as it is for time travel to 1955.
We think we've done it but even after extensive testing we're not quite sure. So the debate is whether to put on two seatbelts (one traditional one we know works for traditional driving, and one that should be good for both) or if we can just use the new one on the school run and for going to 1955.
We are nowhere near DeLoreans that can travel to 1955 either.
> the more recent funny elliptic curve
Can you elaborate please?
The commentor means Dual_EC, a random number generator. The backdoor was patented under the form of "escrow" here: https://patents.google.com/patent/US8396213B2/en?oq=USOO83.9... - replace "escrow" with "backdoor" everywhere in the text and what was done will fall out.
ML-KEM/ML-DSA were adapted into standards by NIST, but I don't think a single American was involved in the actual initial design.
There might be some weakness the NSA knows about that the rest of us don't, but the fact that they're going ahead and recommending these be used for US government systems suggests they're fine with it. Unless they want to risk this vulnerability also being discovered by China/Russia and used to read large portions of USG internet traffic. In their position I would not be confident that a vulnerability I was aware of would remain secret; though I am not a US citizen or even a resident, and never have been.
Not that I think this is the case for this algorithm, but backdoors like the one in Dual_EC cannot be used by a third party without what is effectively reversing an asymmetric key pair. Their public parameters are the product of private parameters that the NSA potentially has, but if China or whoever can calculate the private parameters from the public ones it’s broken regardless.
Indeed. Dual_EC was a NOBUS backdoor relying on the ECDLP. That's fair.
My point was more that it looked suspicious at the time (why use a trapdoor in a CSPRNG) and at least the possibility of "escrow" was known, as evidenced by the fact that Vanstone (one of the inventors of elliptic curve cryptography) patented said backdoor around 2006.
This suspiciousness simply doesn't apply to ML-KEM, if one ignores one very specific cryptographer.
Not op, but they probably meant https://en.wikipedia.org/wiki/Dual_EC_DRBG
The problem with standardizing bad crypto options is that you are then exposed to all sorts of downgrade attack possibilities. There's a reason TLS1.3 removed all of the bad crypto algorithms that it had supported.
There were a number of things going on with TLS 1.3 and paring down the algorithm list.
First, we both wanted to get rid of static RSA and standardize on a DH-style exchange. This also allowed us to move the first encrypted message in 1-RTT mode to the first flight from the server. You'll note that while TLS 1.3 supports KEMs for PQ, they are run in the opposite direction from TLS 1.2, with the client supplying the public key and the server signing the transcript, just as with DH.
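The direction described above can be sketched with a toy DH-based KEM (using the well-known RFC 3526 group as stand-in parameters; this is not ML-KEM, just an illustration of who sends what in the TLS 1.3 flow):

```python
import secrets

# Toy finite-field KEM to illustrate the TLS 1.3 message flow: the
# CLIENT generates the keypair, the SERVER encapsulates against it.
# ElGamal-style construction for illustration only -- not ML-KEM.

# RFC 3526 2048-bit MODP group (well-known public parameters)
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E08"
    "8A67CC74020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B"
    "302B0A6DF25F14374FE1356D6D51C245E485B576625E7EC6F44C42E9"
    "A637ED6B0BFF5CB6F406B7EDEE386BFB5A899FA5AE9F24117C4B1FE6"
    "49286651ECE45B3DC2007CB8A163BF0598DA48361C55D39A69163FA8"
    "FD24CF5F83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA18217C32905E462E36CE3BE39E772C"
    "180E86039B2783A2EC07A28FB5C55DF06F4C52C9DE2BCBF695581718"
    "3995497CEA956AE515D2261898FA051015728E5A8AACAA68FFFFFFFF"
    "FFFFFFFF", 16)
G = 2

def keygen():                        # client side, first flight
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)         # (private key, public key)

def encapsulate(pk):                 # server side
    r = secrets.randbelow(P - 2) + 1
    return pow(G, r, P), pow(pk, r, P)   # (ciphertext, shared secret)

def decapsulate(sk, ct):             # client side
    return pow(ct, sk, P)

# Client sends pk; server encapsulates and signs the transcript
# (signing omitted here); client decapsulates to the same secret.
sk, pk = keygen()
ct, server_secret = encapsulate(pk)
assert decapsulate(sk, ct) == server_secret
```

Note the asymmetry matches the text: the party that speaks first (the client) supplies the public key, so the server's first flight can already carry encrypted data.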
Second, TLS 1.3 made a number of changes to the negotiation which necessitated defining new code points, such as separating symmetric algorithm negotiation from asymmetric algorithm negotiation. When those new code points were defined, we just didn't register a lot of the older algorithms. In the specific case of symmetric algorithms, we also only use AEAD-compatible encryption, which restricted the space further. Much of the motivation here was security, but it was also about implementation convenience because implementers didn't want to support a lot of algorithms for TLS 1.3.
It's worth noting that at roughly the same time, TLS relaxed the rules for registering new code points, so that you can register them without an RFC. This allows people to reserve code points for their own usage, but doesn't require the IETF to get involved and (hopefully) reduces pressure on other implementers to actually support those code points.
TLS 1.3 did do that, but it also fixed the ciphersuite negotiation mechanism (and got formally verified). So downgrade attacks are a moot point now.
You're not accurately representing DJB's concern.
His concern is that NSA will get vendors to ship code that will prefer ML-KEM, which, not being a hybrid of ECC and PQC, will be highly vulnerable should ML-KEM turn out to be weak, and then there's the concern that it might be backdoored -- that this is a Dual_EC redux.
My professors at Brown were working on lattice cryptography well before 1997, although they may not have been publishing much - NTRU was in active development throughout the mid 1990s when I was there. Heating up by 1997 though, for sure.
I guess that would have been Silverman etc? That's true there was NTRU before reductions were shown. Good call.
> In context, this particular issue is that DJB disagrees with the IETF publishing an ML-KEM only standard for key exchange.
No, that's background dressing by now. The bigger issue is how IETF is trying to railroad a standard by violating its own procedures, ignoring all objections, and banning people who oppose it.
They are literally doing the kind of thing we always accuse China of doing. ML-KEM-only is obviously being pushed for political reasons. If you're not willing to let a standard be discussed on its technical merits, why even pretend to have a technology-first industry working group?
Seeing standards being corrupted like this is sickening. At least have the gall to openly claim it should be standardized because it makes things easier for the NSA - and thereby (arguably) increases national security!
The standard will be used, as it was the previous time the IETF allowed the NSA to standardize a known weak algorithm.
Sorry that someone calling out a math error makes the NIST team feel stupid. Instead of dogpiling the person for not stroking their ego, maybe they should correct the error. Last I checked, a quantum computer wasn't needed to handle exponents, a whiteboard will do.
ML-KEM and ML-DSA are not "known weak". The justification for hybrid crypto is that there might be classical cryptanalytic results we aren't aware of, although the underlying lattice problems come with worst-case-to-average-case hardness reductions, which is more than we can say for RSA and discrete log. Hybridization is reasonable as a maximal-safety measure, but it comes with additional cost.
Obviously the standard will be used. As I said in a sibling comment, the US Government fully intends to do this whether the IETF makes a standard or not.
Worth noting this concern over as-yet-undiscovered cryptanalytic techniques also applies to Bernstein's preferred SNTRUP.
Except when the government starts then mandating a specific algorithm.
And yes. This has happened. There’s a reason there’s only the NIST P Curves in the WebPKI world.
"The government" already has. That's what CNSA 2.0 means - this is the commercial crypto the NSA recommends for the US Government and what will be in FIPS/CAVP/CMVP. ML-KEM-only for most key exchange.
In this context, it is largely irrelevant whether the IETF chooses or not to have a single-standard draft. There's a code point from IANA to do this in TLS already and it will happen for US Government systems.
I'd also add that personally I consider NIST P-curves to be absolutely fine crypto. Complete formulas exist, so it's possible to have failure-free operations, although point-on-curve membership still needs to be checked. They don't come with the small-order-subgroup problem of a Montgomery curve. ECDSA isn't great on its own; the hedged variants from RFC 6979 and later drafts should be used.
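The point-on-curve check mentioned above is cheap - a sketch for P-256 using the published NIST/SEC2 parameters:

```python
# Point-on-curve check for NIST P-256: y^2 = x^3 - 3x + b (mod p).
# Constants are the published NIST/SEC2 values for the curve and
# its standard generator point.
p = 0xffffffff00000001000000000000000000000000ffffffffffffffffffffffff
b = 0x5ac635d8aa3a93e7b3ebbd55769886bc651d06b0cc53b0f63bce3c3e27d2604b

def on_curve(x, y):
    """True iff (x, y) satisfies the P-256 curve equation."""
    return (y * y - (x * x * x - 3 * x + b)) % p == 0

# The standard generator must pass; a perturbed point must fail.
gx = 0x6b17d1f2e12c4247f8bce6e563a440f277037d812deb33a0f4a13945d898c296
gy = 0x4fe342e2fe1a7f9b8ee7eb4a7c0f9e162bce33576b315ececbb6406837bf51f5
assert on_curve(gx, gy)
assert not on_curve(gx, gy + 1)
```

Skipping this check is exactly how invalid-curve attacks get traction, which is why "complete formulas plus membership check" is the usual framing.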
Since ML-KEM is key exchange, X25519 is very widely used in TLS unless you need to turn it off for FIPS. For the certificate side, the actual WebPKI, I'm going to say RSA wins out (still) (I think).
Yes: because it took forever for curves to percolate into the WebPKI (as vs. the TLS handshake itself), and by the time they did (1) we had (esp. for TLS) resolved the "safe curves"-style concerns with the P-curves and (2) we were already looking over the horizon to PQ, and so there has been little impetus to forklift in a competing curve design.
While it's true that six others unequivocally opposed adoption, we don't know how many of those dispute the chairs' claim of consensus. This may be a normal ratio for moving forward with adoption; you'd have to look at past IETF proceedings to get a sense for that.
One other factor which comes into play: some people can't stand his communication style. When disagreed with, he tends to dig in his heels and write lengthy responses that question people's motives, like in this blog post and others. Accusing the chairs of corruption may have influenced how seriously his complaint was taken.
> One other factor which comes into play: some people can't stand his communication style. When disagreed with, he tends to dig in his heels and write lengthy responses that question people's motives, like in this blog post and others.
I don't have context on this other than the linked page, but if what he's saying is accurate, it does seem pretty damning and corrupt, no? Why all the lies and distortions otherwise - how does one assume a generous explanation for lies and distortions?
> I don't have context on this other than the linked page, but if what he's saying is accurate, it does seem pretty damning and corrupt, no?
It's complicated. You'd have to know the rules and read the list archives, and make up your own mind. DJB might be overselling it, so you really do have to check it yourself. I think the WG chair had enough cover to make the call they made. What _I_ would have done is do a WG consensus call on the underlying controversial question once the controversy started, separate from the consensus call on adopting the work item. But I'm not the chair.
To which "underlying controversial question" are you referring?
> One other factor which comes into play: some people can't stand his communication style. When disagreed with, he tends to dig in his heels and write lengthy responses that question people's motives, like in this blog post and others. Accusing the chairs of corruption may have influenced how seriously his complaint was taken.
The IESG though is completely mishandling it. They could discipline him if need be (posting bans for some amount of time) and still hear the appeal. Instead they're sticking their fingers in their ears. DJB might be childish and annoying, but how are they that much better?
> Accusing the chairs of corruption may have influenced how seriously his complaint was taken.
If you alter your official treatment of somebody because they suggested you might be corrupt (in other words, because of personal animus), then you have just confirmed their suggestion.
So all someone who is being abusive has to do to force me to stand there and be abused by them is to call me corrupt?
No, because in this hypothetical you have some authority to discipline that someone. That's what's going on here: DJB is calling out people in the IETF leadership -- people who can dole out posting-privilege bans and the like. DJB is most likely going to skirt the line and not go over it, which is what's really tricky here, but the IESG could say they've had enough and discipline him. The trouble is that the underlying controversy does need to be addressed, so the IESG doesn't have a completely free hand -- they can end up with a PR problem on their hands.
> So all someone who is being abusive has to do to force me to be stand there and be abused by them is to call me corrupt?
In this example, rectifying concerns is your job, so yes, you have to do it, even if 1 of the 7 parties who hold the concern is a jerk*. Officials can't dispense with rules and procedure just because their feelings are hurt.
If you are actually corrupt**, it isn't abuse. If you aren't, it still isn't abuse. Even if it is abuse, and you deal with it via sanctions, you must still rectify the substance of the concerns upheld by 6 other parties.
* 1/7 would be a pretty desirable jerk/total ratio, in my experience
** (and officially behaving differently based on personal animus makes one so)
20+2 (conditional support) versus 7.
22/29 = 76% in some form of "yea"
That feels like "rough consensus"
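For the arithmetic, treating the 2 conditional supporters as either "yea" or "nay" bounds the support either way (a quick sketch; the grouping of "conditional" is my assumption):

```python
# Counts as given upthread: 20 support, 2 conditional, 7 opposed.
support, conditional, opposed = 20, 2, 7
total = support + conditional + opposed  # 29 expressed positions

high = (support + conditional) / total   # conditionals counted as yea: 22/29
low = support / total                    # conditionals counted as nay: 20/29
print(f"{high:.0%} to {low:.0%} in favor")
```

Either way the support lands in the 69-76% band, which is the range the "rough consensus" question is really about.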
> That OMB rule, in turn, defines "consensus" as follows: "general agreement, but not necessarily unanimity, and includes a process for attempting to resolve objections by interested parties, as long as all comments have been fairly considered, each objector is advised of the disposition of his or her objection(s) and the reasons why, and the consensus body members are given an opportunity to change their votes after reviewing the comments".
From https://blog.cr.yp.to/20251004-weakened.html#standards, linked in TFA.
To add to this: rough consensus is defined in BCP 25 / RFC 2418 (https://datatracker.ietf.org/doc/html/rfc2418#section-3.3):
The goal has never been 100%, but it is not enough to merely have a majority opinion. And to add to that, the blurb you link notes explicitly that for IETF purposes, "rough consensus" is reached when the Chair determines it has been reached.
Yes, but WG chairs are supposed to help. One way to help would have been to do a consensus call on the underlying controversy. Still, I think the chair is in the clear as far as the rules go.
The standard used in the C and C++ committees is essentially a 2-to-1 majority in favor. I'm not aware of any committee where a 3-to-1 majority is insufficient to get an item to pass.
DJB's argument that this isn't good enough would, by itself, be enough for me to route his objections to /dev/null; it's so tedious and snipey that it sours the quality of his other arguments by mere association. And overall, it gives the impression of someone who is more interested in derailing the entire process than in actually trying to craft a good standard.
Standards - especially security-critical ones - shouldn't be a simple popularity contest.
DJB provided lengthy, well-reasoned, and well-sourced arguments against adoption with his "nay" vote. The "aye" votes didn't make a meaningful counter-argument - in most cases they didn't even bother to make any argument at all and merely expressed support.
This means there are clearly unresolved technical issues left - and not just the regular bikeshedding ones. If he'd been the only "nay" vote it might've been something which could be ignored as a mad hatter - but he wasn't. Six other people agreed with him.
Considering the potential conflict of interest, the most prudent approach would be to route the unsubstantiated aye-votes to /dev/null: if you can't explain your vote, how can we be sure your vote hasn't been bought?
So there's a controversial feature added in C2y, named loops, that has spawned many a vociferous argument. Now, I'm a passionate supporter of this feature, for various reasons that I can bring up (and have brought up in the committee). And I know some people who are against this feature, for various reasons that have been brought up. And at the end of the day, it kind of is a popularity contest, because weighing an argument of "based on my experience, this is going to be confusing for users" against "based on my experience, this is not going to be confusing for users" is just a popularity contest among the voters on the committee, admittedly weighted by how much you trust the various people.
And then there's a third category of person (really, just one person, I think). This person is responsible for the vast majority of the email traffic on the topic. They're always ready with a detailed point-by-point reply to any responses to their posts. And their argument is... um... they don't like the feature. And they dislike it so much that they're hanging on to any scintilla of a process argument to derail the entire feature, without really being able to convince anybody else of their dislike (or being open to any argument that might change their mind).
Now I don't have the cryptographic chops to evaluate DJB's arguments myself. But I also haven't seen any support for his arguments from people I'd trust to be able to evaluate them. And the way he's responding at this point reminds me very much of that third category of people, which is adversely affecting his credibility at this point.
The really big difference between named loops and cryptography is that if one gets approved and is bad, a couple new programmers get confused, while with the other, a significant chunk of the internet becomes vulnerable to hacking.
Just because a feature is standardized does not mean it gets implemented. This is actually even more true for cryptography than it is for programming language specifications.
The situation is actually somewhat the opposite here: the code points for these algorithms have already been assigned (go to https://www.iana.org/assignments/tls-parameters/tls-paramete... and search for draft-connolly-tls-mlkem-key-agreement-05) and Chrome, at least, has it implemented behind a flag (https://mailarchive.ietf.org/arch/msg/tls/_fCHTJifii3ycIJIDw...).
The question at hand is whether the IETF will publish an Informational (i.e., non-standard) document defining pure-MLKEM in TLS or whether people will have to read the Internet-Draft currently associated with the code point.
> Just because a feature is standardized does not mean it gets implemented.
This makes no sense. If you think it actually had a high chance of remaining unimplemented anyway, then why not just concede the point and take it out? It sure looks like you're not fine with leaving it unimplemented, and you're doing this because you want it implemented, no? It makes no sense to die on that hill if you're going to tell people it might not exist.
Also, how do you just completely ignore the fact that standards have been weakened in the past precisely to achieve their implementation? This isn't a hypothetical he's worried about, it has literally happened. You're just claiming it's false despite history blatantly showing the opposite because... why? Because trust me bro?
> So there's a controversial feature added in C2y, named loops, that has spawned many a vociferous argument. (...it) is just a popularity contest
Thankfully cryptography design isn't programming language design, what we have here neither is nor should be a debate or contest over popularity, and the costs of being wrong are enormously different between the two, so you can just sleep easy knowing that your experience doesn't extrapolate to the situation at hand.
I marvel at your ability to consider cryptographers as inhuman machines, despite evidence to the contrary.
You are turning “consensus” into “majority” and those are not the same.
There was a recent discussion within the C committee over what exactly constituted consensus, owing to a borderline vote that was surprisingly ruled "no consensus" (the crux of the discussion was the difference between a "no" and an "abstain" vote for consensus purposes). The decision was that it had to be ⅔ favor/(favor + against), and ¾ (favor + neutral)/(favor + against + neutral). These are now the actual rules of the committee for determining consensus. Similar rules exist for the C++ committee.
If there is any conflation going on, I am not the one doing it.
We're talking about a landmine in a crypto spec and you're bikeshedding about consensus ratios.
We should talk about the NSA designed landmine.
Have you implemented MLKEM? How well do you understand it?
A consensus is 100%. A rough consensus should be near 100%. 2/3 is a super majority. That's a very different standard.
See https://news.ycombinator.com/item?id=46035639
A consensus isn’t always 100%
A consensus is by definition 100%. You can redefine the word in a specialist context, but that is what the word means.
Within the IETF it’s not 100%.
See section 3.3 of one of their RFCs for proof.
https://www.rfc-editor.org/rfc/rfc2418.html#section-3.3
“Working groups make decisions through a "rough consensus" process. IETF consensus does not require that all participants agree although this is, of course, preferred. In general, the dominant view of the working group shall prevail. (However, it must be noted that "dominance" is not to be determined on the basis of volume or persistence, but rather a more general sense of agreement.) Consensus can be determined by a show of hands, humming, or any other means on which the WG agrees (by rough consensus, of course). Note that 51% of the working group does not qualify as "rough consensus" and 99% is better than rough. It is up to the Chair to determine if rough consensus has been reached.”
That's "rough consensus" as opposed to "consensus".
And that’s what the IETF uses but djb doesn’t like.
It's literally the ethos of the IETF going back to (at least) the late 1980s, when this was the primary contrast between IETF standards process vs. the more staid and rigorous OSI process. It's not usefully up for debate.
He doesn't like it at least in part for lacking a concrete definition. Attempting to pin down what it means or ought to mean is therefore useful.
It's always a mistake to look at numbers for consensus, without also considering how strongly the positions are held.
consensus is not a synonym for majority, supermajority, or for any fraction of the whole, unless the fraction is 100%
You may misunderstand how the IETF works. Participation is open. This means that it is possible that people who want the work to fail for their own reasons rather than technical merit can join and attempt to sabotage work.
So consensus by your definition is rarely possible given the structure of the organization itself.
This is why there are rough consensus rules, and why there are processes to proceed with dissent. That is also why you have the ability to temporarily ban people, as you would have with pretty much any well-run open forum.
It is also important to note that the goal of IETF is also to create interoperable protocol standards. That means the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.
DJB regularly acts like someone who is attempting to sabotage work. It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published. They regularly resort to personal attacks when they don't get their way, and make arguments that are non-technical in nature (e.g. it is NSA sabotage, and chairs are corrupt agents). And this behavior is self-documented in their blog series.
DJB's behavior is why there are rules for how to address dissent. Unfortunately, after decades DJB still does not seem to realize how self-sabotaging this behavior is.
> the work in question is a document describing how to apply ML-KEM to TLS in an interoperable way. It is not a discussion of whether ML-KEM is a potentially risky algorithm.
In my experience, the average person treats a standard as an acceptable way of doing things. If ML-KEM is a bad thing to do in general, then there should not be a standard for it (because of the aforementioned treatment by the average person).
> It is clear here that they _are_ attempting to prevent a description of how to use ML-KEM with TLS 1.3 from being published.
It's unclear why trying to prevent a bad practice from being standardized is a bad thing. But wait, how do we know whether it's a good or bad practice? Well, we can examine the response to the concerns DJB raised: Whether the responses satisfactorily addressed the concerns, and whether the responses followed the rules and procedures for resolving each of those concerns.
> They regularly resort to personal attacks when they don't get their way
This is certainly unfortunate, but 6 other parties upheld the concerns. DJB is allowed to be a jerk, even allowed to be banned for abusive behavior IMO, however the concerns he initially raised must nonetheless be satisfactorily addressed, even with him banned. Banning somebody is sometimes necessary, but is not an acceptable means of suppressing valid concerns, especially when those concerns are also held by others who are not banned.
> DJB's behavior is why there are rules for how to address dissent.
The issue here seems to be that the bureaucracy might not be following those rules.
France and Germany propose hybrid schemes as well:
The german position:
https://www.bsi.bund.de/SharedDocs/Downloads/EN/BSI/Publicat...
"The quantum-safe mechanisms recommended in this Technical Guideline are generally not yet trusted to the same extent as the established classical mechanisms, since they have not been as well studied with regard to side-channel resistance and implementation security. To ensure the long-term security of a key agreement, this Technical Guideline therefore recommends the use of a hybrid key agreement mechanism that combines a quantum-safe and a classical mechanism."
The french position, also quoting the German position:
https://cyber.gouv.fr/sites/default/files/document/follow_up...
"As outlined in the previous position paper [1], ANSSI still strongly emphasizes the necessity of hybridation wherever post-quantum mitigation is needed both in the short and medium term. Indeed, even if the post-quantum algorithms have gained a lot of attention, they are still not mature enough to solely ensure the security"
Standardizing a codepoint for a pure ML-KEM version of TLS is fine. TLS clients always get to choose what ciphersuites they support, and nothing forces you to use it.
He has essentially accused anyone who shares this view of secretly working for the NSA. This is ridiculous.
You can see him do this on the mailing list: https://mailarchive.ietf.org/arch/browse/tls/?q=djb
> standardizing a code point (literally a number) for a pure ML-KEM version of TLS is fine. TLS clients always get to choose what ciphersuites they support, and nothing forces you to use it.
I think the whole point is that some people would be forced to use it due to other standards picking previously-standardized ciphers. He explains and cites examples of this in the past.
> He has essentially accused anyone who shares this view of secretly working for the NSA. This is ridiculous.
He comes with historical and procedural evidence of bad faith. Why is this ridiculous? If you see half the submitted ciphers being broken, and lies and distortions being used to shove the others through, and historical evidence of the NSA using standards as a means to weaken ciphers, why wouldn't you equate that to working for the NSA (or something equally bad)?
> I think the whole point is that some people would be forced to use it due to other standards picking previously-standardized ciphers. He explains and cites examples of this in the past.
If an organization wants to force its clients or servers to use pure ML-KEM, they can already do this using any means they like. The standardization of a TLS ciphersuite is beside the point.
> He comes with historical and procedural evidence of bad faith. Why is this ridiculous?
Yes, the NSA has nefariously influenced standards processes. That does not mean that in each and every standards process (especially the ones that don't go your way) you can accuse everyone who disagrees with you, on the merits, of having some ulterior motive or secret relationship with the NSA. That is exactly what he has done repeatedly, both on his blog and on the list.
> why wouldn't you equate that to working for the NSA (or something equally bad)?
For the simple reason that you should not accuse another person of working for the NSA without real proof of that! The standard of proof for an accusation like that cannot be "you disagree with me".
> The standard of proof for an accusation like that cannot be "you disagree with me".
How is that the standard he's applying, though? Just reading his post, it's clearly "you're blatantly and repeatedly lying, and distorting the facts, and not even addressing my arguments". Surely "you disagree with me" is not an accurate characterization of this?
Let's invert that thinking. Imagine you're the "security area director" referenced. You know that DJB's starting point is assumed bad faith on your part, and that because of that starting point DJB appears bound in all cases to assume that you're a malicious liar.
Given that starting point, you believe that anything other than complete capitulation to DJB is going to be rejected. How are you supposed to negotiate with DJB? Should you try?
To start with, you could not lie about what the results were.
Your response focuses entirely on the people involved, rather than the substance of the concerns raised by one party and upheld by 6 others. I don't care if 1 of the 7 parties regularly drives busloads of orphans off a cliff, if the concerns have merit, they must be addressed. The job of the director is to capitulate to truth, no matter who voices it.
Any personal insults one of the parties lobs at others can be addressed separately from the concerns. An official must perform their duties without bias, even concerning somebody who thinks them the worst person in the world, and makes it known.
tl;dr: sometimes the rude, loud, angry constituent at the town hall meeting is right
Sunlight is the best disinfectant. I see one group of people shining it and another shading the first group.
Someone who wants to be seen as acting in good faith (and cryptography standards folks should want this), should be addressing the substance of what he said.
Consensus doesn't mean "majority rule", it requires good-faith resolutions (read: not merely responses like 'nuh-uh') to the voiced concerns.
I understand you are smart and are talking about things above my paygrade, but dang can you format the text on your site so it is easier to read please
uhhh... that's mostly on your browser. The css is at the top and pretty skimpy. If it really bothers you, find a styler extension that will override the CSS to render it more pleasingly.
Perhaps related: from 2022, on his (FOIA?) lawsuit against the government:
* https://news.ycombinator.com/item?id=32360533
From 2023, "Debunking NIST's calculation of the Kyber-512 security level":
* https://news.ycombinator.com/item?id=37756656
D. J. Bernstein is very well respected and for very good reason. And I don't have firsthand knowledge of the background here, but the blog posts about the incident have been written in a kind of weird voice that make me feel like I'm reading about the US Government suppressing evidence of Bigfoot or something.
Stuff like this
> Wow, look at that: "due process".... Could it possibly be that the people writing the law were thinking through how standardization processes could be abused?"
is both accusing the other party of bad faith and also heavily using sarcasm, which is a sort of performative bad faith.
Sarcasm can be really effective when used well. But when a post is dripping with sarcasm and accusing others of bad faith it comes off as hiding a weak position behind contempt. I don't know if this is just how DJB writes, or if he's adopting this voice because he thinks it's what the internet wants to see right now.
Personally, I would prefer a style where he says only what he means without irony and expresses his feelings directly. If showing contempt is essential to the piece, then the Linus Torvalds style of explicit theatrical contempt is probably preferable, at least to me.
I understand others may feel differently. The style just gives me crackpot vibes, and that may color reception of the blog posts among people who don't know DJB's reputation.
> I don't know if this is just how DJB writes,
He's caustic, but often right.
It's very simple.
ECC is well understood and has not been broken over many years.
ML-KEM is new, and hasn't had the same scrutiny as ECC. It's possible that the NSA already knows how to break this, and has chosen not to tell us, and NIST plays the useful idiot.
NIST has played the useful idiot before, when it promoted Dual_EC_DRBG, and the US government paid RSA to make it the default CSPRNG in their crypto libraries for everyone else... but eventually word got out that it's almost certainly an NSA NOBUS special, and everyone started disabling it.
Knowing all that, and planning for a future where quantum computers might defeat ECC -- it's not defeated yet, and nobody knows when in the future that might happen... would you choose:
Option A): encrypt key exchange with ECC and the new unproven algorithm
Option B): throw out ECC and just use the new unproven algorithm
NIST tells you option B is for the best. NIST told you to use Dual_EC_DRBG. W3C adopted EME at the behest of Microsoft, Google and Netflix. Microsoft told you OOXML is a valid international standard you should use instead of OpenDocument (and it just so happens that only one piece of software, made by Microsoft, correctly reads and writes OOXML). So it goes on. Standards organisations are very easily corruptible when their members are allowed to have conflicts of interest and politick and rules-lawyer the organisation into adopting their pet standards.
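The Dual_EC-style escrow discussed in this thread can be illustrated with a toy discrete-log analogue (made-up parameters, insecure by construction; the real thing uses two elliptic-curve points P = e·Q rather than two group elements):

```python
# Toy Dual_EC-style "escrowed" PRG in a multiplicative group mod p.
# The designer publishes two bases h and g that are secretly related
# by g = h^e; whoever knows e can recover the PRG's internal state
# from a single output.  All parameters here are illustrative.
p = 2**127 - 1          # Mersenne prime modulus (toy choice)
h = 3                   # first public base
e = 0xC0FFEE            # the designer's escrow ("backdoor") secret
g = pow(h, e, p)        # second public base, secretly g = h^e

def step(state):
    """One PRG step: next state derived from g, output from h."""
    return pow(g, state, p), pow(h, state, p)

state = 123456789       # the user's secret seed
next_state, output = step(state)

# Escrow holder: output^e = h^(state*e) = (h^e)^state = g^state,
# which is exactly next_state -- so every future output is now
# predictable from one observed output.
recovered = pow(output, e, p)
assert recovered == next_state
```

Without e, recovering the state means solving a discrete log; with e, it's one exponentiation. That asymmetry is what makes it a NOBUS backdoor, and why "escrow" in the patent reads as "backdoor" here.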
LWE cryptography is probably better understood now than ECDH was in 2005, when Bernstein published Curve25519, but I think you'll have a hard time finding where Bernstein recommended hybrid RSA/ECDH key exchanges.
> Standards organisations are very easily corruptible when their members are allowed to have conflicts of interest and politick and rules-lawyer the organisation into adopting their pet standards.
FWIW, in my experience on standardization committees, the worst example I've seen of rules-lawyering to drive standards changes is... what DJB's doing right now. There's a couple of other egregious examples I can think of, where people advocating against controversial features go in full rules-lawyer mode to (unsuccessfully) get the feature pulled. I've never actually seen any controversial feature make it into a standard because of rules-lawyering.
What exactly are you calling "rules-lawyering"? Is citing rules and pointing out their blatant violation "rules-lawyering"? If so, can you explain why it is better to avoid this, and what should be done instead?
As an outsider I'd understand it differently: reading rules and pointing out their lack of violation (perhaps in letter), when people feel like you violated it (perhaps in spirit), is what would be rules-lawyering. You're agreeing on what the written rules are, but interpreting actions as following vs. violating them.
That's quite different from an accusation of rules violation followed by silence or distortions or outright lies.
If someone is pointing out that you're violating the rules and you're lying or staying silent or distorting the facts, you simply don't get to dismiss or smear them with a label like "rules-lawyer". For rules to be followed, people have to be able to enforce them. Otherwise it's just theater.
Thank you, that seems to be the whole ball game for me right there. I understood the sarcastic tone as a kind of exasperation, but it means something in the context of an extremely concerning attempt to ram through a questionable algorithm that is not well understood and risks a version of an NSA backdoor, where the only real protection is the integrity of standards-adoption processes like this one. You've really got to stick with the substance over the tone to be able to follow the ball here. Everyone was losing their minds over GDPR introducing a potential back door to encrypted chat apps that security agencies could access. This goes to the exact same category of concern, and as you note it has precedent!
So yeah, the NSA potentially sneaking a backdoor into an approved standard is pretty outrageous, and worth objecting to in the strongest terms, and when that risk is present it should be subjected to the highest conceivable standard of scrutiny.
In fact, I found this to be the strongest point in the article - there are any number of alternatives that might (1) prove easier to implement, (2) prove more resilient to future attacks, or (3) turn out to be the most efficient.
Just because you want to do something in the future doesn't mean it needs to be ML-KEM specifically, and the idea of throwing out ECC is almost completely inexplicable unless you're the NSA and you can't break it and you're trying to propose a new standard that doesn't include it.
How is that not a hair on fire level concern?
I understand the cryptography and I agree with his analysis of the cryptographic situation.
What I don't understand is why -- assuming he thinks this is important -- he's chosen to write the bits about the standardization process in a way that predisposes readers against his case?
He’s smart and prolific, for sure, but I lost respect for him several years ago.
Why, if I might respectfully ask?
Sure! First, while I’m in no position to judge cryptographic algorithms, the success of ChaCha and Curve25519 speaks for itself. More prosaically, Patricia/critbit trees and his other tools are the right thing, and foresighted. He’s not just smart, but also prolific.
However, he’s left a wake of combative controversy his entire career, of the “crackpot” type the parent comment notes, and at some point it’d be worth his asking, AITA? Second, his unconditional support of Jacob Appelbaum has been bonkers. He’s obviously smart and uncompromising but, despite having been in the right on some issues, his scorched earth approach/lack of judgment seems to have turned his paranoia about everyone being out to get him into a self-fulfilling prophecy.
I do not understand your last paragraph. :/
https://medium.com/@hdevalence/when-hell-kept-on-payroll-som...
Thank you! I had no idea.
[flagged]
Dear some seasoned cryptographer,
Please ELI5: what is the argument for including the non-hybrid option in this standard? Is it a good argument in your expert opinion?
My pea brain: implementers plus options equals bad, newfangled minus entrenched equals bad, alice only trust option 1 but bob only have option 2 = my pea brain hurt!
More of a person with IETF participation experience than as a cryptographer (I enjoy watching numbers dance but am not a choreographer):
This ( https://datatracker.ietf.org/doc/draft-ietf-tls-mlkem/ ) is a document describing how to use the ML-KEM algorithm with TLS 1.3 in an interoperable manner.
It does not preclude other post-quantum algorithms from being described for use with TLS 1.3. It also does not preclude hybrid approaches from being used with TLS 1.3.
It is however a document scoped so it cannot be expanded to include either of those things. Work to define interoperable use of other algorithms, including hybrid algorithms, would be in other documents.
There is no MTI (mandatory-to-implement) requirement once these are documented by the IETF, but there could be market and regulatory pressures.
My suspicion is that this is bleed-out from a larger (and uglier) fight in the sister organization, the IRTF. There, the crypto forum research group (CFRG) has been having discussions on KEMs which have gotten significantly more heated.
A person with concern that there may be weaknesses in a post quantum technique may want a hybrid option to provide additional security. They may then be concerned that standardization of non-hybrid options would discourage hybrid usage, where hybrid is not yet standardized and would likely be standardized later (or not at all).
The pressure now with post-quantum is to create key negotiation algorithms that are not vulnerable to a theoretical post-quantum computer attack. This is because of the risk of potentially valuable encrypted traffic being logged now in the hopes that it could later be decrypted by a quantum computer.
Non-negotiated encryption (e.g. just using a static AES key) is already safe, and signature algorithms can be updated much closer to the point of viable attacks to protect transactional data.
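To make the hybrid idea concrete, here is a minimal sketch of the combiner step, assuming two shared secrets have already been produced (one classical, e.g. from X25519, and one post-quantum, e.g. from ML-KEM-768). The function name, the stand-in byte values, and the use of a plain SHA-256 hash are all illustrative assumptions; the actual TLS 1.3 hybrid design feeds the concatenated secrets into its HKDF-based key schedule rather than a bare hash.

```python
import hashlib

def hybrid_shared_secret(ecdh_ss: bytes, mlkem_ss: bytes,
                         transcript: bytes = b"") -> bytes:
    """Toy combiner: hash the concatenation of both shared secrets.

    The point of a hybrid: if either component scheme remains secure,
    the output stays unpredictable to an attacker who only breaks the
    other one (e.g. a future quantum attack on ECDH alone).
    """
    return hashlib.sha256(ecdh_ss + mlkem_ss + transcript).digest()

# Stand-in values; real secrets would come from actual X25519 and
# ML-KEM key exchanges, which are omitted here for brevity.
ecdh_ss = b"\x01" * 32
mlkem_ss = b"\x02" * 32
key = hybrid_shared_secret(ecdh_ss, mlkem_ss, b"example-transcript")
assert len(key) == 32
```

The design choice worth noting is that both secrets are inputs to a single derivation, so downgrading either component silently changes the final key and breaks the handshake rather than weakening it.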
> It is however a document scoped so it cannot be expanded to include either of those things. Work to define interoperable use of other algorithms, including hybrid algorithms, would be in other documents.
FYI, the specification for hybrid MLKEM + ECC is ahead of this document in the publication process. https://datatracker.ietf.org/doc/draft-ietf-tls-ecdhe-mlkem/
Handforth Parish Council, Internet edition. You have no authority here, djb! No authority at all!
tl;dr DJB is trying to stop the NSA railroading bad crypto into TLS standards, the objections deadline is in two days, and they're stonewalling him
This /. story fills in the backstory: https://it.slashdot.org/story/25/11/23/226258/cryptologist-d...
The NSA has railroaded bad crypto before [1]. The correct answer is to just ignore it, to say "okay, this is the NSA's preferred backdoored crypto standard, and none of our actual implementations will support it."
It is not acceptable for the government to be forcing bad crypto down our throats, it is not acceptable for the NSA to be poisoning the well this way, but for all I respect DJB, they are "playing the game" and 20 to 7 is consensus.
[1] https://en.wikipedia.org/wiki/Dual_EC_DRBG
[dead]
[dead]
[flagged]
”No association” and “I am not a representative” are quite different things to say.
[flagged]
An employee doesn’t act as an official representative of their employer, nor do they speak for the employer in any official capacity. That is what the message says.
The individual also didn’t cloak their identity (which would imply some malicious intent); they simply did not use their work email. Nothing wrong with that.
I’m sorry, can you state which organization you are speaking for with this comment? It wasn’t immediately clear.
[flagged]
@dang, can we establish a rule that NSA apologists should not be doxxing HN members for the sin of advocating against the NSA's preferred narratives and worldview?
Deliberate personal breaches of privacy against HN members as a response to the contents of their speech like this stifle free discourse to the highest degree possible and should be banned or at least harshly admonished, no?
It's not really "doxing" when the public username they chose to use is their actual name, leading directly to their github profile, and they're arguing that you always represent your employer, even if you "cloak" yourself in an alternate name.
Saying that it is a "breach of privacy" when the relevant details are being advertised by the person in question is silly.
[flagged]
[flagged]
[flagged]
[flagged]
[flagged]
[flagged]
That's not what the message you linked claims at all. Maybe you missed the "in this message" at the end of the sentence?
No not really - I don’t think choosing to post from an alternative email removes the association issue that the original intent is trying to capture.
What is your agenda?
[flagged]
> This is why djb is in the Cypherpunks Hall of Fame! [1]
This is a list made by you 2 weeks ago?
EDIT: Okay lol. I actually browsed the list and found multiple dubious entries, along with Trump!
Hilarious list. 10/10.
what do you expect, when the tagline at the end of the page says "In crypto we trust."?
Honestly, it's a bit sad. There are many great people on that list, but some seem a bit random and some are just straight up cryptobros, which makes the whole thing a joke, unfortunately
Name-calling, bullying (forms of systematic harassment), and attempting to instill feelings of social isolation in a target are documented techniques employed by intelligence agencies in both online and offline discourse manipulation / information warfare.
You can read up more here if you are curious: https://www.statewatch.org/media/documents/news/2015/jun/beh...
Many of the attacks against djb line up quite nicely with "discredit" operational objectives.
Very nice document, I have to say. I was surprised that they care so much about hacktivists.
Are the strategies you mention actually in the document? It seems like one particularly focused on very soft tactics.
Don't forget the ever popular CIA Simple Sabotage Field Manual: https://www.cia.gov/static/5c875f3ec660e092cf893f60b4a288df/...
Bullying and systematic harassment of cryptographers to get backdoors built into their encryption systems has been their go-to strategy since the '80s.
Can you please stop spam-submitting this AI-generated Hall of Fame website? It's against HN guidelines to use the website primarily for promotion and it's clearly what you're doing here.