
If we had multiple SSL implementations of equal quality, used in equal proportions, then a vulnerability in one would clearly not have the impact of a vulnerability that affects everything. (Do I seriously have to spell that out? Is it not obvious?) You are arguing from the position that it would be impossible to approach OpenSSL's quality if resources were split across multiple implementations, but this event has demonstrated that the 1000 eyes theory is nonsense. That bug wasn't there for a short time before being found; it was there for years.

It seems obvious to me that this is going to spawn a Rust/OCaml/similar reimplementation of SSL. The reason it's damning is that this will only happen because the wider community now deems it necessary, when really it should have been a measure the eternally loud security contingent took on proactively.



"If we had multiple equally used SSL implementations of equal quality and used in equal proportions"

Then either 1) each would be of substantially lower quality than if the same development effort had been focused on one, or 2) you'd have pulled a bunch of smart developers off of other projects.

Biological metaphors are interesting, but software faces different constraints.


Is that actually true though?

The big truism of software development is that small, motivated teams can do proportionately much more than the same people would if they were all thrown into one big group on one project. The whole Mythical Man-Month at work.


It's not even true that open source SSL is a monoculture. There are two critically important open source TLS implementations, not one (NSS and OpenSSL), and that's not counting SecureTransport (Apple's open-source implementation).

I don't think there's a single part of your argument that really survives scrutiny. Your central point is false, as are its premises. To the extent that a large portion of the Internet uses one TLS implementation, that's often been as helpful as harmful. And the "monoculture" you're decrying is counterfeit: you seem to think OpenSSL is the only credible option, but Chromium and Firefox disagree with you.


Aren't two of the three implementations you're mentioning mainly used in clients, though? Most Linux boxes are not used as clients, so server-side TLS on Linux is effectively a monoculture.

You rightly mentioned NSS elsewhere, but do people actually use it on servers in any great numbers? I guess you could argue that Apache and Nginx shipping with OpenSSL as the default option for HTTPS is the problem, in which case shouldn't we change that? Or is there something else about OpenSSL that keeps people from switching to NSS?


NSS implements both the client and server side of TLS. And OpenSSL and NSS aren't the only options; if X.509 bugs are more your style, you could try PolarSSL or MatrixSSL:

https://twitter.com/tqbf/status/454022864660750336/photo/1


Fill in the blank: 50% of the things being compromised is ______% as bad as all the things being compromised.

Sometimes a partial compromise is easy to deal with, and the number in the blank is way less than 50. In that environment, lots of diverse things is good.

Sometimes half the units being compromised is almost as bad as all the units being compromised. In that environment, lots of diverse things is a bad thing.
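
To make that trade-off concrete, here is a toy back-of-the-envelope model (not from this thread; the bug probability, the damage curves, and the assumption of independent bugs are all made up purely for illustration). It compares the expected damage from a single monoculture implementation against two equally used implementations, given a per-implementation bug probability p and a hypothetical damage(f) curve for a compromised fraction f:

    # Toy model, not from the thread: expected damage from a monoculture
    # versus two equally used, independent TLS implementations.
    # Assumptions (all made up for illustration):
    #   p          -- probability that a given implementation ships a critical bug
    #   damage(f)  -- harm when a fraction f of deployments is compromised,
    #                 normalised so damage(1.0) == 1.0

    def expected_damage_monoculture(p, damage):
        # One implementation covers everything, so a bug compromises f = 1.0.
        return p * damage(1.0)

    def expected_damage_two_impls(p, damage):
        # Two implementations, each used by half of deployments. A bug in
        # either one compromises f = 0.5; bugs are treated as independent and
        # the per-implementation bug rate is held at p (optimistic, since
        # splitting developer effort could raise it).
        return 2 * p * damage(0.5)

    p = 0.1

    # Case 1: harm saturates -- half compromised is almost as bad as all compromised.
    saturating = lambda f: min(1.0, 1.8 * f)   # damage(0.5) == 0.9
    # Case 2: harm is strongly sub-linear -- partial compromise is easy to contain.
    sublinear = lambda f: f ** 3               # damage(0.5) == 0.125

    for name, dmg in [("saturating", saturating), ("sub-linear", sublinear)]:
        mono = expected_damage_monoculture(p, dmg)
        two = expected_damage_two_impls(p, dmg)
        print(f"{name}: monoculture={mono:.3f}  two implementations={two:.3f}")

    # saturating: monoculture=0.100  two implementations=0.180  -> diversity hurts
    # sub-linear: monoculture=0.100  two implementations=0.025  -> diversity helps

Whether diversity wins in this sketch depends entirely on the shape of the damage curve: if harm saturates quickly, splitting deployments roughly doubles the number of incidents without reducing the cost of each one much; if harm is strongly sub-linear, diversity comes out well ahead.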


It's absolutely true. To the degree that your objection holds (which isn't negligible), it places some limits on the returns possible from (1); (2) still fully applies. I'm also not sure what the limits on (1) are, when what we need is more careful and better-analysed code rather than more code. "Find problems" shouldn't involve tremendous amounts of communication overhead.



