I'm more than surprised about the hostile tone some contributors seem to use when they talk about LibreSSL lately.
LibreSSL is an OpenSSL fork done by the OpenBSD team, primarily because they don't think OpenSSL is the right software to include in their OS. That's their decision; if you don't use OpenBSD, you don't have to care. They have done an insane amount of work in a pretty short time, and since that work might benefit the larger OS community, they kindly decided to start work on a portable version, which you don't have to use.
Even if you don't use LibreSSL, you might still benefit from their work as there is a healthy collaboration between OpenBSD/LibreSSL and Google/Adam Langley/BoringSSL.
Now, there's a first preview release of portable LibreSSL, and nitpicks are used to demonstrate how supposedly incapable the OpenBSD team must be. They hardcode -Werror, they obviously don't know how to write a configure script. They don't provide a PGP signature for the preview release, they obviously don't know how to distribute software securely. They use Comic Sans, they can't be taken seriously at all.
If you think LibreSSL will benefit you personally, you might consider showing a little gratitude. If you don't think LibreSSL is of any use to you, why do you even bother to write about it?
OpenBSD can handle the hostile tone; they aren't fragile about criticism. The whole point of releasing this early is so they (and other contributors, like Adam from Google) can get feedback, so it's good that people are trying to port it and offering opinions. However, if anybody wants to constructively criticize the project or make recommendations, they should post to the mailing list, because I doubt any of them will respond to some random guy's WordPress site. If you just throw up a site and blast them, you're accomplishing nothing except link bait. They have a good reason to use -Werror and would explain why if this were posted to openbsd-tech.
I honestly didn't pick up on any hostility in the tone of this article. There was no insulting, for instance. It was a simple one-by-one listing of (seemingly minor) mistakes along with corrections for those mistakes.
I think, in order to find hostility in that, you'd have to be looking for it.
I think they've earned that right, and this attitude may be necessary to scare away the kind of developers who might (with perfectly good intentions) end up making the job of the OpenBSD developers a lot harder.
OpenBSD is amazing software. By far the best OS I've ever used in my life. If the cost of that is a bad attitude, so be it. Whatever they're doing, it's working.
> OpenBSD is amazing software. By far the best OS I've ever used in my life. If the cost of that is a bad attitude, so be it.
I like to judge a developer's attitude by reading the manpage that he or she kindly wrote for me. OpenBSD manpages are comprehensive and still concise. So the devs respect my time. I really appreciate that. (Actually, I want to throw money at them just for providing such excellent Unix documentation.)
Yes, they tend to be brusque when you mail them about some issue and obviously didn't RTFM before. They are right: You show them that you don't respect their time. Why should they be nice to you?
There's a social policy that one must adhere to when interacting with groups within certain "geekdom" domains. I learned it while idling on EFNet IRC in #Linux and #LinuxHelp in the early 2000s. To an outsider it comes off as harsh and dismissive, but it really does weed out the random noobs/script kiddies who refuse to read a man page or put any effort into solving a problem themselves. I learned how to exhaust all possible search options before posting, and how to formulate a question properly by providing clear and concise examples of the problem, resulting in fewer follow-up questions and quicker solutions to my issues.
They aren't: -Werror is doing exactly what it is supposed to, keeping a tacit assumption from subtly causing bugs. sysctl is necessary because there is no randomness system call, and in chroots /dev/random is not available.
Don't think anyone said that. Detailed, constructive feedback is always appreciated.
On the other hand, when using free software from other people, it does seem a little rude to demand anything from them. Kind of like "gee, thanks for the free cake, but I prefer my icing on the side, waahhhh sob sob sob [ throws toys from the pram ]"
> -Werror is hardcoded in the configure script, which is a very bad idea, and the opposite of portable.
Using -Werror is a guaranteed build break whenever the build is tried on a system the original developer had no access to.
I think that is exactly the point; if the thing does not build, people are going to complain loudly and things are going to get fixed. Warnings are usually just run-time problems waiting to happen, so they may as well be considered bugs.
C is not the same as other languages. Many possible errors reported by the compiler really are not bugs. You've probably heard of "-Wall" and "-Wextra". Why does -Wextra include even more warnings than -Wall? Because they're more likely to be truly spurious. C is both simpler and more flexible than other languages, and it's very hard for the compiler to tell when the code does something the writer didn't intend.
As a trivial example, -Wunused will warn when a function has a parameter but doesn't use it. But you do this whenever you need to provide a callback that really doesn't care about one of the parameters. Still, -Wunused is helpful to find errors elsewhere in the code, so in these cases you might do something like
unused_param = unused_param;
just to make that one compiler warning go away. (That might work for one version of a compiler, but not another!) In other cases, the error the warning catches is so much less common than the spurious warnings that you might disable that one warning type for your project. The Linux kernel does this for a couple of them.
Anyway, the best way is to enable as many warnings as you can, and have the discipline to fix the ones you see, without forcing yourself or anyone with -Werror. In any non-trivial codebase, you can guarantee that the next major release of gcc will not be able to build your code if you use -Werror, due to some truly spurious warnings. Again, this isn't because gcc sucks; it's because guessing where what you wanted differs from what you wrote is extremely difficult in C.
> Anyway, the best way is to enable as many warnings as you can, and have the discipline to fix the ones you see, without forcing yourself or anyone with -Werror.
Seeing as how easy it is to zap the -Werror from the configure script, I don't think that anyone is being forced to use it. However, making it the default helps avoid the opposite scenario where a growing amount of warnings (some potentially critical!) whizz past and nobody gives a shit.
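As a concrete illustration of how easy the zap is (this uses a one-line stand-in for the configure script, not the real LibreSSL one), a single sed invocation does it:

```shell
# Stand-in configure script with the flag hardcoded:
printf 'CFLAGS="-Wall -Werror -O2"\n' > configure
# Strip the flag, keeping a backup in case the pattern doesn't match as expected:
sed -i.bak 's/ -Werror//g' configure
cat configure
```

Of course, doing this puts the burden of actually reading the warnings back on you.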
So warnings are enabled, they break the build for some people, and those people fix the issue and/or shout about it on the Internet (preferably bugs@). That is exactly what needs to happen. The problems get noticed and fixed this way; they don't just pile up. Yes, it can be annoying, and yes, there are some stupid warnings (ideally there'd be a flag -Wuseful-warnings). Yes, people are free to zap the -Werror if they don't care about these warnings. Hopefully they know what they are doing, because it really is possible that a bug in LibreSSL, or in their system headers for example, is calling for attention.
Consider how much discussion there was around goto fail and the like -- about the fact that static analysis (or smart compilers) would've caught these things. Why didn't they listen to the compiler?! So passing -Werror is one way towards making sure people look at the issues.
While many of these warnings can be annoying during development, they are also very useful if the code is kept clean of them. In this case the __bounded__ attribute is a security feature their compiler version has. If another platform cannot support this feature, then suppressing this specific instance of the warning is an active decision someone should make when writing the build scripts. Just ignoring all warnings is certainly not the way to go.
The language often (always?) has facilities to remove those warnings on a case-by-case basis. For example, when you don't want to use a parameter, you can actively let the compiler know without assigning the variable to itself: you can include only the type and not the name:
int fn(int, void*);
int fn(int num, void* /*extra*/) {
// If the name extra is commented out the compiler will
// not warn that you are not using it. Now it is very
// clear that not using this variable was an active choice
// and not a mistake.
return num;
}
edit: as pbsd pointed out, commenting out extra is not portable C code, though I believe the wider point still stands. These warnings can be very useful and should be reviewed before ignoring them.
That's C++, not C. In C, the parameter name must be specified in a definition. I tried with 7 compilers, and none of them allow it as an extension, either, as they all consider it a fatal error.
In addition, correctness and security should always be prioritized ahead of portability. It does no good for software to be portable if that just means it's incorrect and insecure on more platforms.
So if the LibreSSL developers rip out all their dubious entropy-generation methods in favor of /dev/urandom on Linux, it might well be worth switching to it.
/dev/urandom is the favored entropy gathering method. But if you can't open it (not there, rlimit restriction, etc.) it falls back to the bobo code. If the linux kernel provided a random number source that was reliable and could not fail, this wouldn't be an issue.
How is it a good thing to fall back to insufficient security? The only good fallback is to crash and clean up. If LibreSSL can't find enough entropy, then it should give up.
I won't argue that point since I generally agree, but ask yourself this:
0. sshd will fork() and chroot() into /var/empty. After the fork(), you can't use the entropy you have because it's shared with the parent (i.e., it's not "entropic").
1. Where should it get entropy from?
2. Where does OpenSSL get entropy from in this case?
Anyway, I don't even work on portable libressl because I don't want to deal with shit like this, but I think the quoted text erroneously gives the impression that /dev/urandom isn't used. I wanted to correct that impression.
With OpenSSL, if you know you're going be chrooting, you can explicitly seed the internal PRNG with a call to RAND_poll() before you chroot, avoiding the need to open /dev/urandom once you've chrooted. (Similarly, you're supposed to call RAND_poll() to re-seed after forking because there's no safe way to detect that you've forked. Of course, if you fork while in a chroot you're screwed.)
I really think that LibreSSL's RAND_poll() should have similar behavior to ensure maximum compatibility with OpenSSL and to provide a means to use chroot() safely without the risk of falling back to the "bobo" code. (Also, a means of safely reseeding after a fork on systems without minherit(MAP_INHERIT_ZERO)).
(Incidentally, libsodium has a similar API: you can explicitly call randombytes_stir() to cause the library to open (and keep open) a file descriptor to /dev/urandom, so subsequent calls to the PRNG work in a chroot.)
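The keep-a-descriptor-open approach can be sketched in plain POSIX C. The names below are hypothetical, not the actual LibreSSL or libsodium internals: open /dev/urandom once at startup, before any chroot(), and read from the cached descriptor afterwards, failing hard rather than falling back.

```c
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

static int urandom_fd = -1;

/* Call early, before chroot(): cache a descriptor to /dev/urandom. */
int entropy_init(void)
{
    if (urandom_fd >= 0)
        return 0;
    urandom_fd = open("/dev/urandom", O_RDONLY);
    return urandom_fd >= 0 ? 0 : -1;
}

/* Fill buf with len random bytes from the cached descriptor.
 * Works inside a chroot because no path lookup is needed anymore. */
int entropy_bytes(unsigned char *buf, size_t len)
{
    if (urandom_fd < 0)
        return -1;  /* never seeded: fail hard, no fallback */
    while (len > 0) {
        ssize_t n = read(urandom_fd, buf, len);
        if (n <= 0)
            return -1;
        buf += n;
        len -= (size_t)n;
    }
    return 0;
}
```

A real implementation would also have to worry about the descriptor being closed by careless application code, about fd-limit exhaustion, and about O_CLOEXEC, which is part of why a kernel system call is so much more robust than any userland scheme.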
Yes, that's what you're supposed to do, and it sucks. A good library should provide you with more than a box full of hammers and thumbs; it should actually help you and not just punt whenever a hard decision shows up. The RAND interface was one of the first things gutted.
The presence or need for a stir() function should be considered a serious design flaw.
(forks are detected by calling getpid() if you don't have inheritzero.)
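The getpid() trick can be sketched as a guard the PRNG runs on every request (hypothetical names; the real code differs): if the cached pid no longer matches, a fork happened and the pool must be reseeded.

```c
#include <sys/types.h>
#include <unistd.h>

static pid_t seeded_pid = 0;  /* 0 = never seeded */

/* Returns 1 if the PRNG must (re)seed: first use, or the pid changed,
 * which indicates a fork() since the last seeding. Note this misses the
 * rare case where a grandchild ends up with the grandparent's pid, which
 * is why minherit(MAP_INHERIT_ZERO) is the more robust mechanism. */
int prng_needs_reseed(void)
{
    pid_t now = getpid();
    if (now != seeded_pid) {
        seeded_pid = now;
        return 1;
    }
    return 0;
}
```

The caller would invoke prng_needs_reseed() at the top of every random-bytes request and mix in fresh entropy whenever it returns 1.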
> sshd will fork() and chroot() into /var/empty. After the fork(), you can't use the entropy you have because it's shared with the parent (i.e., it's not "entropic").
What's wrong with generating some random numbers using the parent's entropy pool before fork and using that as the child's entropy pool?
> Pseudo-random byte sequences generated by RAND_pseudo_bytes() will be unique if they are of sufficient length, but are not necessarily unpredictable.
It does seem kind of odd to criticize the release for having -Werror on by default and also for having a fallback if /dev/urandom is unavailable.
In the former case they are sacrificing portability for increased confidence of correctness, and in the latter they are sacrificing confidence or correctness for increased portability.
If you don't try to cope with the lack of a /dev/random, you get shit from people. If you try to cope with it, you get shit from people. While I would agree that the fallback entropy gathering is very very hacky and ugly, the difference is that it sure as hell tries harder than OpenSSL ever did. I'm not qualified to say whether the things it uses for entropy are truly any good for it, but it sure looks like it wouldn't be terribly easy to predict all these bits without having compromised the system.
I thought the libressl devs thought it was a mistake to even try to fall back -- the code should use OS-provided random numbers, and if they're not there, give up. If so, I'm a little surprised to hear libressl is trying harder than openssl.
Ideally, that's how it's handled. In fact, that's how it's handled on OpenBSD: getentropy either works or you're screwed. As it turns out, there are other systems (hello Linux, etc.) where you don't have such a reliable way to source entropy. I don't know how common it is to encounter this in reality, but sadly it looks like it might be quite common indeed, though I hope I'm wrong. See the point about daemons chrooting into /var/empty for example.
You can disable the fallback code with a define, so if your distributors are sure your system should always be able to provide a good entropy source in normal use, they'll flip that switch.
LibreSSL is lacking features such as ALPN, and they've removed many constants, changed function definitions in subtle ways, and modified header include dependencies. The result is that it definitely isn't a drop-in replacement for OpenSSL. Then again, OpenSSL usually isn't a drop-in replacement for OpenSSL between versions either, so they aren't doing a terrible job.
Most of these could be easily worked around with a few #ifdefs but they've also managed to make that a bit problematic by reusing the OPENSSL_VERSION_NUMBER macro without providing some sort of complementary IS_LIBRESSL flag. Fortunately OpenSSL hasn't hit version 2 yet so the version numbers don't overlap at all.
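Absent a dedicated flag, consumers end up keying off the version number itself. A hypothetical shim might look like the following; COMPAT_IS_LIBRESSL is a made-up name, and the version macro is hard-coded here (in real code it comes from <openssl/opensslv.h>) so the sketch stands alone:

```c
/* LibreSSL reports itself as version 2.0.0, while OpenSSL has not
 * reached 2.x, so the version number alone distinguishes them for now.
 * Hard-coded here for illustration; normally from <openssl/opensslv.h>. */
#define OPENSSL_VERSION_NUMBER 0x20000000L

#if OPENSSL_VERSION_NUMBER >= 0x20000000L
#  define COMPAT_IS_LIBRESSL 1  /* hypothetical flag, not a real macro */
#else
#  define COMPAT_IS_LIBRESSL 0
#endif
```

The obvious fragility is the "for now": the scheme breaks the day OpenSSL itself ships a 2.x, which is exactly why a dedicated identification macro would be preferable.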
The author tried building the first release on Sabotage Linux, an experimental distro, and reported on what broke. That may be valid; I hadn't heard of the distro before. They also talked about how entropy was being gathered incorrectly; this is possible, though it is a preview release, and I'm inclined to listen to the OpenBSD guys first.
Yes, -Werror is normally going to break things badly and cause far too much unnecessary work... for most projects. There are a handful of projects, on the other hand, that I would argue -Werror is absolutely necessary. Crypto libraries such as openssl/libressl/gnutls and tools like gnupg are at the top of that list. This list might also include key-handling utils such as {gpg,ssh}-agent and maybe pinentry.
Breaking on new GCC features is a good thing, because for these important packages you shouldn't ever be guessing about the programmer intention or assuming that some new warning is safe.
Several people brought up -Wunused. We already know about that warning, and so libressl should expect it and compile cleanly. Yes, this might be annoying at times, but cleaning up the code was the goal anyway. What about future versions of GCC? There are only a few possibilities:
0) The warning actually is about an important bug.
Obviously you don't want the build in this case.
1) Some new -W flag was added.
Broken builds are important here. The GCC authors probably added that flag for a reason, and you can't guarantee[1] the warning is a false positive.
2) No flags have changed, but some other component has caused
a warning where there wasn't one previously.
This means something else changed:
2a) A function prototype changed. (does it even compile properly?)
2b) Some defined type or macro changed. (could easily be a new bug)
Yes, in many cases, these are probably trivial. The point is that for some software, forcing someone to actually check is the goal. The problem with OpenSSL that was recently exposed by Heartbleed was that nobody was actually checking security-critical components; everybody simply assumed those checks were being done by somebody else.
With -Werror, the fact that it doesn't compile will force someone to either fix some bug or silence the warning by adding the necessary cast or #ifdef or whatever. Really, I have to wonder about anybody who advocates allowing unchecked builds: why are you OK with the kind of unchecked code that led to Heartbleed and many other security problems? As DJB[2] and PHK[3] both warned: are you trying to prevent a high-security environment?
[1] Why can't we guarantee such things? Because answering that would require solving the Halting Problem.
gcc routinely introduces new warnings that break the build of correctly working software that uses -Werror. For example, Gentoo had a lot of problems with this in the past, as do other packagers; it's just that Gentoo users are more exposed to build failures (upgrade gcc, rebuild tree, watch the fun).
I still don't see the problem, warnings usually indicate problems with the code, even if that particular sort of warning was added in a new version of the cc. If they don't indicate actual problems then the compiler is broken, but I still don't think that's a problem since they can usually be silenced in some non-intrusive way. Also you shouldn't be trying to build software if you don't know how to report a bug or have a buggy toolchain (so maybe a lot of Gentoo users shouldn't be using Gentoo).
Compiler warnings can come down to things that are purely style, like unused variables. It's more than likely that an unused variable could sneak itself into a program, not be found for a few years, then suddenly trigger a warning when a compiler added the check (of course, compilers have been checking for this for a long time, but this is just an example).
In the case of Gentoo, this would manifest itself as packages compiling cleanly with one compiler version, then suddenly lots of packages failing to compile because the build processes were stopped due to the, now reported, unused variable. If these were just warnings, then the packages would still compile, someone would notice (or even the original dev), and the problem can be fixed. Note that before the compiler upgrade, there was no bug - the program worked fine.
Except for a case such as libressl (preview release so they can get comments), having -Werror hardcoded in the build process clearly makes no sense.
Sorry, I still don't believe that's a problem. Unused variable warnings can be silenced easily if they aren't real bugs (for example an unused parameter in a callback). Most people don't have to deal with this stuff because they use precompiled binaries. The people that do deal with this should know how to fix it properly.
It's already necessary far too much of the time to manually hack and slash C code to get it to build in an untested or newer environment, without adding additional cases over purely stylistic compiler complaints. Precompiled binaries aren't available for every system on which one might want to use portable software.