If you don't try to cope with the lack of a /dev/random, you get shit from people. If you try to cope with it, you get shit from people. While I would agree that the fallback entropy gathering is very, very hacky and ugly, the difference is that it sure as hell tries harder than OpenSSL ever did. I'm not qualified to say whether the things it uses are truly any good as entropy, but it sure looks like it wouldn't be terribly easy to predict all those bits without having already compromised the system.
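For what it's worth, the general shape of that kind of fallback is roughly the following. This is only a sketch of the idea, not LibreSSL's actual code, and it assumes libcrypto's SHA-512 is available to do the mixing:

    #include <openssl/sha.h>    /* SHA512_* from libcrypto */
    #include <sys/resource.h>
    #include <time.h>
    #include <unistd.h>

    /* Stir a bunch of hard-to-guess-from-outside process state into a hash.
     * The real fallback mixes in far more sources than this. */
    static void
    fallback_entropy(unsigned char out[SHA512_DIGEST_LENGTH])
    {
        SHA512_CTX ctx;
        struct timespec ts;
        struct rusage ru;
        pid_t pid = getpid();
        void *sp = &ctx;            /* a stack address; varies with ASLR */

        SHA512_Init(&ctx);
        clock_gettime(CLOCK_MONOTONIC, &ts);
        SHA512_Update(&ctx, &ts, sizeof(ts));
        getrusage(RUSAGE_SELF, &ru);
        SHA512_Update(&ctx, &ru, sizeof(ru));
        SHA512_Update(&ctx, &pid, sizeof(pid));
        SHA512_Update(&ctx, &sp, sizeof(sp));
        SHA512_Final(out, &ctx);
    }

None of those inputs is strong on its own; the argument is only that an attacker who can predict all of them together probably already owns the box.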
I thought the libressl devs considered it a mistake to even try to fall back -- the code should use OS-provided random numbers, and if they're not there, give up. If so, I'm a little surprised to hear libressl is trying harder than openssl.
Ideally, that's how it would be handled. In fact, that's how it is handled on OpenBSD: getentropy() either works or you're screwed. As it turns out, there are other systems (hello Linux, etc.) where you don't have such a reliable way to source entropy. I don't know how common it is to hit this in practice, but sadly it looks like it might be quite common indeed, though I hope I'm wrong. See the point about daemons chrooting into /var/empty, for example -- once chrooted there, a daemon can't even open /dev/urandom anymore.
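To make the contrast concrete, the OpenBSD model is basically this (a minimal sketch, assuming a system that actually provides getentropy(2)):

    #include <stdlib.h>
    #include <unistd.h>

    /* Either the OS hands over entropy, or the process dies.
     * No second-rate fallback path to quietly take instead. */
    static void
    must_getentropy(void *buf, size_t len)
    {
        if (getentropy(buf, len) == -1)
            abort();
    }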
You can disable the fallback code with a define, so distributors who are sure their system will always provide a good entropy source in normal use can flip that switch.
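Roughly the shape of that switch -- the macro and function names here are made up for illustration, check the portable tree for the real ones:

    #include <stdlib.h>
    #include <unistd.h>

    int getentropy_fallback(void *buf, size_t len);  /* stands in for the hacky path */

    static int
    get_random_bytes(void *buf, size_t len)
    {
        if (getentropy(buf, len) == 0)
            return 0;
    #ifdef NO_ENTROPY_FALLBACK      /* hypothetical name for the build-time switch */
        abort();                    /* OS source or nothing */
    #else
        return getentropy_fallback(buf, len);
    #endif
    }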
I.e., falling back on braindead methods when the sane ones have failed.