I also wouldn't normally expect a problem here, but this actually happened to me yesterday. Some third-order transitive dependency ended up trying to download a file from https://rubygems.org rather than plain http. Apparently my SSL certs were out of date (on both dev and prod), so I had to spend a few hours Googling how to make RVM + RubyGems update my certs.
It's really easy to write these kinds of things off as rare occurrences, but this is exactly the kind of thing that causes us to talk about "fear" when deploying.
Edit: the solution was to run 'rvm osx-ssl-certs update all' on OS X, which is unsettling, as it installs certs all over the disk. Then on Ubuntu I believe it was 'apt-get update && apt-get install openssl' (the Debian-family equivalent of 'yum update openssl', which is what you'd run on an RPM-based box).
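For anyone who lands here with the same problem, a quick sanity check I'd run after updating certs (a rough sketch, not the canonical diagnosis; exact output varies by OpenSSL version):

    # Verify the TLS handshake with rubygems.org now succeeds.
    # You want to see "Verify return code: 0 (ok)".
    echo | openssl s_client -connect rubygems.org:443 -servername rubygems.org 2>/dev/null | grep "Verify return code"

    # Then confirm RubyGems itself can talk https again:
    gem list --remote rake --source https://rubygems.org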
-- Edit: the parent was deleted; it suggested working around the problem by using http instead of https. --
Security nerd mode activated; solutions like this make me a little twitchy, even when I have to employ them myself.
At the risk of stating something you already know, for the sake of pedantry the security implications of this fix are (at least) as follows:
- If you're checking the signatures of the packages you're downloading, this is probably OK, since even if an attacker spoofed your DNS to route to her own package archive, she would still have to compromise the package signing key to run her code on your system (a sketch of enforcing this follows after this list). On top of that, if you're using a hosting/PaaS provider, she'd have to compromise their DNS infrastructure first as well.
- If you're not checking package signatures, then hopefully your system doesn't have any "interesting" information (including username/password combinations that might be useful on your or other sites). The hosting/PaaS provider's DNS is still a barrier, but now you're down _two_ of the protections in the chain of code executing in your name.
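For the Ruby case in this thread, a minimal sketch of what "checking signatures" could look like. Caveats: it assumes the gem's maintainer actually signs releases and publishes their cert somewhere you trust (the URL and gem name below are placeholders), which in practice most don't:

    # Trust the maintainer's public signing cert first
    # (https://example.com/maintainer-cert.pem is hypothetical).
    curl -fsSL https://example.com/maintainer-cert.pem -o maintainer-cert.pem
    gem cert --add maintainer-cert.pem

    # HighSecurity refuses any gem that is unsigned or signed by an
    # untrusted cert, so a spoofed archive can't slip its own code in.
    gem install some_gem -P HighSecurity

    # The Bundler equivalent:
    bundle install --trust-policy HighSecurity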
As always, the effort any given element of security deserves varies by multiple orders of magnitude; the above fix might be just fine for 99% of applications, while for the remaining 1% some extra thought would be worthwhile. TBH I have no idea how common such "code hijacking" attacks are in practice -- if any "real" security professionals have that info, I'd be curious to hear your thoughts.
Offered in the spirit of helping folks with managers asking "why can't we just turn off SSL?"
If it happens even one time while you are in the middle of trying to resolve some "emergency", then it has happened one time too many. Customers don't care that you weren't thoughtful enough to keep a mirror of everything you were using.
I remember my first brush with this when Maven first started getting popular. Apache had to move their servers to a new datacenter, and the machine holding the drives for the main Maven repo was lost in transit. It took five days to bring everything back up, and during that whole time nobody could build anything unless they had local copies.
Ever since then, I've been pathological about keeping a local mirror of all my dependencies.
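For Ruby projects, one low-ceremony version of this (a sketch; it assumes you already run a local gem server, e.g. geminabox, at the placeholder address below) is Bundler's mirror setting:

    # Route all fetches for rubygems.org through a local mirror.
    bundle config mirror.https://rubygems.org http://localhost:9292

    # Optionally fall back to the canonical source if the mirror
    # doesn't answer within 3 seconds.
    bundle config mirror.https://rubygems.org.fallback_timeout 3

That way rubygems.org going away (or your certs going stale) stops being a deploy-blocking event.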
I'm pretty sure that wasn't the point - however infrequently it occurs, it violates the principle that only changes introduced since the previous build could have made it fail.
In practical terms, this means that you never know whether change X is faulty or simply the world is temporarily faulty.
(Of course, this may or may not be useful for you to know.)
True, but a build could fail for any number of unexpected reasons. Pragmatism is important; I'd avoid adding an inconvenient step to my build/deploy process just to mitigate an infrequent edge case.
Maybe my brain is wired differently, but having to tell a new hire "Oh yeah, sometimes the build randomly fails and we just press retry until it works" would feel kind of embarrassing to me.
That's not what I said. The point is that, for how rarely it happens in practice, one might as well say it never happens. I've only had it happen once in recent memory, and even then it was because the hosted CI we use was experiencing issues.
This happens very infrequently in my experience, but it's simple on most CI servers to restart a build.
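If even the manual restart grates, the flaky network-bound step can be wrapped in a retry. A rough sketch (the command, attempt count, and delay are all placeholders):

    # Retry a network-dependent step a few times before failing the build.
    n=0
    until bundle install; do
      n=$((n + 1))
      [ "$n" -ge 3 ] && { echo "giving up after $n attempts" >&2; exit 1; }
      echo "attempt $n failed; retrying in 15s..." >&2
      sleep 15
    done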