I can't comment on which encryption schemes are still strong, but one advantage to encrypting everyday traffic is strength in numbers. Right now, using Tor or strong encryption is a bright beacon saying "this person has something to hide!" and the NSA et al. home in by default.
If, however, using Tor and strong encryption were the norm, it would be easier for those who need it - whistleblowers, dissidents, etc. - to hide beneath the regular noise.
edit: keep getting flagged for spam! Perhaps I have the wrong keywords?
This is why I heretically believe that bad crypto may in some sense be preferable to no crypto, unless you really have something to hide (eg. you're a defense contractor or an opponent of a tyrannical government).
That won't work because Tor is too slow. I'd use it only if I were a terrorist - it's too slow even for pedophiles. If we want people to use crypto en masse, we have to give them something faster.
Perhaps it's been a while since you've last used Tor, but at least in my experience, the speed really isn't that bad, especially for tasks that aren't time critical. Speeds of a hundred or several hundred KB/s are not uncommon. The latency is a couple of seconds at most, in most cases. You won't be streaming in HD over Tor, sure, but fetching your email over Tor is not significantly more tiresome than over a direct connection. There is usually no noticeable lag for instant messaging either. YouTube video streams in the Tor browser also work with no delay most of the time.
The Internet services who are complaining should start coding, instead, or in addition to complaining. Secure key exchange, secure real time communication, secure storage, and secure email payload would blind the surveillance state. All surveillance states.
Even if you think your own surveillance state is less than harmful, there are dozens of others which are patently evil or thoroughly corrupt. And yet, while we do business all over the planet, even business cloud services are almost all unencrypted.
Meeting this moral obligation is something Yahoo, Google, and others could make much easier. At some point, we have to ask why not?
Google has done more to securely encrypt Internet traffic than any other company in the world. Among other things, they are the pioneering standard-bearer for ECC forward secrecy and for certificate pinning, the two most important Internet encryption advances in the last 10 years.
Those things are very significant, much-needed improvements, but would you agree that we need more user-controlled, end-to-end crypto? I'm talking about things like ZRTP here. I'm not expecting solutions like that to come from Google, because it has a vested interest in mining your data. Possibly even more so than the NSA does.
You wrote "secure key exchange, secure real time communication, secure storage, and secure email payload would blind the surveillance state". Now I'm not sure we're working from the same definitions of those terms.
If you think Google, or any other consumer Internet service, has already secured those things, then evidently we're not working from the same definitions. What I have in mind is end-to-end encryption with no provisions for surveillance of cleartext, with or without a warrant. As I wrote earlier in this thread, even if you think our law enforcement can be trusted, there are plenty of jurisdictions where that is not the case at all.
I think I understand. Any acknowledgement that Google has done more to secure Internet traffic for normal users than any other company would require you to concede something, and thus feel bad.
Google has done an enormous amount to improve and deploy what Ben Adida called "b2c crypto" -- typically turning it on months to years ahead of its main competitors, and actively supporting work on making it stronger. But they've done almost nothing to encourage the use of "p2p crypto" in Adida's sense, at least not as a product feature.
I support the widespread routine use of p2p crypto, but I think Google deserves credit for what it has done. That includes making b2c crypto "routine" for most products, which does have direct effects on dragnet surveillance.
Of course Google deserves praise for that and 97 other things. But it is completely beside the point of the article posted here. It does not really have an effect on dragnet surveillance, because Google can simply be ordered to let the government in.
There is some value there, in that foreign governments with their own NSA wannabes could be thwarted.
The problem is that Google would probably be shut down if it implemented security that cannot be circumvented even with a warrant, because it would be breaking the law.
There are many laws and long-standing rulings which force companies/people/.. to cooperate with law enforcement agencies. So, as has been stated many times before, the solution for these things is political, not technical. Law always trumps programming, and law is influenced by politics.
I see how you're attempting to pass off the obvious propaganda that Google and the NSA are separate entities, when Snowden's revelations clearly suggest they are one and the same.
Safari can't verify the identity of the website "blog.easydns.org".
The certificate for this website is invalid. You might be connecting to a website that is pretending to be "blog.easydns.org", which could put your confidential information at risk. Would you like to connect to the website anyway?
HTTPS encryption is virtually useless without the identity-verification part. Anyone can run a valid HTTPS server with a self-generated public key. Anyone could then mount a MITM attack, and without the identity bit, you're just as compromised.
If we had dropped the identity bit, every ISP would be running a MITM proxy, because they want control. Already, plenty of businesses enable poor hygiene by deploying transparent Squid proxies that strip SSL.
This is claimed every time the topic is brought up, and like the self-signed certificate scare-box in Firefox, it does a significant amount of harm by forcing a choice between all or nothing. Which means many times, "nothing" will be chosen.
Encryption without authentication is still incredibly useful, and something that we've NEEDED to have happen pretty much everywhere. Some of the benefits are:
1) The cost of an attack is significantly larger, because passive eavesdropping is no longer enough. You can capture massive numbers of passwords trivially by simply logging an unencrypted stream, but a full MitM requires more time, effort, and resources to accomplish.
2) MitM attacks can sometimes be detected, either at the time of the attack or later, because repeating the attack creates repeated opportunities for discovery. A passive scanner is impossible to discover, in most cases.
3) Any use encourages a culture of encryption, so the people who ARE authenticating properly aren't as obvious.
4) The cost of moving from a self-signed key to one signed by a 3rd party is small ("get your cert signed"). This makes the upgrade path to full, properly authenticated crypto much smaller.
All of these benefits are worthwhile, so please, stop encouraging the use of plaintext. The proper way of handling this is not to scare people away like Firefox currently does, but to encrypt everything automagically, so it works just like unencrypted HTTP, and only show the lock icon if full authentication has happened. That is, encrypting without auth should be used, but still presented to the user as a plain, unencrypted page.
I was familiar with the reasons why using self-signed SSL didn't buy you anything, but hadn't really put together how it would completely weaken the whole ecosystem until now. "Oh, that's just Verizon MITMing me like always..."
Self-signed certificates are perfectly useful. Weaker security is still better than no security at all. For example, even unauthenticated, SSL protects you from log analysis after the fact. MITM is not the only threat model.
If it's a personal page (say, a public-facing login to your NAS) you can always check the SSL fingerprint too. You don't have to remember the whole thing, just remember a few characters and check that they're right.
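If you'd rather not eyeball the browser dialog, the fingerprint can also be computed with a few lines of Python. This is a rough sketch (the function names are mine, not from any standard tool); it fetches the cert without validating it and hashes the DER bytes, which is exactly what the browser shows you:

```python
import hashlib
import ssl

def format_fingerprint(hex_digest):
    # Render a hex digest as the colon-separated pairs browsers display.
    return ":".join(hex_digest[i:i + 2].upper()
                    for i in range(0, len(hex_digest), 2))

def cert_fingerprint(host, port=443):
    # Fetch the server's certificate WITHOUT validating it, then hash
    # the DER bytes. Compare the result against the fingerprint you
    # wrote down when you set the server up.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return format_fingerprint(hashlib.sha256(der).hexdigest())
```

Then a call like `cert_fingerprint("nas.example.org")` lets you spot-check those few characters from a machine you trust.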
Sure, but browsers are perfectly right to throw up a giant red warning that says "THIS IS NOT OK". Otherwise MITMing will become commonplace, and users won't care.
The man in the middle can do the key exchange with you. (You seem to be assuming that you start with a connection that you know to be reaching the desired endpoint and that you can start the key exchange there. You can't. The MITM intercepts the initial connection and does the key exchange with you and then turns around and initiates a secure connection with the eventual endpoint. The endpoint sees valid, encrypted traffic and you see valid, encrypted traffic, but the MITM attacker has the full plaintext communication.)
That's why you need the identity portion to be tied to the key exchange.
Even without the CA system, SSL (and TLS) is secure against _passive_ attacks; that is, attacks where the eavesdropper cannot alter the data stream. This is still reasonable for many simple links, but not for the open Internet.
If you've got no verification of the opposite party, literally any computer could be on the other end. The key exchange mechanism is secure, certainly, but any computer can execute it. The attacker can even show you the real website just proxied through, but that's plenty enough to get your password or any other access that the attacker wants.
A "passive MITM" attack isn't MITM any more - it's just interception. MITM means that the man in the middle is able to control the data stream that both ends see.
Adding a single root to locally installed copies of a web browser is only really useful for 2 things: testing an SSL configuration during development, and deploying your own CA to all computers on an intranet so you can MITM their traffic without throwing up warnings.
For a bit of fun, compare the warnings about a self-signed certificate in Firefox with those triggered by attempting to download a self-signed certificate with the CA:true bit set.
A 3rd thing, which I've done, is for a private site to collaborate with a small group of people I know in real life. You give them a hardcopy of your CA cert and then they can verify the details when adding it to their browsers.
As long as you're careful to really keep your private keys private, the root of trust gets shifted from a hierarchy of corporations to individuals you know and (personally) trust. I like this "real trust" model a lot more than the current one of a central authority and "being told who to trust", and I think all the advocates of "if you want to use HTTPS you should buy a cert from a CA" are missing this point.
When you hit an SSL site, the remote server (the one you're browsing) presents a number of certificates: one for the actual secure domain, and one or more for the CA that signed that certificate (sometimes more than one, because CAs have intermediate keys - a key which signed a key which signed... etc.).
The CA itself is not a party to this exchange, they only provide the end product - a signed certificate.
If you trust the CA (in your browser config), you also trust by extension every certificate that CA has ever signed (barring revocation lists).
So this single root ploy is what large companies do to MITM their employees at work, I assume.
I'll have to investigate what you suggest I do for fun, because I haven't tinkered with these certs before. What flags do you pass to openssl?
The certificates do specify the issuer though. So you could use a self run CA in the manual arrangement to verify that your connection isn't being MITM'd, assuming you don't lose control of your self run CA, or leak its private key. Correct me if I am wrong.
I know I've heard about this somewhere before, but it seems to me somewhat a hassle to set up MITM attacks if you already have admin access over all the machines. Just put in a browser plugin that logs everything and lock down the machines so they can't be messed with by non-admins. Corporate employees are generally not allowed to admin their own machines or expect any privacy on them - so there's no point in hiding the fact that you log everything.
If you're just trying to do a MITM for fun, though, there's nothing magical about the CA or cert. Once you configure the browser to trust a CA, then any cert signed with it will also be trusted by the browser. So if you are also running DNS on the network then you can re-route facebook.com or any domain to point to your own proxy server and serve up your fake cert. The user will see the nice green lock and everything. You could probably even name your CA and certs so that it looked pretty much exactly like a user would see it on the real facebook.com.
The reason why this isn't happening all over the place is just that you already need root access to the machine to configure the trusted CAs. If someone has the ability to add a CA, I would consider that a fully owned machine.
I'm pretty sure it happens; Trustwave apparently issued a cert for these purposes, and it's claimed other CAs have done the same. It's a hassle to do, but the goal is to detect and prevent corporate espionage. Most corporates have their own OS media - installing from outside sources without adding the required corporate security software is forbidden.
I assume this is how the certs get on to the machines. A browser plugin would be more transparent, possibly defeating the purpose.
I remember reading about that Trustwave incident; that is definitely messed up!
I was just really trying to say that adding a rogue CA to the browser trust list vs installing a plugin both require admin permission and are both "noticeable." So neither of them are really ideal for serious espionage. In which case they're only good for non-secret employee monitoring. So, in that case, might as well go with a plugin because it would be the simpler solution.
If you're talking about a compromised "root" CA like Trustwave, or something where a stock browser will trust fake certs - now you're talking about a technique suitable for espionage or black-hat activities.
A browser plugin is really, really obvious, whereas if you take a look at the CAs in Firefox - there are hundreds. All you need is one subtly different from what's expected - barely noticeable.
I think you'll find the above product interesting. Apparently anti-virus vendors have similar programs - to prevent malware being downloaded over https behind a corporate proxy.
It seems that CDNs such as CloudFlare and Akamai take the websites' SSL _private_ keys too.
This blog post is a fancy way of saying that CloudFlare content-serving customers now have the option of encrypting the link between CloudFlare and themselves. Note that users can still be MITM'd at the CloudFlare site, even with the new arrangement.
Moxie introduced http://www.convergence.io/ a while ago now (https://www.youtube.com/watch?v=8N4sb-SEpcg), which could offer significant advantages over the PKI if it became widely adopted. Although it's not an end-all solution to identity, it's a step in the right direction (using web-of-trust ideas).
So convergence.io is only available over HTTP, yet it offers the plugin for download. It seems an ironically bad practice to install security software over an untrusted connection. Meanwhile, https://convergence.io yields a certificate for whispersystems.org and is in fact the https://whispersystems.org/ main page (nothing to do with convergence.io). There appears to be no secure way to obtain Moxie's plugin (it is not available from mozilla.org, though a fork is available, based on this repo: https://github.com/mk-fg/convergence). My understanding is that more recently, Moxie has been focusing on http://tack.io which offers improved security without overturning the CA model.
I'm not sure what would be more ironic, this, or it delivering a "secure" download over the protocol it intends to replace for not providing sufficient security guarantees. The other irony is that a self-signed cert is basically as good as any CA issued cert with Convergence on, if only there were a secure way to bootstrap it. Perhaps a secure way to obtain it would be to email Moxie using GPG and ask him to send you a copy, but as you suggest, it appears to be unmaintained aside from the fork anyway, so it isn't much more than research material for now. I hadn't heard of Tack yet, but that also appears to have a lack of activity.
Unfortunately this project seems to have stalled; according to https://github.com/moxie0/Convergence the code hasn't seen significant updates since 2011.
@hyaline9, you're shadow-banned for some reason, people without showdead on will not see your replies, and we can't reply to you. I'd probably make a new account given that it was only created recently.
On Perspectives: Convergence is actually based on Perspectives, with some improvements regarding privacy and latency (which are discussed in the YouTube video). Are you aware of whether any of those improvements have been adopted into Perspectives?
Contrary to the Wikipedia page, there's a potential role for CAs. As an example, a CA could still sell a certificate authorising a key to, say, serve HTTPS data for the site foo.com; that key could then delegate authority for bar.foo.com, for www.foo.com and whatever else, without needing to go back cap-in-hand to the original CA.
Among the cool things is that a CA trusted to vouch for people serving data in .com wouldn't necessarily be trusted to serve data for .co.uk. One might have CAs vouching for one's ownership of IP addresses.
All of this was simple and straightforward, with a clean model (unlike the XPKI mess which conflates identity and authorisation), so of course it failed utterly.
OK. I'm prepared to agree with the headline in principle.
However, here's the deal/problem:
I am willing to encrypt outgoing mail only in cases where I can tell that the recipient is capable of decrypting it (with zero friction at any stage).
It's (still) more important to me that my e-mail is read by the recipient, than that it's not read by any other party.
I'm working on this small project; it might be less friction, though probably not zero. I'd be interested if you had any thoughts on how to improve it. It uses GPG for end-to-end encryption.
Transfer between servers is over SMTP, which is not encrypted by default and almost always subject to degradation attacks (the MITM can claim not to support STARTTLS). In any case, the user cannot verify or demand that STARTTLS be used, and many hosts don't support it at all. Meanwhile, PGP does not require any trust of intermediate entities, but relatively few people you might want to email have PGP keys, and even those with keys might not have them on the device from which they intend to read your email. S/MIME is in a similar position to PGP, with a CA trust model instead of the Web of Trust.
Note that this is indeed almost always, rather than actually always. My mail server is configured to refuse to send to Google (and a few other hand-picked sites) unless protected by SSL. The converse is, unfortunately, not yet true.
DANE[0] should allow me to put DNSSEC secured records in place to tell Google (et al) that they should always use SSL to talk to me too, but I don't have DNSSEC support for my domains yet and my version of Postfix isn't new enough to be able to use it for outgoing mail either.
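For reference, a DANE TLSA record for the SMTP port is published under a name derived from the port and host; a hypothetical record (host and digest are placeholders) might look like:

```
; "3 1 1" = DANE-EE (pin this end-entity cert), SubjectPublicKeyInfo, SHA-256
_25._tcp.mail.example.com. IN TLSA 3 1 1 0d6fce3bb8e59a69...
```

With DNSSEC signing the record, a sending server that supports DANE can refuse to deliver unless the TLS certificate matches the pinned digest.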
More frustratingly, many popular email providers (for example, BT) don't support SSL at all. That lowers the bar for eavesdropping from active MITM to passive listening.
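A quick way to check whether a given provider's mail host even advertises STARTTLS is a few lines of Python's smtplib. A sketch (the parsing helper shows what we look for in the EHLO response; note that an active MITM can strip this very capability, which is the degradation attack described above):

```python
import smtplib

def ehlo_advertises_starttls(ehlo_lines):
    # The EHLO response lists one extension keyword per line;
    # STARTTLS must appear there before TLS can be negotiated.
    return any(line.split()[0].upper() == "STARTTLS"
               for line in ehlo_lines if line.strip())

def supports_starttls(host, port=25, timeout=10):
    # Connect in the clear and ask the server what it supports.
    # A "False" here means eavesdroppers see every message in plaintext.
    with smtplib.SMTP(host, port, timeout=timeout) as smtp:
        smtp.ehlo()
        return smtp.has_extn("starttls")
```

Running `supports_starttls` against a provider's MX host makes the "passive listening" bar concrete: if STARTTLS isn't offered, no negotiation ever happens.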
> But the idea that we are somehow "out of reach of the NSA" is definitely not one of them. Sure, we're not actively collaborating with them, as many US businesses are, but as we've said before: we just assume the pipes going into and out of our major network exchange points are being vacuumed en masse.
Maybe easydns isn't, but the Telcos are definitely collaborating. We've had intercept equipment directly under the control of CSIS installed in major datacenters since the early 2000s. (I really do mean CSIS, not CSEC.)
I've seen it myself and I have multiple sources with direct, first hand knowledge of it.
None of them are interested in coming forward though, and I have no proof to offer myself.
What would necessitate cooperation of easydns anyway? They can't possibly get transit or peer with anyone of significance in Canada that doesn't have the surveillance equipment installed, so I don't see why any of the spooks would bother contacting them.
If encrypting our communications were a moral imperative, wouldn't someone have suggested our telephone calls or postal mail be encrypted some time in the past 40 years? We've had the capability to do so as private citizens for years, but who the hell cares? And it's not like your bank is going to start sending your bank statements in the mail with a one-time pad.
If we didn't have to think about crypto, everyone would be using it. It is just not easy to ensure the secrecy and integrity of all your data. And Joe Blow does not really care so much about his security or integrity to go out of his way to use more crypto.
The title certainly makes an emotional appeal to me. However I could not find justification/explanation of any moral obligation in the text.
I could understand a moral obligation to fight unjust surveillance, but that is not what was presented. Why am I morally obligated to increase the cost of surveillance? If society accepts the unjust surveillance the only consequences of increasing surveillance costs are economic waste and most likely justifications for new encroachments on personal liberty.
I understand it as this: your moral obligation as a technical person is to protect your less computer-savvy users/kinfolk from being unjustly spied on.
By increasing the cost of surveillance you force intelligence agencies to make a choice: given widespread enough encryption, they would have to decrypt only the real threats and not ordinary citizens.
There is no reason to spy on everybody unless you want to model your nation's citizens so as to be able to influence/control them.
It will become, if it is not already, a practical obligation. Why should anyone in the rest of the world trust American products that enable "justified" surveillance?
I am not really sure what any of your comment means and/or why you directed it to me? Is the practical obligation the same thing as financial incentive? And what does "enable 'justified' surveillance" mean? Existing in the world enables surveillance. I have no problem with just and legal surveillance, my problem is with unjust surveillance.
You have a moral obligation to make end-to-end encryption, authentication, key exchange and validation and so on so easy to use a fucking moron could manage it, or my Mother.
Then everybody implements crypto, and then we get another 20 articles on Hacker News about how "your crypto is wrong and broken and you're a terrible person!".
I'd be interested in joining the fight to hide information from the NSA if it didn't seem functionally impossible for folks who haven't made it into a career.
I have a web-app project for which encryption would be ideal.
However, I'm already having a hard time finding an encryption-capable database.
PostgreSQL has pgcrypto, but it looks like an afterthought of a module.
Right now the best solution I can find is each user gets an encrypted SQLite database on my server.
But what happens when 2 or more users need to share data that are in their respective databases ?
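One common pattern for exactly that sharing problem is envelope encryption: encrypt each shared record once under a random data key, then wrap that data key separately for every user allowed to read it. The sketch below shows only the structure - the XOR "cipher" is a deliberate placeholder (real code should use an AEAD cipher such as AES-GCM via a proper library), and all names are illustrative:

```python
import secrets

def xor_bytes(a, b):
    # Placeholder "cipher" to show the key-wrapping structure only --
    # substitute a real AEAD cipher (e.g. AES-GCM) in practice.
    return bytes(x ^ y for x, y in zip(a, b))

# Encrypt each shared record once, under its own random data key.
record = b"shared secret data"
data_key = secrets.token_bytes(len(record))
ciphertext = xor_bytes(record, data_key)

# Wrap the data key separately for every user allowed to read the
# record; sharing with a new user wraps one key, not the whole record.
user_keys = {"alice": secrets.token_bytes(len(data_key)),
             "bob": secrets.token_bytes(len(data_key))}
wrapped = {user: xor_bytes(data_key, key) for user, key in user_keys.items()}

# Bob recovers the data key from his wrapped copy, then decrypts.
bobs_data_key = xor_bytes(wrapped["bob"], user_keys["bob"])
assert xor_bytes(ciphertext, bobs_data_key) == record
```

The per-user databases then only need to hold each user's wrapped keys; the shared ciphertext can live in one place without either user ever handing over their own key.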
I would probably self-identify as a libertarian (although I would be eager to qualify that), and I am very tired of statements about moral obligations.
Yes. I can not imagine any other conception of moral obligation. How would you come up with an obligation to do X or not do Y without an underlying belief structure?
I was asking about the second half of the sentence.
(But to answer your question in terms of what I am getting at, you would work backwards from what you were comfortable with and then tell yourself that it was your underlying belief structure, whether that were meaningfully true or not)
I see. I guess I took it for granted that everyone understood moral obligation to mean a commandment to act a certain way due to moral principles. Depending on how you define "comfortable with" we might be using different language to talk about the same thing. I am uncomfortable with immoral behavior? I do not think you are advancing moral relativism/nihilism?
I do not think most people use moral reasoning to guide (even semi-consistently) their actions. However, I assumed that anyone who says "You have a moral obligation to do X" was speaking in a philosophical context and not using it as a euphemism for "Crypto is /<oo1."
Well, someone could be discussing in good faith, articulate a principle that they believe represents their morality, reason out a consequence of that principle and then observably act in a contradictory manner.
(now that I have written that, I guess part of what I am getting at is that the more often that actually happens, the more you have to discount the articulation of moral principles rather than discounting the people or the good faith. Personally, I'm pretty optimistic about people and pretty cynical about the things they say...)
'crypto', because typing 'cryptography' is too hard. This and 'cyber' grind my gears. Especially when politicians/opinion-makers throw the terms around without really understanding what they mean.