I've used TextSecure exclusively as my texting client for about the last 6 months, and it worked great except for 3 things: group messages and MMS. Something changed recently that prevented any MMS from showing up, and it kept offering to let me configure the proxy settings. I looked briefly for the correct settings, but there don't seem to be any for Verizon on their list. Group messages just didn't seem to work the way they do in the normal Android client. Texting worked flawlessly, though it's sort of a chicken-and-egg problem: no one I know used it, so none of my communication was encrypted in transit.
Oh and the third thing: my SO thought I was borderline paranoid/crazy/hiding something for even installing it.
Looks like we need to add some new MCC/MNC values to the defaults for that MMSC. Group messages use MMS, so hopefully this gets you down to one problem (paranoid). =)
I really appreciate the response, Moxie. I admire your work and your privacy advocacy and I downloaded TextSecure after seeing it here and associated with your name. I'll take another look and see if that link can solve my (technical) issues.
Just noting that when Google switched to using the Hangouts app for SMS/MMS messaging, it broke MMS for me, so that problem is hardly unique to TextSecure.
TextSecure crashed on receiving MMS on my Nexus 5 on T-Mobile's value plans (I mention their value plan because it works slightly differently than their regular ones). After I manually configured the MMSC it stopped crashing and displays MMS correctly.
This was my exact usage problem with it as well. It was seamless in receiving standard SMS, but MMS & group messages simply did not come through, though I would be alerted to receiving them.
I gave up on TextSecure because of the flaw. I'll pick it back up if there's a fix.
There was a period when Android made a sudden, unannounced change to an app's ability to access APN settings on the device, which prevented MMS from working.
We now prompt users to configure their MMSC for their carrier if APN details are not available from the device.
If you have problems like this in the future and would like to help the project get better, please file the bugs you encounter on the GitHub issue tracker so that we can get the information we need to fix them for everyone.
I am really looking forward to TextSecure for iOS. I hope I am wrong on this one, but from the text on their website, Heml.is doesn't seem too eager to open source their code after release.
I don't know any details about Whisper Systems (except that Moxie Marlinspike is with them), but I sure do hope they can provide a well-designed, completely open-source, cross-platform messaging app (which I don't think exists yet).
"We have all intentions of opening up the source as much as possible for scrutiny and help! What we really want people to understand however, is that Open Source in itself does not guarantee any privacy or safety. It sure helps with transparency, but technology by itself is not enough."
They have no intention of releasing the source code. Use https://www.surespot.me/ instead, it does the same stuff, already exists, and is released under the GPL (v3).
Surespot depends on a bunch of Google Play services and is officially distributed on Google Play. Is there a way to install a pre-compiled Surespot APK outside of Google Play, for those who don't install proprietary Google code on their phone? I noticed that the open-source Android APK repository F-Droid can't distribute it for this reason: https://f-droid.org/forums/topic/surespot-encrypted-messagin...
(Side note: Moxie prohibits TextSecure on F-Droid, as there is no forced auto-update like on Google Play. I currently have to download and compile the TextSecure source code myself, which is no biggie, but as a CM user, I'm definitely excited about this integration!)
Personally, I've used both, but settled on SureSpot for the moment. SureSpot uses data exclusively, which is cheaper than SMS for me. Although I understand that TextSecure now has (or will be getting soon) a data channel. So I'll definitely take another look.
Moxie has proven himself to be more than capable of building such a system, but the author of SureSpot seems more than competent too. See the section titled "Technical Overview" on:
Interesting fact: TextSecure wasn't made open source until it was bought by Twitter: https://dev.twitter.com/blog/whispers-are-true - IIRC, prior to this the website claimed it was open source, but offered no way of getting the source, and if you asked for it, you would find out it was only given to trusted third parties to perform security reviews.
Not having the source code sounds like a trap to me. Maybe it works for now, but rather than investing trust in things as they stand, I'd rather go with an open-source client.
Trust must be earned. So far they brag about the way they made the tech work with a patched version of Android, but they don't really put forth anything that gives them credibility as a very secure protocol.
People insist on looking at this through their default prism of "closed source bad, open source good". But people with crypto experience have other prisms; for instance, "competent, well-vetted crypto" versus "amateur enthusiast crypto". Sometimes open source is also competent and well-vetted, but vetting is expensive, and there is a lot of amateur crypto out there.
> Sometimes open source is also competent and well-vetted, but vetting is expensive, and there is a lot of amateur crypto out there.
You seem to be implying that only hobbyists write incompetent crypto software with little or no competent review, and that quality code reviews tend to require company resources.
Having crypto is often just an important checkbox tacked on for shipping a product, and usually no one in the product group is competent to analyze the security of the way the encryption was bolted on. If a few people in the larger company are competent, they will avoid reviewing these projects. Being the engineer everyone associates with delays and frustrations doesn't do much for you, and there will never be any proof of the costs you may have prevented.
The few implementations I have seen that are better than I know how to criticize have usually had considerable cross-company and university involvement. That usually means open source, or a lot of NDAs and complex license agreements for cross-organization code sharing.
I have no idea what you're trying to say here, but just a random stab at responding: my perspective in this discussion comes from managing a consulting practice that, among a few other things, specializes in assessing the security of cryptographic implementations.
I've been in a role of evaluating security vulnerabilities in security products and features from many different origins.
All I am saying is that I am in a position to estimate that roughly 9 out of 10 of everything I see critically exceeds the competence of its authors to safely combine features and security. So a primary explanation for failure that only applies to 40% (60%?) of the market doesn't sound right to me.
So either we disagree considerably on the proportion of software that is poorly implemented, or you are saying the majority of commercial software is also written by hobbyists?
I stifled the urge to say the same thing, but then realized that I'd lose the evening to defining what "mainstream" meant, after people dredged up random examples of snake oil from Schneier blog posts; not to mention the inevitable rehashing of the "beware custom algorithms and 390244 bit keys" thread, which is going to have to happen now because bringing up crypto truisms from the late-90s makes people feel smart.
You're being unfair and you know it. Lavabit, the RSA fiasco, Apple's iMessage crypto, etc. are all perfectly mainstream examples of closed-source crypto done wrong. As you said yourself, the only thing that conclusively makes a difference is whether the crypto is "well vetted"; having the source available is simply a means of making this easier. Classifying the quality of crypto implementations based on the source model alone ("The track record of open source cryptography is bad.") is just disingenuous.
No, I'm not being unfair. I don't think "open source" versus "closed source" has much at all to do with how secure a cryptosystem is. I do think having Trevor Perrin and Moxie Marlinspike working on your crypto design has a lot to do with how secure a cryptosystem is.
Yes, you are being unfair. You can't say "I don't think 'open source' versus 'closed source' has much at all to do with how secure a cryptosystem is" (somewhat agreed) while simultaneously saying "The track record of open source cryptography is bad" (utter nonsense and misleading), unless your point is that closed source cryptography has an equally bad record.
Please explain for those of us who are not good enough in the field (I'm genuinely asking).
I was under the impression that software like GnuPG and OpenSSL could be considered safe, so seeing a security professional warning about a negative track record of open source cryptography is worrisome.
What exactly should we be careful of when it comes to open source cryptography?
Not all open source code is broken; just a lot of it is. I think tptacek is trying to say that open source vs closed source is a mediocre predictor of the quality of a cryptosystem :)
The TextSecure server never has access to message bodies. However, the government could force the server admins to provide timestamps and metadata for all messages. The TextSecure server knows who* you talk to and when, but not the contents of the conversation.
* The who in this case is a hash of the recipient's phone number. I'm not sure how difficult it is to turn this value into a real phone number.
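It's worth noting that an unsalted hash of a phone number offers little protection, because the space of possible numbers is tiny by cryptographic standards (~10^10 for US numbers). A minimal sketch, assuming a plain SHA-1 hash (the actual hashing scheme used by the server may differ):

```python
import hashlib

def hash_number(number: str) -> str:
    """Hash a phone number the way a directory server might.
    (Assumed: plain unsalted SHA-1; the real scheme may differ.)"""
    return hashlib.sha1(number.encode()).hexdigest()

# An attacker who learns the hash can simply try every number in a
# region's dialing plan. US numbers have ~10^10 candidates, which is
# feasible on commodity hardware; here we search a toy range of 10^4.
target = hash_number("+15550003141")
recovered = next(
    n for i in range(10_000)
    if hash_number(n := f"+1555000{i:04d}") == target
)
print(recovered)  # the full phone number falls out of the search
```

So turning the hash back into a real phone number is essentially a small brute-force exercise, not a hard problem.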
Awesome, I've been waiting for this. Now CyanogenMod should be the most secure OS out there against snooping, even compared to Google's own Android. Too bad Google isn't taking steps to offer end-to-end encrypted communication for Android devices.
That's just an (egregious) indication of the poor security: even if it were fixed, I would have no confidence in the security of CM, barring drastic changes.
In case anyone else is wondering: if sender and recipient use CM or TextSecure, the encrypted messages are not sent via GSM SMS. The transport uses Protocol Buffers, HTTP, and Google Cloud Messaging/Apple Push Notifications.
This looks awesome and definitely makes me lean more towards an Android OS for a future phone.
This is not a dig, but because the SMS system is SO transparent a user may not be able to tell which of their messages / contacts allow encrypted traffic (based on the screenshot in the post). I might add a lock or some other mechanism to indicate which messages are secured.
They are planning on adding that soon (from the post):
We will also be adding some minimal visual feedback to the stock
CyanogenMod Messaging app to indicate when the user has an
expectation of privacy and when they don’t, but the base
experience won’t change at all.
This is truly great work by Moxie, CyanogenMod devs and everyone else who may have contributed to this project. Kudos guys/gals!
One important implementation detail question that comes to mind is "How does the system detect and fix the issue of key exchange errors?"
While using the TextSecure app from the Play store, I've twice experienced a situation where a key exchange had to be re-initiated manually after a friend and I got out of sync (he was receiving my messages garbled in TextSecure). I imagine it's possible for this to happen in the built-in CyanogenMod version, and I don't see any documentation specifically addressing it. Without visual notification of a "secured" connection, the user could end up inadvertently sending plain-text messages.
Can someone explain how the keying system works? What is the secret information a user needs to decrypt messages addressed to them? What prevents a 3rd party from decrypting those messages? What is the 'key'?
You generate a keypair. They generate a keypair. You swap public keys. Then you encrypt messages to each other using the other person's public key.
Of course, that is still vulnerable to an active MITM attack where somebody intercepts the initial key exchange and inserts their own keys. The app has a built-in option to display your fingerprints so you can compare them if you meet the other person.
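The exchange described above can be sketched with a toy Diffie-Hellman key agreement. To be clear about assumptions: TextSecure actually uses Curve25519 and a ratcheting protocol, and its fingerprint format differs; the tiny prime, generator, and fingerprint scheme below are purely illustrative.

```python
import hashlib
import secrets

# Toy Diffie-Hellman key agreement, illustrative only. Real clients use
# vetted elliptic-curve implementations, never hand-rolled parameters.
P = 2**127 - 1   # a Mersenne prime; far too small for real security
G = 5

def keypair():
    """Return a (private, public) pair: public = G^private mod P."""
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

def fingerprint(pub: int) -> str:
    """A short hash of a public key that two users can read aloud or
    compare in person to rule out a MITM (format assumed for demo)."""
    digest = hashlib.sha256(str(pub).encode()).hexdigest()
    return ":".join(digest[i:i + 4] for i in range(0, 16, 4))

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Each side combines its own private key with the other's public key
# and arrives at the same shared secret without ever transmitting it.
assert pow(bob_pub, alice_priv, P) == pow(alice_pub, bob_priv, P)
print(fingerprint(alice_pub))  # e.g. something like "3f2a:91cc:07de:b851"
```

An attacker who substitutes their own public keys during the exchange would produce different fingerprints on each end, which is exactly what the in-person comparison is meant to catch.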
Even with this vulnerability, imagine if everyone started using it overnight... All of a sudden there wouldn't be millions (billions?) of new plaintext private messages stored in a bunch of databases every day. The telcos aren't going to perform an active MITM attack to decrypt people's SMS.
If a lot of people start using this, I imagine some entities could easily force all the traffic through a system that handshakes with both sides and intercepts all the contents.
The first time you send an encrypted message keys are exchanged automatically. Thereafter it tells you if the second party's identity changes from the first message.
If you're thinking "But MitM!!!" then don't. The main weakness of this is actually the plausibility of losing your phone, and hence encryption keys. Unless perhaps they are stored on Google/WhisperSystems' servers. If not it would open you up to this weakness: "Hey, ignore the security warning for this text - I got a new phone so the keys changed. Remind me again what our secret terrorist plan is?"
This is how I always thought Google would eventually implement an iMessage-like protocol. By taking the last step before sending the SMS out, and checking to see if the recipient is also part of the service, and sending it over the service instead of through the open. Love it, just hope my HTC One S will still work with a nightly. ;P
Doesn't the fact that mobile phones have an extra closed-source baseband OS that can control the phone on a lower level than the secondary OS (Android) make any attempt at securing the secondary OS pointless? I mean, the baseband might have a keylogger and send all your data to your provider anyway...
I'm currently working on an application similar to this, but with a small physical device you plug into the bottom of your phone where all the encryption is done, so there is no central software to break into; it's all done physically, disconnected. You communicate the decryption keys to the parties in person. We want to make this a little device you can attach to a key chain and plug into the bottom of your phone whenever you need encryption. Our app interfaces with the dongle, and you can use it to encrypt/decrypt any files, really.
Is this a retarded idea or is there a use for this?
I don't know how it will do as a niche thing in enterprise, but I think it's very unlikely it will ever catch on with mainstream users. It just seems like too much of a hassle. Now turn that into an NFC ring of sorts that does the encryption for you, and you may be on to something, but even then it still seems like something only geeks would use. Heck, it's hard enough to get people to use software-based encryption solutions.
Someone can always hack the display driver - you have to display the messages to the user at some point, after all. Take a screenshot on whatever user event, encryption bypassed.
I've been thinking about encryption all the way up to the display module, though, meaning interception would have to be very close to the display hardware itself.
That's what I was thinking. We are going to ship two versions. One being a simple dongle where the application used to interface with it is the users phone.
We're also selling a premium package, where the device is much bigger but includes a physical display and keyboard, with a transfer mechanism of the final encrypted message.
We're marking up the higher-end model so we can fund the lower-end model: each purchase includes 2 replacements of the dongle, as the dongle will have a one-time authentication and will be locked to the device.
We're also trying to figure out how to have the device self-destruct if not used by any approved devices, meaning wipe itself clean and POSSIBLY break the hardware that does the processing/houses any data.
Print an overlay that acts as a visual password. You put it on the phone screen. So you can read what's written, but an attacker who captures only phone data sees only gibberish.
There are various practical issues. Maybe they can be overcome. If the overlay was generated on demand on the dongle, rather than printed, that could improve usability.
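The overlay idea is essentially a one-time pad split across two "shares". A minimal sketch of the principle (real visual cryptography uses subpixel patterns so the stacking happens optically; the XOR below just illustrates the math):

```python
import secrets

# The phone displays one random-looking share; the printed overlay is
# the other. Only combining (XOR-ing) them reveals the message.
message = [1, 0, 1, 1, 0, 0, 1, 0]                     # a tiny 1-bit "image"
overlay = [secrets.randbelow(2) for _ in message]      # printed share
on_screen = [m ^ o for m, o in zip(message, overlay)]  # displayed share

# The displayed share alone is uniformly random, so capturing the
# screen buffer reveals nothing. Stacking both shares recovers the
# message exactly.
recovered = [s ^ o for s, o in zip(on_screen, overlay)]
assert recovered == message
```

The catch, as noted, is key reuse: a printed overlay is a fixed pad, so displaying many messages under the same overlay leaks information, which is one reason generating the share on the dongle per message could help.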
Can anyone see this as viable? My co-founder and I are very, very passionate about this project, and the last thing we want to do is pour years into something that won't see the light of day!
I did not read anything about what current users may need to do. I've been using TextSecure for some time now and _just_ got a phone that was worthy of putting CyanogenMod on.
So I did.
Did my eyes gloss over the details or is there some method of importing current TS databases I may need to know about?