Run an on-device scan against a hash database. Using a technology shown to have very frequent collisions.
And then they notify law enforcement if they get a hit. Which means even if you're innocent - all your devices get confiscated for months, you probably rack up tens of thousands of dollars in legal fees, maybe lose your job, probably lose your friends and get the boot from any social organizations or groups.
They're waiting for two things.
One, for the CSAM story to get out of the news cycle and the furor among users to die down. This is standard corporate PR "emergency" management practice.
Two, to slide it into a point release after some minor, inconsequential change to say they "listened to users." iPhones with auto-updates enabled won't automatically upgrade to a new major release, but they will happily automatically upgrade to a point release.
You can of course upgrade to iOS 15 and turn off auto-updates, but then you won't get security updates, like the people staying on iOS 14.
Stay on iOS 14 until Apple surrenders completely on this.
> Run an on-device scan against a hash database. Using a technology shown to have very frequent collisions.
Google and Microsoft have been scanning everything in your account against a hash database for the past decade.
Also, unlike Apple's system, which doesn't even notify Apple of any positive results until the 30-match threshold is reached (to protect you from the inevitable false positives), Google and Microsoft offer users no such protection.
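For contrast, the server-side approach boils down to something like the sketch below. This is a hedged illustration only: real providers use perceptual hashes such as PhotoDNA rather than plain SHA-256, and every name in it is made up.

```swift
import CryptoKit
import Foundation

// Hypothetical database of known-bad hashes (hex-encoded SHA-256 digests).
// Real providers use perceptual hashes such as PhotoDNA, not plain SHA-256.
var knownBadHashes: Set<String> = []

// Placeholder for whatever review/reporting pipeline a provider runs.
func flagAccountForReview(_ account: String, matchedHash: String) {
    print("flagging \(account) for review, matched \(matchedHash)")
}

// Server-side check: every uploaded file is hashed and looked up directly,
// so the provider learns about every single match the moment it happens.
func scanUpload(_ fileData: Data, account: String) {
    let digest = SHA256.hash(data: fileData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    if knownBadHashes.contains(hex) {
        flagAccountForReview(account, matchedHash: hex)
    }
}
```

The point is that the lookup happens on plaintext the provider already holds, so a single false positive immediately ties your account to a match.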
>then they notify law enforcement if they get a hit. Which means even if you're innocent - all your devices get confiscated for months, you probably rack up tens of thousands of dollars in legal fees, maybe lose your job, probably lose your friends and get the boot from any social organizations or groups.
Again, Google and Microsoft have already been doing this for the past decade.
>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account
Scanning content on-server means that a single false positive is sitting there, ready to be maliciously misused by any prosecutor who cares to issue a dragnet warrant.
These sorts of dragnet warrants have become increasingly common.
>Google says geofence warrants make up one-quarter of all US demands
It's not like we haven't seen Google's on-server data hoards misused to falsely accuse users before.
>Innocent man, 23, sues Arizona police for $1.5million after being arrested for MURDER and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime
Apple's system is designed to protect you from being associated with false positives, until that threshold of 30 matches is reached. Even then, the next step is to have a human review the data.
Google has never been willing to hire human beings to supervise the decisions an algorithm makes.
While I’m generally unhappy that Google never supervises its AI moderation systems, in this case it’s a criminal matter.
Our police and prosecution ought to be enough review on its own. If our own elected government fails to do something so simple, I say fix the government. I don’t want to be forced to rely on the goodwill of a for-profit company.
>Our police and prosecution ought to be enough review on its own.
They are not.
>Innocent man, 23, sues Arizona police for $1.5million after being arrested for MURDER and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime
When I said ought… I meant it in the prescriptive sense, not the descriptive.
The government should be held to a high standard, and when it fails we, the people, should fix it and not turn to private companies and ask why they didn’t step up to the plate.
Great advice and great job repeating the manipulative framing of “if you’re not a pedophile, you have nothing to fear.”
Also, if you have anything that might be matched by unknowable and unverifiable hashes and matching algorithms provided by multiple nation states, now or ever in the future (including but not limited to material tied to political activism, protests, anti-animal-abuse activism, climate activism, or select ethnicities, or copyright violations of any kind)… switch off iCloud sync.
Until that switch gets ignored.
This cannot and will not be limited to CSAM. The matching is much more complicated than “hashes of existing images.”
Here’s a good in-depth interview on the tech and the issues.
> Apple also has been doing this for photos uploaded to iCloud as they are not currently encrypted.
Nope. Google and Microsoft have been scanning your entire account for the past decade. Apple has not.
>TechCrunch: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?
Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state of the art techniques which mostly involves scanning through entire contents of users’ libraries on cloud services that — as you point out — isn’t something that we’ve ever done
If they get 30 (?) hits then they review the data and then they refer it to law enforcement if the reviewers determine that they were CSAM images. It's not for a single collision and it's not immediately referred to law enforcement. There are still major risks and concerns with this model, but at least describe it correctly.
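For what it's worth, the decision logic being described is roughly the sketch below. The 30-image threshold and the human-review step come from Apple's public description; the structure and names are mine, and the cryptographic reason nothing can even be opened below the threshold is sketched further down the thread.

```swift
// Illustrative decision logic only; in the real design the match data is
// cryptographically unreadable to Apple until the threshold is crossed.
let reportingThreshold = 30   // Apple's stated (approximate) value

struct Account {
    var matchCount = 0
}

func handleMatch(on account: inout Account,
                 reviewerConfirmsCSAM: () -> Bool) -> String {
    account.matchCount += 1
    guard account.matchCount >= reportingThreshold else {
        return "below threshold: nothing decrypted, nothing referred"
    }
    // Only past the threshold do human reviewers see the visual derivatives.
    return reviewerConfirmsCSAM()
        ? "reported to NCMEC (which is not itself law enforcement)"
        : "dismissed after human review"
}
```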
Why should technology so bad it needs thirty mulligans have the power to completely destroy your life?
And exactly how are they obligated to keep those policies? Answer: they aren't. There isn't some law saying '30 hits before we report you', and Apple is certainly going to drop the number as the public gets more used to the idea of CSAM scanning. They'll keep dropping it until the news articles start coming out about how it's destroying lives.
This is corporate law enforcement. You don't have a right to due process, any say in their policies, or protection via any sort of oversight.
Again, Google and Microsoft have already been scanning everything on your account for the past decade without any such protections against incriminating users based on false positives.
"Before an image is stored in iCloud Photos, the following on-device matching process is performed for that
image against the blinded hash table database."
I'm guessing you haven't been following the issue.
More details in [1], but briefly:
They hash the images that you're uploading to iCloud. If an image matches one of the hashes in the database, then it gets encrypted and transmitted to them. No single data packet can be decrypted; they need 30 (?) matches with that database in order to get a decryption key that then allows them to review the uploaded images. They don't send the actual images to the reviewers; the images are altered in some way. At that point the reviewer will have 30 (?) thumbnails (?) to review. If the images look like CSAM, then they'll report it to NCMEC, which then reports it to law enforcement (NCMEC is not, itself, a law enforcement agency).
The ? are because I don't think they've publicly stated (or I've not read) what the threshold for decryption is or how they modify the images that get sent to the reviewers.
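The "can't decrypt anything until there are ~30 matches" part rests on threshold secret sharing, which Apple's technical summary calls out by name. Here's a toy Shamir-style sketch of the mechanism, with a threshold of 3 and a small prime field purely for readability; it illustrates the math, not Apple's actual voucher format.

```swift
import Foundation

// Toy Shamir-style threshold secret sharing over GF(p). The idea: each match
// contributes one "share" of an account-level secret (standing in for the
// decryption key), and the secret only becomes recoverable once the
// threshold number of shares exists.
let p: Int64 = 2_147_483_647      // prime modulus; real schemes use far larger fields
let threshold = 3                 // Apple's stated value is around 30

func mod(_ a: Int64) -> Int64 { ((a % p) + p) % p }

// a^e mod p by square-and-multiply; used for modular inverses via Fermat.
func modPow(_ base: Int64, _ exp: Int64) -> Int64 {
    var result: Int64 = 1, b = mod(base), e = exp
    while e > 0 {
        if e & 1 == 1 { result = mod(result * b) }
        b = mod(b * b)
        e >>= 1
    }
    return result
}
func modInv(_ a: Int64) -> Int64 { modPow(a, p - 2) }

// Split a secret into shares: evaluate a random degree-(threshold-1) polynomial.
func makeShares(secret: Int64, count: Int) -> [(x: Int64, y: Int64)] {
    let coeffs = [secret] + (1..<threshold).map { _ in Int64.random(in: 1..<p) }
    return (1...count).map { i -> (x: Int64, y: Int64) in
        let x = Int64(i)
        var y: Int64 = 0
        var xPow: Int64 = 1
        for c in coeffs {
            y = mod(y + mod(c * xPow))
            xPow = mod(xPow * x)
        }
        return (x: x, y: y)
    }
}

// Lagrange interpolation at x = 0 recovers the secret from any `threshold` shares.
func reconstruct(_ shares: [(x: Int64, y: Int64)]) -> Int64 {
    var secret: Int64 = 0
    for (i, si) in shares.enumerated() {
        var num: Int64 = 1
        var den: Int64 = 1
        for (j, sj) in shares.enumerated() where j != i {
            num = mod(num * mod(-sj.x))
            den = mod(den * mod(si.x - sj.x))
        }
        secret = mod(secret + mod(si.y * mod(num * modInv(den))))
    }
    return secret
}

let shares = makeShares(secret: 123_456, count: 5)    // five "matches"
print(reconstruct(Array(shares.prefix(threshold))))   // 123456: enough shares
// With fewer than `threshold` shares, every possible secret is equally likely.
```

Swap in a large field and a threshold of roughly 30 and you get the property described above: a handful of matches reveals nothing, but once the threshold is hit the secret, and with it the vouchers, becomes recoverable for human review.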
> Each file is broken into chunks and encrypted by iCloud using AES128 and a key derived from each chunk’s contents, with the keys using SHA256. The keys and the file’s metadata are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information or the keys, using both Apple and third party storage services—such as Amazon Web Services or Google Cloud Platform—but these partners don’t have the keys to decrypt the user’s data stored on their servers.
As far as I can tell, they don't say anything specific about where or how Apple stores the keys and metadata, so it should be assumed that Apple could decrypt your photos if they wanted to.
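A rough sketch of the per-chunk scheme that quote describes, assuming CryptoKit's AES-GCM as a stand-in for whatever AES-128 mode Apple actually uses and assuming the 128-bit key is a truncation of the SHA-256 digest (both are my guesses, not documented details):

```swift
import CryptoKit
import Foundation

// Convergent-style chunk encryption as the quoted doc describes it:
// each chunk's key is derived from that chunk's own contents.
struct EncryptedChunk {
    let ciphertext: Data   // stored with AWS/GCP, who never see the key
    let keyData: Data      // stored by Apple alongside the file's metadata
}

func encryptChunk(_ chunk: Data) throws -> EncryptedChunk {
    // Derive a 128-bit key from the chunk contents (assumed SHA-256 truncation).
    let digest = SHA256.hash(data: chunk)
    let keyData = Data(digest.prefix(16))
    let key = SymmetricKey(data: keyData)
    let sealed = try AES.GCM.seal(chunk, using: key)
    // .combined is non-nil for the default 12-byte nonce.
    return EncryptedChunk(ciphertext: sealed.combined!, keyData: keyData)
}

func decryptChunk(_ stored: EncryptedChunk) throws -> Data {
    // Whoever holds the per-chunk keys (per the quote, Apple) can decrypt.
    let box = try AES.GCM.SealedBox(combined: stored.ciphertext)
    return try AES.GCM.open(box, using: SymmetricKey(data: stored.keyData))
}
```

Whoever stores those per-chunk keys next to the metadata can run decryptChunk at will, which is exactly the "Apple could decrypt your photos if they wanted to" point.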
End-to-end encryption prevents a third party from reading your content, but if you are getting your encryption software from the same people that are storing your encrypted data, the only thing stopping them from reading your data is corporate policy.
Which is fine, because I use iCloud and many other cloud services, but you have to acknowledge the fact.
iCloud photos are not E2E encrypted. Apple has announced no plans whatsoever to make them such. Apple, their sysadmins, and the government can see every photo you have in iCloud.
Apple had plans (and, an inside source tells me, an implementation) to do E2E for iCloud Backup, but the FBI asked them not to, so they scrapped it:
This undermines the credibility of those who are claiming, without evidence, that this clientside CSAM scanning is a prelude to launching E2E for iCloud data.
Okay, so basically they are just sort of pinky-swearing that your iCloud photos are encrypted on iCloud, but not in any way that prevents Apple or the government from decrypting them anyway.
This raises the followup question of "why bother scanning the images on-device?", but I can infer two fairly obvious answers. First, the encryption still keeps AWS/Azure/GCP from seeing my photos. Second, and more cynically, they'd have to pay to do computation in the cloud; on-device computation is free to them.
> This undermines the credibility of those who are claiming, without evidence, that this clientside CSAM scanning is a prelude to launching E2E for iCloud data.
I agree; this is consistent with my initial point of confusion. Thanks!
> Okay, so basically they are just sort of pinky-swearing that your iCloud photos are encrypted on iCloud, but not in any way that prevents Apple or the government from decrypting them anyway.
How do you imagine that Google and Microsoft are able to scan the entire contents of your account? They can all read the data on their servers.
>This raises the followup question of "why bother scanning the images on-device?"
Because running the scan on device and encrypting the results protects users from having their account associated with the inevitable false positives that are going to crop up.
Apple can't decrypt the scan results your device produces until the threshold of 30 matching images is reached.
If someone issues a warrant to Apple for every account that has a single match, they can honestly report that they don't have that information.
Google and Microsoft give you no such protection. Any data held on their server is wide open for misuse by anyone who can issue a warrant.
If their end goal is to go full E2E encryption for iCloud Backup, but they have to be able to prove to the FBI first that they are doing "due diligence" to meet warrant needs then of course device-side CSAM scanning is a prelude for being able to turn on E2E for iCloud data!
The fact that the FBI stopped them once before and they've been working to build active solutions to what the FBI tells them their needs are should be evidence alone that E2E is their goal. It seems pretty credible to me.
That's why the CSAM scanner is on your device. It computes the hashes in place on the then-unencrypted images before uploading encrypted copies to iCloud.
That's why, from some perspectives, it is a net privacy win versus Google's and Microsoft's similar tools, which require them to hold decryption backdoor keys on their clouds to process these CSAM requests and other FBI/TLA/et al. warrants. Apple is saying they don't have backdoor keys on iCloud at all, and that if they are forced to do CSAM scanning it has to happen on device, so the unencrypted images never leave the device. Only if you hit the reporting threshold (supposedly 30+ hash violations) would it also encrypt copies to a reporting database on iCloud (and again, only if you were uploading those photos to iCloud in the first place).
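To make the ordering concrete, here's a minimal sketch of that client-side pipeline. The perceptualHash placeholder stands in for NeuralHash, AES-GCM stands in for whatever encryption iCloud actually applies, and the voucher construction is left abstract; all of those are assumptions for illustration.

```swift
import CryptoKit
import Foundation

// Stand-in for NeuralHash: any function mapping image bytes to a fixed hash.
// (Plain SHA-256 here purely so the sketch compiles; NeuralHash is perceptual.)
func perceptualHash(_ image: Data) -> Data {
    Data(SHA256.hash(data: image))
}

struct Upload {
    let encryptedImage: Data   // what iCloud stores
    let safetyVoucher: Data    // match data Apple can only open past the threshold
}

func prepareForICloud(_ image: Data, accountKey: SymmetricKey,
                      makeVoucher: (Data) -> Data) throws -> Upload {
    // 1. Hash while the image is still plaintext, on the device.
    let hash = perceptualHash(image)
    // 2. Encrypt before anything leaves the device.
    let sealed = try AES.GCM.seal(image, using: accountKey)
    // 3. Ship ciphertext plus a voucher derived from the hash; the server
    //    never needs the unencrypted image to do the matching.
    return Upload(encryptedImage: sealed.combined!, safetyVoucher: makeVoucher(hash))
}
```

The hash is computed in step 1 while the image is still plaintext on the device, so the server side never needs a decryption backdoor to do the matching.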
That would also be true for any use of iCloud Photos, no? If you don't trust this, then you also can't trust them to be storing them encrypted on their servers.
More that I need a smartphone for work and life, but there are no viable alternatives to iOS and Android, and I trust Apple slightly more than Google not to spy on me and use it against me.
They're not a very viable alternative unless you plan on never trying to run any major/popular apps, because they all use Google Play Services APIs/libraries/toolboxes.
Install Google Play Services and Google gets whatever info they want from your phone.
> Two, to slide it into a point release after some minor, inconsequential change to say they "listened to users."
I doubt it will happen. Apple is not known for that sort of interaction. Whatever happens will happen silently, without Apple admitting to bending to any backlash.
Also, the pressure to implement device scanning is coming from governments, so it is naive to think Apple will ever surrender. Most probably, in the near future every single electronic device will try to leak your data as much as it physically can.
> Apple is not known for that sort of interaction.
Apple was not known to err on the side of "think of the children" or "let's help catch criminals" instead of personal privacy. But now they're known for new things.
I wonder if upgrading to iOS 15 will increase the chance of receiving this spyware when they do roll it out?
I mean 15.X to 15.Y updates will likely occur automatically while the phone is connected to WiFi and charging, but 14 to 15 should require user approval, meaning we should be safe as long as we never upgrade past 14?
They're not "overanalyzing" it. Turning off automatic updates means you miss security updates. Point upgrades are automatic, full versions aren't. Apple is clearly going to backdoor this in a 15.1 or 15.2 release, which means you then can't get any security updates and your only option is to go back to a backup of your device from iOS 14.
I think switching off automatic updates and running a few months behind is the safest plan. There are risks around not getting security updates as fast, but they are probably not large for any individual user.
I’m hoping they’ll realise that they confused privacy and trust and get back on track soon enough.
Considering that the algorithm was reverse engineered out of iOS 14 (it is already in the code running on all those devices), there seems to be a possibility that a security update could bring it online on 14 as well. Just my speculation, but it seems plausible.
If you look at it as reducing their liability for hosting CSAM, then more likely it’ll become a requirement at some point in order to upload your photos to iCloud at all, no matter which version of iOS you’re on.
Or just don't use iCloud Photos, since the local device scanning for CSAM is limited to the Photos app and only scans prior to upload to the iCloud photo library, which is easy to turn off.
It's also not too difficult to have your unencrypted photos synced to Google Photos, Dropbox, OneDrive, or another provider as an alternative. They will scan your photos in the cloud, which people on this site seem to have a much stronger preference for. If you don't trust any of those, then you're probably already using Nextcloud or something like it.
I smashed the iPhone I had into pieces, and I'm wondering what to do with my Mac. Maybe install some Linux or something, but I don't really know much about that! It'll take me a couple of months of reading on it. I am still using Mojave anyway.
Crushing a device is a normal response from someone who needs to stop hidden device tracking and cannot afford to possibly get it wrong and have some tracking slip through.
It's also worth noting that iOS 14 is supposed to get security updates even after iOS 15 is released, so if you care about that kind of thing it's probably better not to upgrade.
I would assume no. People decompile binaries all the time and would likely catch it. It also could introduce dependencies and bugs that would require QA work and dev work.
I mean, they just announced it is delayed; didn't they have it enabled in the beta for testing? It could cause more problems if they removed it completely in a rush.