
For anyone wondering, this was supposed to be the release where Apple could scan your photos for child abuse. This was delayed for this release: https://www.techradar.com/news/apple-delays-child-abuse-phot...


Run an on-device scan against a hash database. Using a technology shown to have very frequent collisions.

And then they notify law enforcement if they get a hit. Which means even if you're innocent - all your devices get confiscated for months, you probably rack up tens of thousands of dollars in legal fees, maybe lose your job, probably lose your friends and get the boot from any social organizations or groups.
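To make the "frequent collisions" point concrete, here's a minimal sketch (Python, with a placeholder hash function; not Apple's code) of what matching an image's hash against a database looks like. The concern is that perceptual hashes of visually unrelated images can collide, and a collision is indistinguishable from a real hit:

    # Minimal sketch of on-device hash matching (illustrative only).
    import hashlib

    KNOWN_BAD_HASHES = {
        0x1F3A5C7E9B2D4F60,  # made-up 64-bit values standing in for the real database
        0x8C0D12E34F56789A,
    }

    def perceptual_hash(image_bytes: bytes) -> int:
        # Placeholder: a real perceptual hash (NeuralHash, say) maps visually
        # similar images to the same value, which is also why visually
        # unrelated images can be made to collide.
        return int.from_bytes(hashlib.sha256(image_bytes).digest()[:8], "big")

    def is_flagged(image_bytes: bytes) -> bool:
        return perceptual_hash(image_bytes) in KNOWN_BAD_HASHES

A single hit here counts against the account, which is why deliberately crafted collisions are so worrying.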

They're waiting for two things.

One, for the CSAM scanning story to drop out of the news cycle and the furor among users to die down. This is standard corporate PR "emergency" management practice.

Two, to slide it into a point release after some minor, inconsequential change to say they "listened to users." iPhones with auto-updates enabled won't automatically upgrade to a new major release, but they will happily automatically upgrade to a point release.

You can of course upgrade to iOS 15 and turn off auto-updates, but then you won't get security updates, like the people staying on iOS 14.

Stay on iOS 14 until Apple surrenders completely on this.


> Run an on-device scan against a hash database. Using a technology shown to have very frequent collisions.

Google and Microsoft have been scanning everything in your account against a hash database for the past decade.

Also, unlike Apple's system, which doesn't even notify Apple of the first 30 positive results (to protect you from the inevitable false positives), Google and Microsoft offer users no such protection.

>then they notify law enforcement if they get a hit. Which means even if you're innocent - all your devices get confiscated for months, you probably rack up tens of thousands of dollars in legal fees, maybe lose your job, probably lose your friends and get the boot from any social organizations or groups.

Again, Google and Microsoft have already been doing this for the past decade.

>a man [was] arrested on child pornography charges, after Google tipped off authorities about illegal images found in the Houston suspect's Gmail account

https://techcrunch.com/2014/08/06/why-the-gmail-scan-that-le...


Scanning content on a gmail account is not even remotely anything like scanning my device.


Scanning content on-server means that a single false positive is sitting there, ready to be maliciously misused by any prosecutor who cares to issue a dragnet warrant.

These sorts of dragnet warrants have become increasingly common.

>Google says geofence warrants make up one-quarter of all US demands

https://techcrunch.com/2021/08/19/google-geofence-warrants/

It's not like we haven't seen Google's on-server data hoards misused to falsely accuse users before.

>Innocent man, 23, sues Arizona police for $1.5million after being arrested for MURDER and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime

https://www.dailymail.co.uk/news/article-7897319/Police-arre...

Apple's system is designed to protect you from being associated with false positives, until that threshold of 30 matches is reached. Even then, the next step is to have a human review the data.

Google has never been willing to hire human beings to supervise the decisions an algorithm makes.


While I’m generally unhappy that Google never supervises its AI moderation systems, in this case it’s a criminal matter.

Our police and prosecution ought to be enough review on its own. If our own elected government fails to do something so simple, I say fix the government. I don’t want to be forced to rely on the goodwill of a for-profit company.


>Our police and prosecution ought to be enough review on its own.

They are not.

>Innocent man, 23, sues Arizona police for $1.5million after being arrested for MURDER and jailed for six days when Google's GPS tracker wrongly placed him at the scene of the 2018 crime

https://www.dailymail.co.uk/news/article-7897319/Police-arre...


When I said ought… I meant it in the prescriptive sense, not the descriptive.

The government should be held to a high standard, and when it fails we, the people, should fix it and not turn to private companies and ask why they didn’t step up to the plate.


The photos are only scanned just before being uploaded to iCloud. If you have CSAM on your phone, just turn off iCloud sync.


Great advice and great job repeating the manipulative framing of “if you’re not a pedophile, you have nothing to fear.”

Also, if you have anything that may be matched by unknowable and unverifiable matching hashes and algorithms provided by multiple nation states now or ever in the future, including but not limited to political activists, protests, anti-animal-abuse activists, climate activists, and select ethnicities, or copyright violations of any kind… switch off iCloud sync.

Until that switch gets ignored.

This cannot and will not be limited to CSAM. The matching is much more complicated than “hashes of existing images.”

Here’s a good in-depth interview on the tech and the issues.

https://m.youtube.com/watch?v=9ZZ5erGSKgs


>framing of “if you’re not a pedophile, you have nothing to fear.”

I read it more as "if you don't like it, use another cloud storage solution"


It was a joke.

If you don't trust the switch, then why trust any switch on iOS? Why do you trust that they're not already doing it?

Why are you using an iPhone?


Also, if you plan on burying a body in the woods, make sure you turn on Airplane Mode first.


If it's so easy to bypass then it sounds like it's a useless feature that won't prevent the proliferation of CSAM


Apple is worried about content uploaded to their iCloud.

As there is no clear legislation, every company is implementing what they feel comfortable with.


Some think they wanted to add this so that they could enable E2E encryption for iCloud at a later date.


Apple also has been doing this for photos uploaded to iCloud as they are not currently encrypted.


> Apple also has been doing this for photos uploaded to iCloud as they are not currently encrypted.

Nope. Google and Microsoft have been scanning your entire account for the past decade. Apple has not.

>TechCrunch: Most other cloud providers have been scanning for CSAM for some time now. Apple has not. Obviously there are no current regulations that say that you must seek it out on your servers, but there is some roiling regulation in the EU and other countries. Is that the impetus for this? Basically, why now?

Erik Neuenschwander: Why now comes down to the fact that we’ve now got the technology that can balance strong child safety and user privacy. This is an area we’ve been looking at for some time, including current state of the art techniques which mostly involves scanning through entire contents of users’ libraries on cloud services that — as you point out — isn’t something that we’ve ever done

https://techcrunch.com/2021/08/10/interview-apples-head-of-p...


If they get 30 (?) hits then they review the data and then they refer it to law enforcement if the reviewers determine that they were CSAM images. It's not for a single collision and it's not immediately referred to law enforcement. There are still major risks and concerns with this model, but at least describe it correctly.
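For a rough sense of what a threshold buys, here's some back-of-the-envelope arithmetic. The per-image false-positive rate below is a made-up assumption for illustration (Apple's own published claim was on the order of one in a trillion per account per year), and it assumes matches are independent:

    # Back-of-the-envelope only; the per-image rate is assumed, not published.
    import math

    p = 1e-6          # assumed chance that any one photo falsely matches the database
    library = 10_000  # photos uploaded from one account

    # Chance of at least ONE false match somewhere in the library:
    p_one = 1 - (1 - p) ** library
    print(f"P(>=1 false match)  ~ {p_one:.4f}")   # about 1% under these assumptions

    # Chance of reaching a 30-match threshold purely by accident
    # (Poisson approximation to the binomial tail):
    lam = p * library
    p_thirty = sum(math.exp(-lam) * lam**k / math.factorial(k) for k in range(30, 60))
    print(f"P(>=30 false matches) ~ {p_thirty:.1e}")  # vanishingly small under these assumptions

Whether that justifies the rest of the design is a separate question, but a single match and thirty matches are very different events statistically.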


Why should technology so bad it needs thirty mulligans have the power to completely destroy your life?

And exactly how are they obligated to keep those policies? Answer: they aren't. There isn't some law saying '30 hits before we report you', and Apple is certainly going to drop the number as the public gets more used to the idea of CSAM. They'll keep dropping it until the news articles start coming out about how it's destroying lives.

This is corporate law enforcement. You don't have a right to due process, any say in their policies, or protection via any sort of oversight.


Again, Google and Microsoft have already been scanning everything on your account for the past decade without any such protections against incriminating users based on false positives.


I don’t think that Google’s & Microsoft’s behavior should justify new behavior. Or, to adapt the adage: do three wrongs make a right?


Again: scanning my cloud account is not the same thing as scanning my --private devices-- which have no apple cloud sync enabled.

Which part of "what I store on my device is not even remotely the business of the manufacturer of the device" do you not understand?


If you have no sync enabled, they say they're not scanning it. The scan is part of the upload process.


Account is not the same as your device


Apple isn't scanning the contents of your device. They are discussing scanning content you upload to iCloud Photos.

So, yes. It's the same.


"Before an image is stored in iCloud Photos, the following on-device matching process is performed for that image against the blinded hash table database."

https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...

Apple disagrees with you.


Did you just post a quote proving that only images uploaded to iCloud Photos are scanned while claiming that all images on the device are scanned?


> so bad it needs thirty mulligans

idk, Hash collisions?


How do they review the data if your iCloud photos are E2E encrypted?


I'm guessing you haven't been following the issue.

More details in [1], but briefly:

They hash the images that you're uploading to iCloud. If an image matches one of the hashes in the database, then it gets encrypted and transmitted to them. No single data packet can be decrypted; they need 30 (?) matches with that database in order to get a decryption key that then allows them to review the uploaded images. They don't send the actual images to the reviewers; the images are altered in some way. At that point the reviewer will have 30 (?) thumbnails (?) to review. If the images look like CSAM, then they'll report it to NCMEC, who then report it to law enforcement (NCMEC is not, itself, a law enforcement agency).

The ? are because I don't think they've publicly stated (or I've not read) what the threshold for decryption is or how they modify the images that get sent to the reviewers.

[0] https://www.apple.com/child-safety/

[1] https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
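The technical summary in [1] describes this as a threshold secret sharing scheme. A toy sketch of the underlying idea (Shamir secret sharing over a prime field, in Python; nothing like Apple's actual construction) shows why fewer than the threshold number of matches reveals nothing:

    # Toy Shamir secret sharing to illustrate the threshold property.
    import secrets

    PRIME = 2**127 - 1  # a Mersenne prime large enough for a 16-byte key

    def make_shares(secret: int, t: int, n: int) -> list[tuple[int, int]]:
        """Split `secret` into n shares, any t of which recover it."""
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
        def f(x: int) -> int:
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares: list[tuple[int, int]]) -> int:
        """Lagrange interpolation at x = 0 over the prime field."""
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    key = secrets.randbelow(PRIME)           # stands in for the per-account key
    shares = make_shares(key, t=30, n=100)   # one share rides along with each match
    assert recover(shares[:30]) == key       # 30 shares: key recovered
    assert recover(shares[:29]) != key       # 29 shares: garbage (with overwhelming probability)

If the scheme works as described, the per-match vouchers really are useless to the server until the threshold share arrives; it isn't just a policy promise.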


> I'm guessing you haven't been following the issue.

Not as closely as some people. That's why I asked the question in the first place. But thanks for answering.


Let the reader understand.

(i.e. They're encrypted in transfer and while stored, but Apple holds the keys: https://qr.ae/pGSHY8, https://manuals.info.apple.com/MANUALS/1000/MA1902/en_US/app... [search for 'iCloud'])


For the lazy:

> Each file is broken into chunks and encrypted by iCloud using AES128 and a key derived from each chunk’s contents, with the keys using SHA256. The keys and the file’s metadata are stored by Apple in the user’s iCloud account. The encrypted chunks of the file are stored, without any user-identifying information or the keys, using both Apple and third party storage services—such as Amazon Web Services or Google Cloud Platform—but these partners don’t have the keys to decrypt the user’s data stored on their servers.

As far as I can tell, they don't say anything specific about where or how Apple stores the keys and metadata, so it should be assumed that Apple could decrypt your photos if they wanted to.
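To make the quoted paragraph concrete, here's a rough sketch of content-derived-key ("convergent") chunk encryption in Python. The chunk size, the SHA-256 truncation, and the use of AES-GCM are my assumptions; the document only says AES-128 with keys derived using SHA-256:

    # Rough sketch of the chunked, content-keyed encryption the quote describes.
    # Requires the third-party "cryptography" package (pip install cryptography).
    import hashlib
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    CHUNK_SIZE = 4 * 1024 * 1024  # assumed chunk size; not documented

    def encrypt_chunks(data: bytes):
        for offset in range(0, len(data), CHUNK_SIZE):
            chunk = data[offset:offset + CHUNK_SIZE]
            key = hashlib.sha256(chunk).digest()[:16]  # AES-128 key derived from the chunk itself
            nonce = os.urandom(12)
            ciphertext = AESGCM(key).encrypt(nonce, chunk, None)
            # The keys and metadata stay with Apple; the storage partner
            # (AWS/GCP) only ever receives the ciphertext.
            yield key, nonce, ciphertext

Which is exactly the point: the partner clouds can't read the photos, but whoever holds the key-and-metadata store, i.e. Apple, can.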


End-to-end encryption prevents a third party from reading your content, but if you are getting your encryption software from the same people that are storing your encrypted data, the only thing stopping them from reading your data is corporate policy.

Which is fine, because I use iCloud and many other cloud services, but you have to acknowledge the fact.


iCloud photos are not E2E encrypted. Apple has announced no plans whatsoever to make them such. Apple, their sysadmins, and the government can see every photo you have in iCloud.

Apple had plans (and, an inside source tells me, an implementation) to do E2E for iCloud Backup, but the FBI asked them not to, so they scrapped it:

https://www.reuters.com/article/us-apple-fbi-icloud-exclusiv...

This undermines the credibility of those who are claiming, without evidence, that this clientside CSAM scanning is a prelude to launching E2E for iCloud data.


Okay, so basically they are just sort of pinky-swearing that your iCloud photos are encrypted on iCloud, but not in any way that prevents Apple or the government from decrypting them anyway.

This raises the followup question of "why bother scanning the images on-device?", but I can infer two fairly obvious answers. First, the encryption still keeps AWS/Azure/GCP from seeing my photos. Second, and more cynically, they'd have to pay to do computation in the cloud; on-device computation is free to them.

> This undermines the credibility of those who are claiming, without evidence, that this clientside CSAM scanning is a prelude to launching E2E for iCloud data.

I agree; this is consistent with my initial point of confusion. Thanks!


> Okay, so basically they are just sort of pinky-swearing that your iCloud photos are encrypted on iCloud, but not in any way that prevents Apple or the government from decrypting them anyway.

How do you imagine that Google and Microsoft are able to scan the entire contents of your account? They can all read the data on their servers.

>This raises the followup question of "why bother scanning the images on-device?

Because running the scan on device and encrypting the results protects users from having their account associated with the inevitable false positives that are going to crop up.

Apple can't decrypt the scan results your device produces until the threshold of 30 matching images is reached.

If someone issues a warrant to Apple for every account that has a single match, they can honestly report that they don't have that information.

Google and Microsoft give you no such protection. Any data held on their server is wide open for misuse by anyone who can issue a warrant.


I’m not arguing with anyone or expressing any opinion.


If their end goal is to go to full E2E encryption for iCloud Backup, but they first have to be able to prove to the FBI that they are doing "due diligence" to meet warrant needs, then of course device-side CSAM scanning is a prelude to being able to turn on E2E for iCloud data!

The fact that the FBI stopped them once before, and that they've since been building active solutions to what the FBI tells them its needs are, should be evidence alone that E2E is their goal. It seems pretty credible to me.


They're only encrypted when on iCloud, not when on your device. The hashes are computed on your device.


So how do they capture the unencrypted images from my device for "review"?


That's why the CSAM scanner is on your device. It computes the hashes in place, on the still-unencrypted images, before uploading encrypted copies to iCloud.

That's why, from some perspectives, it's a net privacy win versus Google's and Microsoft's similar tools, which require them to hold decryption backdoor keys on their clouds in order to process these CSAM requests and other FBI/TLA/et al. warrants. Apple is saying it doesn't have backdoor keys on iCloud at all, and that if it is forced to do CSAM scanning, the scanning has to happen on device, so the unencrypted images never leave the device. Only if you hit the reporting threshold (supposedly 30+ hash matches) would it also encrypt copies into a reporting database on iCloud (and again, only if you were uploading those photos to iCloud in the first place).
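As I read it, the flow is roughly this (illustrative Python with stand-in names; obviously not Apple's code):

    # Toy sketch of the upload-gated flow described above.
    import hashlib

    BLINDED_HASH_TABLE = {"a" * 64}  # stand-in for the blinded CSAM hash table

    def neural_hash(photo: bytes) -> str:
        # Placeholder: the real NeuralHash is a perceptual hash, not SHA-256.
        return hashlib.sha256(photo).hexdigest()

    def send_to_icloud(photo: bytes, voucher: dict) -> None:
        print(f"uploading {len(photo)} bytes, voucher={voucher}")

    def upload_photo(photo: bytes, icloud_photos_enabled: bool) -> None:
        if not icloud_photos_enabled:
            return  # sync off: nothing is hashed, nothing leaves the device

        matched = neural_hash(photo) in BLINDED_HASH_TABLE
        # Real safety vouchers are encrypted so the server can't read the
        # match bit until the reporting threshold is crossed.
        voucher = {"matched": matched}
        send_to_icloud(photo, voucher)

    upload_photo(b"example photo bytes", icloud_photos_enabled=True)
    upload_photo(b"example photo bytes", icloud_photos_enabled=False)  # no-op

The gate really is just a branch in the upload path, nothing more.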


So, again, how do they review those images? Does Apple have the key to the reporting database?


Yes, that would be why it sends copies, encrypted with a key different from the users' own storage keys.


So in other words, the only thing stopping Apple from viewing my iCloud photos is an if statement.


That would also be true for any use of iCloud Photos, no? If you don't trust this, then you also can't trust them to be storing them encrypted on their servers.


>Stay on iOS 14 until Apple surrenders completely on this

Or vote with your wallet and abandon the Apple ecosystem. But nobody will because they don't have the bollocks.


More that I need a smartphone for work and life, but there are no viable alternatives to iOS and Android, and I trust Apple slightly more than Google not to spy on me and use it against me.


On many Android devices you can flash an AOSP ROM, which is a very viable alternative.


They're not a very viable alternative unless you plan on never trying to run any major/popular apps, because they all use Google Play Services APIs/libraries/toolboxes.

Install Google Play Services and Google gets whatever info they want from your phone.


Literally this


> Two, to slide it into a point release after some minor, inconsequential change to say they "listened to users."

I doubt it will happen. Apple is not known for that sort of interaction. Whatever happens will happen silently, without Apple admitting that it bent to any backlash.

Also, the pressure to implement device scanning is coming from governments, so it is naive to think Apple will ever surrender. Most probably, in the near future every single electronic device will try to leak your data as much as it physically can.


> Apple is not known for that sort of interaction.

Apple was not known to err on the side of "think of the children" or "let's help catch criminals" instead of personal privacy. But now they're known for new things.


I wonder if upgrading to iOS 15 will increase the chance of receiving this spyware when they do roll it out.

I mean, a 15.X to 15.Y update will likely occur automatically while the phone is connected to WiFi and charging, but 14 to 15 should require user approval, meaning we should be safe as long as we never upgrade past 14?


Either turn off automatic updates or don't. You're overanalyzing this.


They're not "overanalyzing" it. Turning off automatic updates means you miss security updates. Point upgrades are automatic; full version upgrades aren't. Apple is clearly going to backdoor this in a 15.1 or 15.2 release, which means you then can't get any security updates, and your only option is to go back to a backup of your device from iOS 14.


Apple will allow you to stay on iOS 14 and still receive automatic security updates.

https://arstechnica.com/gadgets/2021/09/psa-you-dont-have-to...


I think switching off automatic updates and running a few months behind is the safest plan. There are risks around not getting security updates as fast, but they are probably not large for any individual user.

I’m hoping they’ll realise that they confused privacy and trust and get back on track soon enough.


Considering that the algorithm was reverse engineered from iOS 14 (it is already in the code running on all those devices), there seems to be a possibility that a security update could bring it online on 14 as well. Just my speculation, but it seems plausible.


I honestly doubt it. They will likely roll this out to both latest versions. I think they said as much.


If you look at it as reducing their liability for hosting CSAM, then more likely it’ll become a requirement at some point in order to upload your photos to iCloud at all, no matter which version of iOS you’re on.


Or just don't use iCloud Photos, since the local device scanning for CSAM is limited to the Photos app and only runs prior to upload to the iCloud photo library, which is easy to turn off.

It's also not too difficult to have your unencrypted photos synced to Google Photos, Dropbox, OneDrive, or another provider as an alternative. They will scan your photos in the cloud, which people on this site seem to have a vastly stronger preference for. If you don't trust any of those, then you're probably already using Nextcloud or something like it.


> Or just don't use iCloud photos since the local device scanning for CSAM is limited to the Photos app and only scans prior to upload to iCloud photo

For now. It will spread.


Turn off uploading photos to iCloud, and then if they start rolling it out, disable automatic updates.


Yes, if you agree to the iCloud terms, as far as I can tell.


I smashed the iPhone I had into pieces, and I'm wondering what to do with my Mac. Maybe install some Linux or something, but I don't really know much about that! It'll take me a couple months of reading on it.. I am still using Mojave anyway.


That's... overly dramatic.


Crushing a device is a normal response from someone who needs to stop hidden device tracking and cannot afford to possibly get it wrong and have some tracking slip through.


Will never be as dramatic as spying over hundreds of millions of sheeple!!


Use the pieces of the iPhone to smash the Mac


It's also worth noting that iOS 14 is supposed to get security updates even after iOS 15 is released, so if you care about that kind of stuff it's probably better not to upgrade.

edit: more info in sibling comment https://news.ycombinator.com/item?id=28596442


Did they remove the code? Or just comment it out or default it to off for now, with the code still there and ready to scan?


I would assume no. People decompile binaries all the time and would likely catch it. It also could introduce dependencies and bugs that would require QA work and dev work.


I mean, they just announced it is delayed. Didn't they have it enabled in the beta for testing? It could cause more problems if you remove it completely in a rush.



