Eh, I disagree - your definition feels like moving the goalposts.
Apple is under no obligation to host offending content. Check it before it goes in (akin to a security checkpoint in real life, I guess) and then let me move on with my life, knowing it can't be arbitrarily vended out to some third party.
Any image that would trigger _for this hashing aspect_ would already trigger _if you uploaded it to iCloud where they currently scan it already_. Literally nothing changes for my life, and it opens up a pathway to encrypting iCloud contents.
Feel free to correct me if I'm wrong, but this is a method for decrypting _only if the image matches an already known or flagged item_. It's not enabling decryption of arbitrary payloads.
From your link:
>In particular, the server learns the associated payload data for matching images, but learns nothing for non-matching images.
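To make that property concrete, here's a toy sketch (my own illustration, not Apple's actual PSI protocol, which uses blinded hash databases and elliptic-curve crypto): the client encrypts the payload under a key derived from the image's hash, so the server can only recover it for hashes it already has in its flagged set. All names and the XOR "cipher" here are mine and deliberately simplistic:

```python
import hashlib

MAGIC = b"OK:"  # redundancy so the server can tell decryption succeeded

def _keystream(key: bytes, n: int) -> bytes:
    """Toy keystream built from SHA-256 (illustration only, NOT secure)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def client_voucher(image_hash: bytes, payload: bytes) -> bytes:
    """Client side: encrypt the payload under a key derived from the image hash."""
    key = hashlib.sha256(b"kdf|" + image_hash).digest()
    return _xor(MAGIC + payload, key)

def server_open(known_hashes, voucher):
    """Server side: learns the payload only for hashes already in its set."""
    for h in known_hashes:
        key = hashlib.sha256(b"kdf|" + h).digest()
        pt = _xor(voucher, key)
        if pt.startswith(MAGIC):
            return pt[len(MAGIC):]
    return None  # non-matching image: the server learns nothing

flagged = {b"hash-of-known-flagged-image"}
v_match = client_voucher(b"hash-of-known-flagged-image", b"metadata")
v_miss = client_voucher(b"hash-of-vacation-photo", b"metadata")
print(server_open(flagged, v_match))  # b'metadata'
print(server_open(flagged, v_miss))   # None
```

The point of the sketch is just the asymmetry: a matching voucher opens, a non-matching one is indistinguishable from random bytes to the server, which lines up with the quoted sentence above.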
Past this point I'll defer to actual cryptographers (who I'm sure will dissect and write about it), but to me this feels like a decently smart way to go about this.