Does it matter? Unless they're going to totally change the technology, I don't see how they can do anything but buy time until the algorithm is reverse engineered. After all, the code runs locally.
If Apple wants to defend this, they should explain how the system holds up even if generating adversarial images is trivial.
Apple has outlined[1] multiple levels of protection in place for this:
1. You have to reach a threshold of matches before your account is flagged.
2. Once the threshold is reached, the matched images are checked against a different perceptual hash algorithm on Apple servers. This means an adversarial image would have to trigger a collision on two distinct hashing algorithms.
3. If both hash algorithms show a match, then “visual derivatives” (low-resolution versions) of the images are inspected by human reviewers at Apple to confirm they are CSAM.
Only after all three criteria are met is your account disabled and a report filed with NCMEC. NCMEC will then do its own review of the flagged images and refer the case to law enforcement if necessary; a simplified sketch of how these checks compose follows.
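To make the layering concrete, here’s a minimal sketch in Swift of how the three checks compose. All the names here (PerceptualHash, ReviewPipeline, matchThreshold, the two databases) are hypothetical; Apple’s actual protocol uses private set intersection and threshold secret sharing so that nothing is revealed below the threshold, which this simplification deliberately omits:

    import Foundation

    // Hypothetical simplification of the pipeline described above. Apple's
    // real protocol uses private set intersection and threshold secret
    // sharing; this only models how the three checks compose.
    typealias PerceptualHash = (Data) -> UInt64

    struct ReviewPipeline {
        let onDeviceHash: PerceptualHash   // NeuralHash-like hash run on the device
        let serverHash: PerceptualHash     // independent hash run only on Apple servers
        let onDeviceDatabase: Set<UInt64>  // known-CSAM hashes shipped to devices (blinded in reality)
        let serverDatabase: Set<UInt64>    // second database that never leaves the server
        let matchThreshold: Int            // step 1: minimum number of matches

        // Returns true only if all three criteria from the list above hold.
        func shouldReport(images: [Data], humanReviewerConfirms: (Data) -> Bool) -> Bool {
            // Step 1: count on-device matches; below the threshold, stop.
            let matched = images.filter { onDeviceDatabase.contains(onDeviceHash($0)) }
            guard matched.count >= matchThreshold else { return false }

            // Step 2: re-check the matched images with an independent
            // server-side hash. An adversarial image would have to collide
            // on both algorithms to get this far.
            let doubleMatched = matched.filter { serverDatabase.contains(serverHash($0)) }
            guard !doubleMatched.isEmpty else { return false }

            // Step 3: a human inspects low-resolution visual derivatives of
            // the surviving images before any report is filed.
            return doubleMatched.contains { humanReviewerConfirms($0) }
        }
    }

The point of the ordering is that the most invasive step, human review, only ever runs on images that have already survived a volume threshold and two independent algorithmic checks.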
I don’t believe Apple has said whether the full-resolution images are sent in the initial referral to NCMEC, but law enforcement could easily get a warrant for them: iCloud Photos are encrypted at rest, but Apple holds the keys.
(Many have speculated that this on-device CSAM scanning is a precursor to Apple introducing full end-to-end encryption for all of iCloud. We’ll see.)