I work for a large software company. I am not on our security team, but have worked with our security team to investigate and resolve reported issues. I agree that it should not be possible to fail this badly.
Even nonsensical reports to our security address will, with an SLA measured in hours, be read by a qualified human who will then reply to at least say "we're looking into it".
A credible report will be immediately escalated to someone with relevant domain expertise for investigation. The security engineer will attempt to reproduce.
A confirmed report will be escalated to an executive, who will determine urgency. For issues like this, where sensitive data is exposed, people will be woken up and several things will happen in parallel: the scope of the issue will be assessed, the root cause will be found, potential workarounds will be identified, a fix will be implemented, the potential existence of related issues will be investigated, and the reporter will be contacted to assess disclosure risk.
Even in the worst case, where a complicated vulnerability exists in multiple versions of multiple products, requiring multiple patches and backports and requiring coordinated disclosure with partners, I'd expect a fix to be in customers' hands within 14 days.
Yeah, this is particularly the case at Google and Facebook. If you submit a security report to Google or Facebook through their bug bounty programs and escalate it with a critical severity tag (whether or not that's justified), someone on the application security team will review it within an hour. I can say that from experience (on both sides). If it's a legitimate sev:critical vulnerability, a workaround will usually be in production within 24 hours.
I think the intrinsic failure here is that Apple is, more than any other FAANG-like tech company, fundamentally uninterested in vulnerabilities that don't represent root-capable jailbreak vectors. Or rather, they ostensibly care, but every single process is systematically designed to prioritize those vulnerabilities above everything else. Other types of vulnerabilities are treated as second-class citizens, so to speak. Apple does a lot of things right from a security perspective, but this really isn't one of them in my opinion.
This is very clear, despite corporate messaging, if you follow along with their bug bounty program. Consider that the bar for a bug bounty submission to Apple requires the vulnerability to be capable of compromising the device's sandbox or root privileges. This is explicit: a userland privacy bug is not sufficient. Furthermore, the bug bounty is strictly invite-only, and even some of the most accomplished and talented vulnerability researchers in the world are shut out of it: https://twitter.com/i41nbeer/status/1027339893335154688
More generally speaking, a reliable formula for putting a vulnerability in front of someone who is both qualified and paid to urgently care is the following:
1. Look up the security team at the company. Not security contact information, the team.
2. Identify individuals on that team by going through blog posts, conference talks, etc.
3. Find those people on Twitter. Tweet at several of them with the broad strokes: you have a vulnerability in X product, you need to securely report it, you believe it's N severity, how should you do it?
But of course, you shouldn't need to do this. You should be able to fire something off to security@ or, better yet, a bug bounty program.
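As an aside, RFC 9116 standardized a `/.well-known/security.txt` file so that reporters can discover a vendor's security contact without the detective work above. A minimal sketch of parsing one (the sample file contents are hypothetical, not any real company's):

```python
# Minimal parser for security.txt (RFC 9116), the standard file where a
# site publishes its vulnerability-reporting contact information.

def parse_security_txt(text: str) -> dict[str, list[str]]:
    """Collect field values from a security.txt body, ignoring comments."""
    fields: dict[str, list[str]] = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, sep, value = line.partition(":")
        if sep:
            # Field names are case-insensitive; values keep their order.
            fields.setdefault(name.strip().lower(), []).append(value.strip())
    return fields

# Hypothetical example file
sample = """\
# Example security.txt
Contact: mailto:security@example.com
Contact: https://example.com/bug-bounty
Expires: 2026-01-01T00:00:00Z
"""

info = parse_security_txt(sample)
print(info["contact"])  # reporting channels, in order of preference
```

In practice you'd fetch `https://<domain>/.well-known/security.txt` and fall back to the social-media approach only if it's missing, which at many companies it still is.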
The biggest problem isn't the time it took to fix the bug, whether that's harder (as on iOS) or easier (as on the backend). It's that Apple refused to listen to anything until the media broke the story, and then went into damage control.
How many times have we heard that security researchers or developers can't be bothered with Apple's black hole anymore and decide to publish their findings on Twitter?
This isn't the first time, and if nothing has changed this surely won't be the last.