> I can't think of a great solution to this problem.
There's really only one "final solution" to the problem in the purely technical realm. That would be to make provable security (in the theorem-proving sense) a non-negotiable requirement for all digital logic (both hardware and software) running on networked devices. I don't know if there's even a workable definition that would rigorously describe the goal of such an effort.
... But I believe that if provable security were important enough to everyone (just like "winning the war" in the 1940s or "getting to the moon" in the 1960s), we might possibly achieve it -- at least below the OS syscall level in a few major OSs and in several important userland libraries.
However, that ignores the human element of security, which can never be completely solved by technical effort alone. People will always be vulnerable to social engineering, for example.
I think your solution needs to extend to the hardware components on the board.
High-security MCUs go to great lengths to defeat side-channel and physical attacks on the package (some really neat stuff too, like self-destructing if the die is exposed by shaving).
There are secure-bus initiatives, but they don't extend to every component on the BOM (bill of materials).
On top of that, GUI techniques for obscuring physical input (keyboards, UI touches) are needed.
Given Apple's posturing and patch release cadence, I think/feel they are on the side of privacy. Android too. We're on the right track, I wonder if eventually tech will win the arms race for exploits like this? (The rubber hose exploit will always work...)
If something can be created to be provably secure, then it can be an argument for government legislating a back door.
"You said it's provably secure. Now you can give us provably secure access too without hurting your customer's privacy or security, because they're protected by the 4th amendment."
I don't think this can be solved by technology, I think this comes down to politics of freedom, if you get right down to it. And it looks like you're going to have to have that fight anyway.
The goal of provably secure computing would be merely to (hopefully) extend the mathematical certainties of cryptography to computers and software. The politics of cryptography wouldn't change; they would only be broadened. Intentional back doors would still be possible, and the ramifications of building them would be just as dire.
So the best provable security could do would be to eliminate security holes like buffer overflow/etc. Trust issues (and even side-channel attacks) would still be present as always.
Well I did say "in the theorem-proving sense", meaning that the code undergoes formal verification. There are programming languages for which each function is a theorem that is proved at compile time. That's what I meant.
There are some low-level libraries that have already been partially converted to theorem-proved functions for the sake of security.
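To give a flavor of what "each function is a theorem proved at compile time" can look like in practice, here is a minimal sketch using Haskell's type-level features (length-indexed vectors). This is not a full theorem prover, and the `Vec`/`Fin` names are just the conventional illustrative ones; the point is that an out-of-bounds index is a type error, so the buffer-overflow class of bug is ruled out before the program ever runs.

```haskell
{-# LANGUAGE DataKinds, GADTs, KindSignatures #-}

-- Type-level natural numbers for tracking lengths at compile time.
data Nat = Z | S Nat

-- A vector whose length is part of its type.
data Vec (n :: Nat) a where
  VNil  :: Vec 'Z a
  VCons :: a -> Vec n a -> Vec ('S n) a

-- An index that is provably smaller than n: there is no value
-- of type Fin 'Z, so an empty vector cannot be indexed at all.
data Fin (n :: Nat) where
  FZ :: Fin ('S n)
  FS :: Fin n -> Fin ('S n)

-- Total, safe indexing: no bounds check is needed at runtime,
-- because the compiler has already proved the index is in range.
index :: Vec n a -> Fin n -> a
index (VCons x _)  FZ     = x
index (VCons _ xs) (FS i) = index xs i

main :: IO ()
main = do
  let v = VCons (10 :: Int) (VCons 20 VNil)
  print (index v FZ)        -- in-bounds access; compiles
  print (index v (FS FZ))
  -- index v (FS (FS FZ))   -- out of bounds: rejected at compile time
```

Dependently typed languages and verification tools (Coq, Lean, F*, and friends) push this much further, proving full functional correctness rather than just memory safety, but even this small amount of static checking eliminates the overflow-style holes mentioned above.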