well yes, that's how we should navigate societal change: based on actual threats, not what-ifs. What-ifs gave us some real pieces of work, legislation-wise, like the DMCA, so yeah, I'm going to be extremely cautious about anything that is emotionally charged instead of data-driven.
Are you adjusting your perception of the problem based on fear of a possible solution?
Anyway, our society has fucktons of protections against "what ifs" that are extremely good, actually. We haven't needed a real large-scale anthrax attack to understand that we should regulate anthrax as if it's capable of producing a large-scale attack, correct?
You'll need a better model than just asserting your prior conclusions by classifying problems into "actual threats" and "what ifs."
I mean, digital privacy was not a what-if when the DMCA was written; it and its problems existed long before then. You're conflating that with business-written legislation, which is a totally different problem.
Also, I guess you're perfectly fine with me developing self-replicating gray nanogoo. I mean, I haven't actually created it and eaten the earth, so we can't make laws about self-replicating nanogoo, I guess.
Yes, please go ahead. We already have laws against endangerment, just as we have laws against fraud and laws around copyright infringement. No need to cover every what-if, as I mentioned, unless unwanted behaviour falls between the cracks of the existing frameworks.