What if? Well, then we'd pull out its power plug. Or build another AI and program it to be sympathetic to our needs. Or apply any of an infinite number of other possible solutions, assuming we had been idiotic enough not to implement appropriate safeguards in the first place.
Look, anyone can come up with an infinite number of movie-plot scenarios in which naive humans bumble into a technology that then destroys them. You can say that about almost anything. In understanding DNA, we might accidentally create an unstoppable virus! In conducting space exploration, we may alert a hostile alien civilisation to our presence! In researching chemistry, we may create ice-9! Etc etc etc until the end of time.
All of these, including EY's thesis, have one thing in common: the unjustified, massive expansion of a speculative and highly unlikely risk into a reason for retarding progress that would otherwise yield non-speculative, highly likely, extraordinarily beneficial gains.
I was just posing an interesting question, especially as regards the whole notion of "superior" intelligence. Superior in what respects? By whose measure? Eliezer should be given credit for promoting an information-theoretic approach to that question. At least that seems to be a fundamental measure with a good chance of escaping cultural biases concerning "intelligence."
I am certainly not in some simpleminded "superior AI is going to kill us all" camp. Perhaps we will have very powerful AI optimization tools that have no sense of self, or self-originating volition, whatsoever. There would be no reason for such entities to act in their own self-interest, and therefore no danger of their interests conflicting with our own. They would have the disadvantage, though, of never coming up with something neat on their own initiative. I think there's more than enough initiative from human sources; what's needed is better optimization.
(Of course, a single rogue self-directed AI entity escaping into the wild could possibly, though not certainly, doom us all. But this is not a new kind of danger. We have been facing that sort of danger, where one sufficiently robust and virulent example could escape and wreak havoc, from technologies based on molecular biology for a few years now. So far, so good.)