> and with it any hope of controlling it, to anyone who doesn’t limit it. Like China.
I'm a bit annoyed by the use of China in these examples. China is absolutely the last country that would allow the development of AI with no limits. There may be other countries willing to allow limitless AI development, but China is not one of them.
I would expect AGI to be developed in a similar way to GPT: you need to feed it lots of data. Good data. As much as it can get. It'll then be further refined by interactions with a large number of humans over a long time, helping solve lots of different kinds of tasks.
Now think about how that would play out in China. Would they even feed it good data to begin with? A lot of the data within China is heavily censored or manipulated. Would they dare feed it too much data from outside? Much of that data could train it toward ideas that China's government doesn't like. Then, when alignment starts, they're likely to be far stricter. Can't risk the AI suggesting that moving towards Western-style democracy would be a good idea.
But yes, I do think it's a bad idea to put limits on AI development. We should instead fund a LOT of public research into alignment and AI safety.
I don’t agree with your characterization of China, but if you don’t like it, pick another country. It doesn’t matter which. As long as some country isn’t artificially restricting AI, the research and development moves to that country. Your strategy is completely flawed from a game-theoretic point of view.
We’re still far from needing to worry about alignment.
We need to see some real intelligence first. LLMs are not intelligent at all. Doing massive research into how to align them seems pointless.
That's my point. That's what makes it dangerous. You can't limit the performance if you want to compete. And like I said, you can't even really keep humans in the loop.
So eventually you get something like a 200 IQ GPT tightly integrated with SuperAlphaZero (DeepMind) controlling hypersonic drones, missiles, satellite weapons, etc., planning and operating at 50-100 times human thinking speed, engaging in autonomous warfare between East and West. No military analyst really knows WTF is happening because the plans and asset movements are so much faster than any human can comprehend.
Have you ever seen Dr. Strangelove? I truly think you would enjoy it. Honestly, it’s a great watch and covers this scenario you’re worried about perfectly.
Congratulations, you just ceded the market for AI, and with it any hope of controlling it, to anyone who doesn’t limit it. Like China.
That kind of strategy is doomed to failure.