I am all for AI research and integrating more AI use into society. Currently working on tools based on GPT. I think it has incredible potential to help humans.
But at the same time, I am sure that AI does not need to have a hard takeoff to be extremely dangerous. It just needs to get a bit smarter and somewhat faster every few months. Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.
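For scale, a rough back-of-envelope with illustrative numbers of my own (assumed rates, not measurements from any particular system):

```python
# Rough speed comparison (illustrative numbers, not measurements):
# people speak/type on the order of 150-200 words per minute, while
# current LLMs can already stream on the order of 100 tokens per second.
human_wpm = 200                     # assumed human output rate
llm_tok_per_sec = 100               # assumed model decode speed
words_per_token = 0.75              # rough English average
llm_wpm = llm_tok_per_sec * words_per_token * 60   # 4500 wpm
print(f"~{llm_wpm / human_wpm:.0f}x human speed")  # ~22x, i.e. "dozens"
```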
That will be dangerous if we aren't cautious. We should start thinking now about limiting the performance of AI hardware. The challenge is that increasing the speed is such a competitive advantage, it creates a race. That is a concern when you put it into a military context.
The CEO of Palantir has already called for a Manhattan Project for superintelligent AI weapons control systems.
To see how “AI” can be dangerous, just look at how social media bubble/recommendation algorithms radicalized people, even with much cruder ML. People tend to miss that it’s not about some model starting to “think” and act all sci-fi evil. Humans applying powerful tech in irresponsible ways, ways we either don’t bother to assess due to lack of awareness or assess favorably due to a conflict of interest (money, career, etc.), is already enough to cause trouble.
Right. This has been my take since ChatGPT hit the scene.
I'm not really afraid of Skynet or AM. What worries me is that AI will accelerate the enshittification of everything as it gets baked into every product and service.
Think of the experience of trying to get any kind of help from a giant company with call centers. You have some legitimate problem or grievance. Any thinking person would agree that a resolution is in order.
But you're not allowed to access a thinking person. What you get instead is some wage slave who has to follow a script. The script doesn't have your situation in it. They don't have some button they can press to solve your problem. There's a gigantic bureaucracy between you and the person who could fix your issue. Eventually you may just give up and mark your issue down as unsolvable.
This is already a realistic situation today, but now imagine that there's a new layer in front of all this where you have to convince an LLM your problem is worth considering. Or imagine the human on the other end has to try to send your request through some AI system.
It's not going to launch all the nukes, or construct nanofactories to make a plague. It's just going to get in your way, serve you garbage, frustrate you, and make the world a slightly worse place.
Maybe in the far future it could be more of a Skynet situation, I dunno. But between then and now there will be plenty of low level annoyance for anyone having to deal with these systems.
I'm not as cynical as this post reads. I think that, just like with the whole internet, there are still opportunities for good here, for people's lives to be improved by technology. But our incentives right now sure encourage the worse scenario.
I’m less worried about enshittification or literal Skynet and more about casually, indirectly causing major destabilization with some seemingly mundane application of fancy tech whose implications the devs didn’t bother to think through, or where they were so amazed that they could do something that they never stopped to ask whether they should.
Just look up any serious, non-quacky overview of how it works. If that’s excessive, just consider the empirical fact that we created it (which is also why we know exactly how it works); that might be enough.
If you really think so, I wouldn’t sleep at night if I were you. Thankfully, we actually understand fairly well how it works (cf. the “we’ve built it” part). Not knowing exactly how any particular output is produced is simply what happens when you build things that behave non-deterministically. Perhaps you give RNGs and bugs an undeserved aura of mystery.
We understand how Transformers "learn", i.e., how the mechanism of their training operates. However, except for the most basic cases, we don't understand at all how Transformers use the skills they've acquired and can demonstrate. See the field of interpretability for early attempts to change this.
For example, if you train a large network on lots of languages and then fine-tune it to follow instructions in English, it will, on its own, also follow instructions in every other language it was trained on. Nobody knows why this is the case.
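To make that concrete, here's a minimal probe of the effect. The setup is my own illustration: it assumes the HuggingFace transformers library and the bigscience/bloomz-560m checkpoint (whose instruction fine-tuning data consisted almost entirely of English prompts), plus a German prompt the fine-tuning never covered:

```python
# A rough probe of the cross-lingual transfer described above.
# Assumptions (mine, not the commenter's): HuggingFace `transformers`
# and the `bigscience/bloomz-560m` checkpoint, whose instruction
# fine-tuning used almost exclusively English prompts.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "bigscience/bloomz-560m"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# An instruction in German, a language the fine-tuning prompts didn't cover:
prompt = "Übersetze ins Englische: Das Wetter ist heute schön."
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)

# Strip the echoed prompt; the model follows the German instruction anyway.
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```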
> Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.
We've already had this for decades. You're just describing computers.
If we give these systems unrestricted access to infrastructure/resources, and something bad happens, that's not the system's fault. It's our fault.
I am not a doomer, but based on the current state of AI, I can't say I'm very optimistic that we'll get this right. We actually do know how to solve this problem, but there is so much magical thinking and grift in this space that I don't think our prior experiences matter.
I think we’re missing the real “danger”: trusting and relying on AI too much, adopting a “good enough” attitude, and deploying AI to handle scale while letting many things fall through the cracks.
Much like outsourcing and stripping customer service to the bare minimum. For many products and services it essentially doesn’t exist: it handles only the most common cases, it's a pain to use, and it takes a long time to reach a human, if you ever do. Now take that further and apply it to everything, replacing not just human labor but bespoke software as well (which is what all software has been up to this point).
> It just needs to get a bit smarter and somewhat faster every few months. Within less than a decade we will have systems that output "thoughts" and actions at least dozens of times faster than humans.
I don't think this will happen, because the cost of improvement with current methods grows exponentially and we're already at capacity with hardware.
Since GPT-4 I've seen it regress in attempts to commercialise it, and no alternative I'm aware of is even close. It also depends on the timescale: what timescale are we talking about?
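For a sense of the cost curve, here's a back-of-envelope sketch using the standard C ≈ 6·N·D approximation for training FLOPs; the model sizes and token counts below are illustrative assumptions, not published figures for any real model:

```python
# Back-of-envelope training cost with the common approximation
# C ≈ 6 * N * D FLOPs (N = parameters, D = training tokens).
# The sizes below are illustrative, not figures for any real model.
def train_flops(params: float, tokens: float) -> float:
    return 6 * params * tokens

small = train_flops(70e9, 1.4e12)   # ~5.9e23 FLOPs
big = train_flops(700e9, 14e12)     # 10x params and 10x data -> 100x compute
print(f"{small:.2e} -> {big:.2e} FLOPs ({big / small:.0f}x)")
```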
> and with it any hope of controlling it, to anyone who doesn’t limit it. Like China.
I'm a bit annoyed by the use of China in these examples. China is absolutely the last country that would allow development of AIs with no limits. There may be other countries willing to allow limitless AI development, but China is not it.
I would expect AGI to be developed in a similar way to GPT: you need to feed it lots of data. Good data. As much as it can get. It'll then be further refined by interactions with a large number of humans over a long time, helping solve lots of different kinds of tasks.
Now think about how that would go in China. Would they even feed it good data to begin with? A lot of the data within China is heavily censored or manipulated. Would they dare feed it too much data from outside? Much of that data could train it on ideas that China's government doesn't like. Then, when alignment starts, they'll likely be far stricter. Can't risk the AI suggesting that moving towards western-style democracy would be a good idea.
But yes, I do think it's a bad idea to put limits on AI development. We should instead fund a LOT of public research into alignment and AI safety.
I don’t agree with your characterization of China, but if you don’t like it, pick another country. It doesn’t matter: as long as some country isn’t artificially restricting AI, the research and development moves to that country. Your strategy is completely flawed from a game-theoretic point of view.
We’re still far from needing to worry about alignment.
We need to see some real intelligence first. LLMs are not intelligent at all. Doing massive research into how to align them seems pointless.
That's my point. That's what makes it dangerous. You can't limit the performance if you want to compete. And like I said, you can't even really keep humans in the loop.
So eventually you get something like a 200-IQ GPT tightly integrated with SuperAlphaZero (DeepMind) controlling hypersonic drones, missiles, satellite weapons, etc., planning and operating at 50-100 times human thinking speed, engaging in autonomous warfare between East and West. No military analyst really knows WTF is happening, because the plans and asset movements unfold far faster than any human can comprehend.
Have you ever seen Dr. Strangelove? I truly think you would enjoy it. Honestly, it’s a great watch, and it covers the scenario you’re worried about perfectly.